Sure thing, but what is a monad anyway?
It's a monoid in the category of endofunctors. Obviously.
In practical terms, it's most commonly a code pattern where any function that interacts with something outside your code (a database, the filesystem, an external API) has to be "given permission", so all the external interactions are accounted for. You have to pass around something like a permission token to allow a function to interact with anything external. Kind of like dependency injection on steroids.
This allows the compiler to enhance the code in ways it otherwise couldn't. It also prevents many kinds of bugs. However, it's quite a bit of extra hassle, so it's frustrating if you're not used to it. The way you pass around the "permission" is unusual, so it gives a lot of people a headache at first.
This is also used for internal "permissions", like grabbing the first element of an array. You only get permission if the array has at least one thing inside; if it's empty, you can't get permission. As such, there's a lot of code around checking for permission. Languages like Haskell or Unison have a lot of tricks that make it much easier than you'd think, but you still have to account for it. That's where all the weird functions in Haskell like fmap and >>= come from: they're helpers to make it easier to pass around those "permissions".
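The "permission to grab the first element" idea can be sketched in plain Python with optional values. The helper names here (safe_first, fmap, bind) are mine, loosely mirroring Haskell's fmap and >>=, not any real library:

```python
from typing import Callable, Optional, TypeVar

A = TypeVar("A")
B = TypeVar("B")

def safe_first(xs: list) -> Optional[A]:
    """You only get 'permission' (a value) if the list is non-empty."""
    return xs[0] if xs else None

def fmap(f: Callable[[A], B], m: Optional[A]) -> Optional[B]:
    """Like Haskell's fmap: apply f inside the 'permission', or do nothing."""
    return None if m is None else f(m)

def bind(m: Optional[A], f: Callable[[A], Optional[B]]) -> Optional[B]:
    """Like Haskell's >>=: chain on a step that can itself fail."""
    return None if m is None else f(m)

fmap(lambda x: x * 2, safe_first([3, 4]))   # 6 -- permission granted
fmap(lambda x: x * 2, safe_first([]))       # None -- no first element, step skipped
```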
What's the point, you ask? There are all kinds of powerful performance optimizations available when you know a certain block of code never touches the outside world. You can split execution between different CPU cores, etc. This is still in its infancy, but new languages like Unison are breaking incredible ground here. As this is developed further, it will be much easier to build software that uses multiple cores, or even multiple machines in distributed swarms, without having to build microservice hell. It'll all just be one program, but it runs across as many machines as needed. Monads are just one of the first features that needed to exist to allow these later features.
There's a whole math background to it, but I'm much more a "get things done" engineer than a "show me the original math that inspired this language feature" engineer, so I think of it more practically. Same way I explain functions as a way to group a bunch of related actions, not as an implementation of the lambda calculus. I think people who start talking about burritos and endofunctors are just hazing.
I don’t know if this is correct, but if it is, this is the best answer to this question I’ve ever seen.
Great explanation! Though I prefer to regard monads as semicolon simulators. Monads combine actions separated by semicolons together. The combination can be exceptional, logging, multi-output, or whatever.
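The "semicolon simulator" view can be made concrete: each "statement" becomes a call to a bind function, and bind decides how consecutive steps get glued together (short-circuiting here; accumulating a log or multiple outputs in other monads). A rough Python sketch — bind and parse_int are my own illustrative helpers:

```python
def bind(m, f):
    """The 'semicolon' for optional values: run the next step only if the previous one produced a value."""
    return None if m is None else f(m)

def parse_int(s: str):
    """Returns an int, or None if the string isn't numeric."""
    return int(s) if s.lstrip("-").isdigit() else None

# Imperative pseudocode:        Monadic version:
#   x = parse_int("10");        bind(parse_int("10"), lambda x:
#   y = parse_int("4");         bind(parse_int("4"),  lambda y:
#   return x // y;                   x // y))
result = bind(parse_int("10"), lambda x:
         bind(parse_int("4"),  lambda y:
              x // y))                       # 2
oops = bind(parse_int("10"), lambda x:
       bind(parse_int("nope"), lambda y:
            x // y))                         # None -- the chain stops at the bad parse
```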
That's a good rundown of the "why". The thing is, there are way more things that are monads than things that have to be looked at as monads. AFAIK it only comes up directly when you're using something like IO or State, where the monad functions are irreversible.
From the compiler end, are there optimisations that make use of the monadic structure of, say, a list?
It's just a monoid object in a category of endofunctors, no biggie
Only monad I know is xmonad. My favourite x11 window manager.
Whatever Haskell programmers decide to call a monad today. It's wandered pretty far from the original mathematical definition, despite insistences to the contrary.
(Technically, the requirement is to implement a few functions)
A reproductive organ
It's a container with certain behaviors and guarantees that make it easy and reliable to manipulate and compose. A practical example is a generic List that behaves like:
- List[1, 2, 3], i.e. ("new", "unit", "wrap"), to create one containing obj(s)
- map(func), to transform objs inside, List[A] -> List[B]
- first(), i.e. ("unwrap", "value"), to get back the obj
- flat_map(func), i.e. ("bind"), to un-nest one level when func(a) itself produces another List, e.g. [3, 4].flat_map(get_divisors) == flatten_once([[1, 3], [1, 2, 4]]) == [1, 3, 1, 2, 4]
Consider the code to do these things using for loops -- the "business logic" func() would be embedded and interlaced with flow control.
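That flat_map example, written out in plain Python (helper names follow the comment's own pseudocode):

```python
def get_divisors(n: int) -> list:
    """All divisors of n, e.g. get_divisors(4) == [1, 2, 4]."""
    return [d for d in range(1, n + 1) if n % d == 0]

def flat_map(xs: list, func) -> list:
    """'bind' for lists: apply func to each obj, then un-nest one level."""
    return [y for x in xs for y in func(x)]

flat_map([3, 4], get_divisors)  # [1, 3, 1, 2, 4]
```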
The same is true of Maybe, a monad to represent something or nothing, i.e. a "list" of at most one, i.e. a way to avoid "null".
Consider how quickly things get messy when there are multiple functions and multiple edge cases like empty lists or "null"s to deal with. In those cases, monads like List and Maybe really help clean things up.
IMO the composability really can't be overstated. "Composing" ten for loops via interlacing and if checks and nesting sounds like a nightmare, whereas a few LazyList and Maybe monads will be much cleaner.
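A small taste of that contrast in Python, using lazy generators as a stand-in for LazyList (the function names are made up for illustration):

```python
# Nested-loop style: flow control interlaced with the logic.
def first_even_square_loop(rows: list):
    for row in rows:
        for x in row:
            if x % 2 == 0:
                return x * x
    return None

# Pipeline style: each stage is a separate, composable step, evaluated lazily.
def first_even_square(rows: list):
    flat = (x for row in rows for x in row)   # flat_map
    evens = (x for x in flat if x % 2 == 0)   # filter
    squares = (x * x for x in evens)          # map
    return next(squares, None)                # first(), or None if nothing matched
```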
Also, the distinction monads make between what's "inside" and what's "outside" makes them useful for representing and compartmentalizing scope and lifetimes, as in monads like IO and Async.
It's a burrito
Not Mossad.
Can't spell "functional" without "fun"!
N is for No Surviiiivors, here in the deep blue sea!
Functional programmers still pretending side effects and real-world applications don’t exist.
As a senior engineer writing Haskell professionally, this just isn't really true. We just push side effects to the boundaries of the system and do as much logic and computation in pure functions.
It's basically just about minimizing external touch points and making your code easier to test and reason about. Which, incidentally, is also good design in non-FP languages. FP programmers are just generally more principled about it.
Reasoning about memory use, for example, is difficult with FP.
Using pure functions is a good idea for non FP languages as well.
I've never had the chance to use a functional language in my work, but I have tried to use principles like these.
Once I had a particularly badly written Python codebase. It had all kinds of duplicated logic and data all over the place. I was asked to add an algorithm to it. So I just found the point where my algorithm had to go, figured out what input data I needed and what output data I had to return, and then wrote all the algorithm's logic in one clean, side effect-free module. All the complicated processing and logic was performed internally without side effects, and it did not have to interact at all with the larger codebase as a whole. It made understanding what I had to do much easier and relieved the burden of having to know what was going on outside.
These are the things functional languages teach you to do: to define boundaries, and do sane things inside those boundaries. Everything else that's going on outside is someone else's problem.
I'm not saying that functional programming is the only way you can learn something like this, but what made it click for me is understanding how Haskell provides the IO monad, but recommends that you keep that functionality at as high of a level as possible while keeping the lower level internals pure and functional.
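That "pure core, thin I/O shell" shape works in any language. A Python sketch (summarize and main are hypothetical names, not from the codebase discussed above):

```python
# Pure core: no file access, no printing -- trivial to test and reason about.
def summarize(lines: list) -> str:
    words = sum(len(line.split()) for line in lines)
    return f"{len(lines)} lines, {words} words"

# Impure shell: all the I/O lives here, at the boundary of the program.
def main(path: str) -> None:
    with open(path) as f:
        print(summarize(f.read().splitlines()))
```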
I'd love to work on a codebase like that
It heavily depends on the application, right? Haskell is life for algorithmically generating or analysing data, but I'm not really convinced by the ways available in it to do interaction with users or outside systems. After a while it pretty much feels like you're doing imperative code again, just in the form of monads. Which is actually worse from a locality-of-reference perspective.
This is why I make sure that nothing I code functions in any way at all
functional programmers when they look at their code 2 years later
functionalprogrammers when they look at their code 2 years later
FTFY
Yeah, no side-effects seems like it could only improve readability.
Okay but partial application of curried functions is a really cool way of doing dependency injection and you haven't experienced bliss until you create a perfect module of functions that are exactly that
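For anyone who hasn't seen it: partial application as dependency injection, sketched in Python with functools.partial (real_db_fetch and the fake are hypothetical):

```python
from functools import partial

# The "dependency" (say, a database fetch) is just the function's first argument.
def get_user(fetch, user_id):
    return fetch(f"users/{user_id}")

# In production you'd partially apply the real fetcher once, up front:
#   get_user_prod = partial(get_user, real_db_fetch)
# In tests, inject a fake the same way -- no mocking framework needed.
fake_fetch = lambda key: {"key": key, "name": "Ada"}
get_user_test = partial(get_user, fake_fetch)

get_user_test(42)  # {'key': 'users/42', 'name': 'Ada'}
```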
Also languages with macros and custom operators (where operators are just functions with special syntactic sugar) are so much cooler than those without (Clojure and elixir my beloved)
Additionally a system where illegal states are made impossible is soooo nice to work in. It's like a cheat code
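"Illegal states are impossible" usually means modeling state as a sum type, so each state carries only the data that makes sense for it. A Python sketch with made-up connection states (you can't have connected=True with socket=None, because Connected requires a session and Disconnected can't hold one):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Disconnected:
    pass

@dataclass
class Connecting:
    attempt: int

@dataclass
class Connected:
    session_id: str

ConnState = Union[Disconnected, Connecting, Connected]

def describe(state: ConnState) -> str:
    # Each branch can only see the fields that exist in that state.
    if isinstance(state, Connected):
        return f"connected ({state.session_id})"
    if isinstance(state, Connecting):
        return f"connecting, attempt {state.attempt}"
    return "offline"
```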
my_balls |> ligma() |> gotem(laugh=TRUE)
Somebody who worked here before tried to do functional in C# by passing delegates into methods instead of injecting interfaces into constructors, across hundreds of repositories. This is why clever people should not be allowed to write code.
Do curried functions come with grated coconut and a lime wedge?
OpenSCAD for life
Very cool but if I want to bevel things it's a nightmare =/
Thankfully never got sucked into that void. I had a coworker who really evangelized functional programming. I wonder what he's up to now.
We have a principal engineer on our team that is pushing this sort of style, hard.
It's essentially obfuscation: no one else on the team can really review, never mind understand and maintain, what they write. It's all just functional abstractions on top of abstractions; every little thing is a function, even property/field access is extracted out to a function instead of just... using dot notation like a normal person.
I dabbled in some Haskell a few years ago but quit trying when I got to the hard parts like monads and functors and stuff. All those mathematical concepts were a little too abstract for me.
But what I did bring with me from the experience changed my way of programming forever. Especially function composition and tacit (point-free) style programming. It makes writing code so much faster and simpler and it's easier to read and maintain.
You can utilize some functional programming concepts without being too hardcore with it and get the best of both worlds in the process. 👍