My erstwhile Microsoft colleague and parallelism guru Stephen Toub just did what I did not do last time: he applied the same treatment I gave to the maybe monad and sequence monad to the task comonad. Check out his awesome blog article!

OK, *for reals this time*, that’s it for the monad series. **Next time on FAIC**: ever wonder what the langversion switch on the C# compiler is for? Me neither. But next time you’ll find out anyway.


Thanks to your series I finally feel like I “get” monads, which is something I’ve never managed despite trying to follow numerous other people’s explanations in the past. So thank you for explaining this in a way that makes sense to a C# programmer like myself!

I suspect that my understanding has always been hampered by the fact that going in, I did think I knew one thing about monads, which was that monads are the way that I/O can be done in side-effect free “pure” languages like Haskell. Obviously, after reading and following your series I do understand that that’s not what monads “are”.

But Haskell *does* use monads for I/O, right? What I am missing is how there’s any connection at all between monads as explained in these posts and side-effect free IO. How can the idea of “amplifying” a type to give it extra characteristics translate into being a way to do I/O without side effects?

The key bit is not the amplification of types — though being able to make types like “IO of char” the same way that you’d make “sequence of char” is a part of the mechanism, the amplification is not the important bit for the semantics of IO. The key bit is that the monad represents the composition of operations that have to happen in a particular order; the IO monad is essentially the continuation monad — the “I know what needs to happen next” monad.
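To make "the monad that knows what needs to happen next" concrete, here is a minimal sketch of a continuation-style monad. This is purely illustrative, not Haskell's actual IO implementation: `Cont`, `unit`, `bind`, and `write` are all names I've made up for the sketch.

```typescript
// A continuation "monad": a computation that knows what to do next.
// Illustrative sketch only -- not Haskell's actual IO machinery.
type Cont<T> = (next: (value: T) => void) => void;

// unit: wrap a plain value in a computation.
const unit = <T>(value: T): Cont<T> =>
  next => next(value);

// bind: run the first computation, then feed its result to the function
// that produces the next one -- sequencing is built into the structure.
const bind = <T, U>(m: Cont<T>, f: (value: T) => Cont<U>): Cont<U> =>
  next => m(value => f(value)(next));

// A side-effecting step, for demonstration purposes.
const log: string[] = [];
const write = (s: string): Cont<void> =>
  next => { log.push(s); next(undefined); };

// The chain of binds fixes the order of the writes.
bind(write("first"), () => write("second"))(() => {});
// log is now ["first", "second"]
```

The point is that the ordering is not incidental: it is baked into the shape of the composed value, which is exactly the property IO needs.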

The thing about Haskell is that because it is a lazily-evaluated language where results are only computed as needed and then cached, any “result” that has a side effect will be computed in an arbitrary order and possibly only computed once. That’s great if you’re analyzing a string but not so great if you need to write the same bytes to a file twice. One of Haskell’s strengths is that it has tremendous freedom to optimize computations, but that assumes that those optimizations are valid; in a world with side effects you are greatly limited in what optimizations you can perform.

The I/O monad captures the idea that “the results of these computations need to be written to this file in this order, exactly once”, in a way that doesn’t compromise the functional nature of the language.
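One way to picture that, in a hedged sketch: treat an IO action as a pure *description* of an effect, so that building and composing descriptions has no side effects at all, and only an explicit run step performs them. The `IO` class, `writeLine`, and the pretend "file" below are all inventions for illustration, not Haskell's runtime.

```typescript
// Sketch: an IO action as a pure description of an effect.
// Composing descriptions does nothing; only run() performs effects,
// in exactly the order the binds dictate.
class IO<T> {
  constructor(public readonly run: () => T) {}

  static unit<T>(value: T): IO<T> {
    return new IO(() => value);
  }

  bind<U>(f: (value: T) => IO<U>): IO<U> {
    // The effect of `this` happens first, then the effect f produces.
    return new IO(() => f(this.run()).run());
  }
}

// A pretend "file": appending to it is the side effect we want ordered.
const file: string[] = [];
const writeLine = (s: string): IO<void> => new IO(() => { file.push(s); });

// Composing produces a description; nothing has been written yet.
const program = writeLine("hello").bind(() => writeLine("hello"));

// Running it writes the same line twice, in order.
program.run();
// file is now ["hello", "hello"]
```

Because `program` is just a value until it is run, the language remains free to build, pass around, and optimize such values without ever observing a side effect.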

And thanks for the kind words; I’m glad you enjoyed the series. I enjoyed writing it!

Note to readers: IObservable is the continuation monad in .NET.

http://social.msdn.microsoft.com/Forums/en-US/rx/thread/03f4730f-fe11-4ccf-a799-025fa6e73ac2/

BTW, I enjoyed reading this series too. Though I was really looking forward to your take on category theory – maybe we’ll get zero or more posts on that in the future? (Like IObservable 😉)

Yes, it was a great series. I learned a few new things, even though I already understood what a Monad was for the most part. One thing I do find confusing about how Monads are often explained is the unit/bind formulation. For me, at least, the unit/map/flatten formulation is simpler to grasp, but I rarely see Monads explained that way.

I think the monad laws are easier to understand with unit/map/flatten; I’m not sure the concept of a monad itself is, though. YMMV.

Also of course most actual implementations provide unit, map, flatten and bind anyway.
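The two formulations are interchangeable, which a short sketch makes visible. Using arrays as the monad (the sequence monad from the series), `bind` is exactly `flatten` composed with `map`; the names here are just local helpers for illustration.

```typescript
// Sketch: with arrays as the monad, bind(xs, f) = flatten(map(xs, f)).
const map = <T, U>(xs: T[], f: (x: T) => U): U[] => xs.map(f);
const flatten = <T>(xss: T[][]): T[] => xss.flat();
const bind = <T, U>(xs: T[], f: (x: T) => U[]): U[] => flatten(map(xs, f));

const double = (x: number): number[] => [x, x];

// map gives [[1, 1], [2, 2]]; flatten collapses one level of nesting.
// bind([1, 2], double) is [1, 1, 2, 2], same as flatten(map([1, 2], double)).
```

So a library that exposes either trio can always derive the other: `bind` from `map` plus `flatten` as above, or `map` and `flatten` from `bind` plus `unit`.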
