My erstwhile Microsoft colleague and parallelism guru Stephen Toub just did what I did not do last time: he applied the same treatment I gave to the maybe monad and sequence monad to the task comonad. Check out his awesome blog article!
OK, for reals this time, that’s it for the monad series. Next time on FAIC: ever wonder what the langversion switch on the C# compiler is for? Me neither. But next time you’ll find out anyway.
Holy goodness, this series got a lot longer than I expected. I was hoping to get to some actual category theory, but let’s just talk a bit about LINQ and then wrap up. Maybe I’ll do a series on the basics of category theory in the future.
I’ve said several times that SelectMany is the bind operation on the sequence monad, but of course there are several overloads of the SelectMany extension method of IEnumerable<T>. The one that is bind is as we previously discussed: (Error checking removed for clarity.)
public static IEnumerable<R> SelectMany<A, R>(
    this IEnumerable<A> items,
    Func<A, IEnumerable<R>> function)
{
    foreach (A outer in items)
        foreach (R inner in function(outer))
            yield return inner;
}
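To see the flattening that this bind performs, here is a small usage sketch (the data and the projection are my own invented example, not from the original post); each item is mapped to a little sequence, and the results are concatenated into one:

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        int[] items = { 1, 2, 3 };
        // Bind: map each item to a sequence, then flatten the sequences.
        var result = items.SelectMany(n => new[] { n, n * 10 });
        Console.WriteLine(string.Join(",", result)); // 1,10,2,20,3,30
    }
}
```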
Now you might think that when you say

from outer in items
from inner in function(outer)
select inner

this translates directly into a call to the SelectMany overload above. This is not what actually happens.
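For the record, the C# specification's query translation rules turn a query with two from clauses and a trailing select into a call to the three-parameter SelectMany overload, which takes an additional result selector that pairs each outer value with its inner values. A sketch of the equivalence (my paraphrase of the spec's translation, with invented data):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static IEnumerable<int> Function(int n) => new[] { n, n + 100 };

    static void Main()
    {
        int[] items = { 1, 2 };

        // Query syntax with two from clauses:
        var q1 = from outer in items
                 from inner in Function(outer)
                 select inner;

        // What the compiler actually generates: the three-argument
        // SelectMany overload, with a result selector for the select clause.
        var q2 = items.SelectMany(
            outer => Function(outer),
            (outer, inner) => inner);

        Console.WriteLine(string.Join(",", q1)); // 1,101,2,102
        Console.WriteLine(string.Join(",", q2)); // 1,101,2,102
    }
}
```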
I had intended to spend this episode on the history of the relationship between SelectMany and the LINQ query expression syntax, but before I do that I want to briefly address a point raised in the comments of the previous episode.
We know that the key bits of the monad pattern are (1) the ability to construct a monad value that “wraps” an underlying value, and (2) the ability to apply a function from A to M<R> to a value of type M<A>. The first operation is traditionally called unit. The second operation is traditionally called bind, but is called SelectMany in LINQ.
We saw how you could build Select out of only SelectMany and a helper method that effectively just calls unit; and indeed, you can build a Select for any monad because all you need is the unit and the bind. This operation is traditionally called map.
I also showed how you could build Where (an inefficient Where, to be sure) out of SelectMany and a helper method, but reader “Medo” reasonably pointed out that there is no general monadic where because my implementation actually required three operations: unit, bind, and the ability to make an empty sequence. This criticism is entirely justified; though every monad has an analog of the sequence monad’s SelectMany, not every monad has an analog of the empty sequence.
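To make the three operations explicit, here is a sketch of that inefficient Where (my reconstruction, not the original post's code): each item binds to either a one-element sequence or an empty one, so unit, bind, and the empty sequence are all required:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class FilterFromBind
{
    // "unit" for the sequence monad: wrap a single value in a sequence.
    public static IEnumerable<T> Unit<T>(T item)
    {
        yield return item;
    }

    // An inefficient Where built from unit, bind, and the empty sequence:
    // items that pass become one-element sequences; the rest become empty.
    public static IEnumerable<T> MyWhere<T>(
        this IEnumerable<T> items, Func<T, bool> predicate)
    {
        return items.SelectMany(item =>
            predicate(item) ? Unit(item) : Enumerable.Empty<T>());
    }
}

class Program
{
    static void Main()
    {
        var evens = new[] { 1, 2, 3, 4 }.MyWhere(n => n % 2 == 0);
        Console.WriteLine(string.Join(",", evens)); // 2,4
    }
}
```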
Last time on FAIC I gave some examples of some simple “associate extra data with a value” monads. Though those are useful, the real power of the monad pattern is when it is used to produce objects that represent workflows on data. The best example of that I can give you in C# is the sequence monad, IEnumerable<T>. The whole point of LINQ, in fact, is to make it easier to construct the monadic workflows that are better known as “queries”.
Last time in this series I discussed the standard terminology used for the monad pattern: that the simple construction method is traditionally called “unit”, and the function application method is traditionally called “bind”. I also pointed out that the two sub-patterns you see most frequently in implementations of the monad pattern are first, association of some state with a value, and second, construction of workflows that describe the sequencing of units of work to be performed in the future. Today we’ll look at that first kind of monad in some more detail.
Last time on FAIC we managed to finally state the rules of the monad pattern; of course, we’ve known for some time that the key parts of the pattern are the constructor helper, which we’ve been calling CreateSimpleM<T>, and the function application helper, which we’ve been calling ApplySpecialFunction<A, R>. Needless to say, these are not the standard names for these functions. [1. If they were, then I’d have been able to learn what monads were a whole lot faster. Hopefully this helped you too.]
Way back in 1992 I was studying linear algebra at Waterloo. I just could not seem to wrap my head around dual spaces. Then one night I went to sleep after studying algebra for several hours, and I dreamed about dual spaces. When I awoke I had a clear and intuitive understanding of the concept. Apparently my brain had decided to sort it all out in my sleep. It was a bizarre experience that never happened again.[1. Unfortunately I no longer have an intuitive understanding of dual spaces, having not used anything more than the most basic linear algebra for two decades. I’m sure I could pick it up again if I needed to, but I suspect that the feeling of sudden clarity is not going to be regained.] History is full of examples of people who had sudden insights that solved tricky problems. The tragically short-lived mathematician Srinivasa Ramanujan claimed that he dreamed of vast scrolls of mathematics, most of which turned out to be both correct and strikingly original.
There is of course a difficulty with waiting for a solution to appear in a dream: you never know when that’s going to happen. Since insight is unreliable, we’ve developed a far more reliable technique for solving tough problems: recursive divide and conquer. We solve problems the same way that a recursive method solves problems: