Monads, part eleven

I had intended to spend this episode on the history of the relationship between SelectMany and the LINQ query expression syntax, but before I do that I want to briefly address a point raised in the comments of the previous episode.

We know that the key bits of the monad pattern are (1) the ability to construct a monad value that “wraps” an underlying value, and (2) the ability to apply a function from A to M<R> to a value of type M<A>. The first operation is traditionally called unit. The second operation is traditionally called bind, but is called SelectMany in LINQ.

We saw how you could build Select out of only SelectMany and a helper method that effectively just calls unit; indeed, you can build a Select for any monad, because unit and bind are all you need. The resulting map operation is traditionally called fmap.
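
For the sequence monad, that construction looks something like the following sketch. The helper names Unit and MySelect are mine, not LINQ's, and the usual System, System.Collections.Generic and System.Linq namespaces are assumed:

static IEnumerable<T> Unit<T>(T item)
{
  // The sequence monad's "unit": wrap a single value in a one-element sequence.
  yield return item;
}

static IEnumerable<R> MySelect<A, R>(IEnumerable<A> items, Func<A, R> projection)
{
  // Select built out of nothing but bind (SelectMany) and unit; this is fmap.
  return items.SelectMany(a => Unit(projection(a)));
}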

I also showed how you could build Where (an inefficient Where, to be sure) out of SelectMany and a helper method, but reader “Medo” reasonably pointed out that there is no general monadic Where, because my implementation actually required three operations: unit, bind, and a way to make an empty sequence.

This criticism is entirely justified; though every monad has an analog of the sequence monad’s Select and SelectMany, not every monad has an analog of Where.
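
To make Medo's point concrete, here is a sketch of that inefficient Where for the sequence monad, reusing the Unit helper from the sketch above; again the names are mine, and Empty is precisely the third operation that a general monad need not supply:

static IEnumerable<T> Empty<T>()
{
  // The extra operation: a sequence containing no values at all.
  yield break;
}

static IEnumerable<T> MyWhere<T>(IEnumerable<T> items, Func<T, bool> predicate)
{
  // Bind a function that either keeps the item (via Unit) or discards it (via Empty).
  return items.SelectMany(t => predicate(t) ? Unit(t) : Empty<T>());
}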

Continue reading

Monads, part ten

Last time on FAIC I gave some examples of some simple “associate extra data with a value” monads. Though those are useful, the real power of the monad pattern is when it is used to produce objects that represent workflows on data. The best example of that I can give you in C# is the sequence monad, IEnumerable<T>. The whole point of LINQ, in fact, is to make it easier to construct the monadic workflows that are better known as “queries”.
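
As one concrete illustration of that claim (assuming the usual System.Linq namespace), a query expression with two from clauses is, by specification, just sugar for a call to the sequence monad's bind:

var pairs = from x in Enumerable.Range(1, 3)
            from y in Enumerable.Range(1, x)
            select x * 10 + y;

// The compiler turns the query above into exactly this call:
var samePairs = Enumerable.Range(1, 3)
  .SelectMany(x => Enumerable.Range(1, x), (x, y) => x * 10 + y);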

Continue reading

Monads, part nine

Last time in this series I discussed the standard terminology used for the monad pattern: that the simple construction method is traditionally called “unit”, and the function application method is traditionally called “bind”. I also pointed out that the two sub-patterns you see most frequently in implementations of the monad pattern are first, association of some state with a value, and second, construction of workflows that describe the sequencing of units of work to be performed in the future. Today we’ll look at that first kind of monad in some more detail.

Continue reading

Keeping it sharp

Though we’ve exchanged a few emails over the years, I’ve only met Joel Spolsky once, back in 2005, and since he was surrounded by a protective layer of adoring fanboys we didn’t get to chat much. So it was a genuine pleasure to finally spend the better part of an hour chatting with Joel, David, Jay and Alex – them in New York, me in Seattle; Skype is a wonderful thing. If you like long, rambling conversations full of obscure facts about old programming languages, you could do worse than this podcast. The link to the podcast post is here.

A few supplemental links for some of the topics we cover:

Continue reading

Monads, part eight

Last time on FAIC we managed to finally state the rules of the monad pattern; of course, we’ve known for some time that the key parts of the pattern are the constructor helper, which we’ve been calling CreateSimpleM<T>, and the function application helper, which we’ve been calling ApplySpecialFunction<A, R>. Needless to say, these are not the standard names for these functions. [1. If they were, then I’d have been able to learn what monads were a whole lot faster. Hopefully this helped you too.]

Continue reading

Monads, part seven

Way back in 1992 I was studying linear algebra at Waterloo. I just could not seem to wrap my head around dual spaces. Then one night I went to sleep after studying algebra for several hours, and I dreamed about dual spaces. When I awoke I had a clear and intuitive understanding of the concept. Apparently my brain had decided to sort it all out in my sleep. It was a bizarre experience that never happened again.[1. Unfortunately I no longer have an intuitive understanding of dual spaces, having not used anything more than the most basic linear algebra for two decades. I’m sure I could pick it up again if I needed to, but I suspect that the feeling of sudden clarity is not going to be regained.] History is full of examples of people who had sudden insights that solved tricky problems. The tragically short-lived mathematician Srinivasa Ramanujan claimed that he dreamed of vast scrolls of mathematics, most of which turned out to be both correct and strikingly original.

There is of course a difficulty with waiting for a solution to appear in a dream: you never know when that’s going to happen. Since insight is unreliable, we’ve developed a far more reliable technique for solving tough problems: recursive divide and conquer. We solve problems the same way that a recursive method solves problems:

Continue reading

Monads, part six

Last time in this series we finally worked out the actual rules for the monad pattern. The pattern in C# is that a monad is a generic type M<T> that “amplifies” the power of a type T. There is always a way to construct an M<T> from a value of type T, which we characterized as the existence of a helper method:

static M<T> CreateSimpleM<T>(T t)

And if you have a function that takes any type A and produces an M<R> then there is a way to apply that function to an instance of M<A> in a way that still produces an M<R>. We characterized this as the existence of a helper method:

static M<R> ApplySpecialFunction<A, R>(
  M<A> wrapped,
  Func<A, M<R>> function)

Is that it? Not quite. In order to actually be a valid implementation of the monad pattern, these two helper methods need to have a few additional restrictions placed on them, to ensure that they are well-behaved. Specifically: the construction helper function can be thought of as “wrapping up” a value, and the application helper function knows how to “unwrap” a value; it seems reasonable that we require that wrapping and unwrapping operations preserve the value.
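
As a concrete sketch of what those helpers and those restrictions look like, here is one possible pair of bodies for the sequence monad; the signatures follow the post, but the bodies are mine rather than quotes from the earlier episodes:

static IEnumerable<T> CreateSimpleM<T>(T t)
{
  // Wrap a single value in a one-element sequence.
  yield return t;
}

static IEnumerable<R> ApplySpecialFunction<A, R>(
  IEnumerable<A> wrapped,
  Func<A, IEnumerable<R>> function)
{
  // Unwrap each value, apply the function, and flatten the results.
  foreach (A item in wrapped)
    foreach (R result in function(item))
      yield return result;
}

The “preserve the value” restrictions then say, for example, that ApplySpecialFunction(CreateSimpleM(123), function) must produce exactly the same results as function(123), and that ApplySpecialFunction(wrapped, CreateSimpleM) must produce exactly the same results as wrapped.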

Continue reading

Whidbey Island and bagel mathematics (rerun)

Today another in my fun-for-a-Friday series of reruns from the classic days of FAIC. This one was originally published in December of 2004. Enjoy!


I was highly amused to read on Raymond Chen’s blog the other day that mathematicians are hard at work solving the problem of how to most evenly distribute poppyseeds over a bagel. The reason I was highly amused was not just the whimsical description of what is actually a quite practical and difficult problem.

And yes, believe it or not, it is a practical problem; if you can figure out how to distribute points evenly around an arbitrary shape then you can use that information to develop more efficient computer algorithms to solve complex calculus problems that come up in physics all the time. There are also applications in computer graphics, I’d imagine; 3-D modeling frequently requires generating well-behaved finite approximations of a surface.

But I digress. The other reason I was highly amused is that Whidbey Island is the longest island in the United States.

Continue reading

Monads, part five

We are closing in on the actual requirements of the “monad pattern”. So far we’ve seen that for a monadic type M<T>, there must be a simple way to “wrap up” any value of T into an M<T>. And last time we saw that any function that takes an A and returns an R can be applied to an M<A> to produce an M<R> such that both the action of the function and the “amplification” of the monad are preserved, which is pretty cool. It looks like we’re done; what more could you possibly want?

Well, let me throw a spanner into the works here and we’ll see what grinds to a halt. I said that you can take any one-parameter function that has any non-void return type whatsoever, and apply that function to a monad to produce an M<R> for the return type. Any return type whatsoever, eh? OK then. Suppose we have this function of one parameter:

static Nullable<double> SafeLog(int x)
{
  if (x > 0)
    return new Nullable<double>(Math.Log(x));
  else
    return new Nullable<double>();
}

(Again, for expository purposes I am writing the code far less concisely than I normally would, and of course we are ignoring the fact that double already has a “null” value, NaN.)

Seems like a pretty reasonable function of one parameter. This means that we should be able to apply that function to a Nullable<int> and get back out…

oh dear.
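
To spell out the trouble with types, here is a sketch that uses a hypothetical Maybe<T> class in place of Nullable<T> (which the compiler will not let us nest); the names Maybe, ApplyFunction and SafeLogM are mine:

class Maybe<T>
{
  public bool HasValue { get; private set; }
  public T Value { get; private set; }
  public Maybe() { }
  public Maybe(T value) { HasValue = true; Value = value; }
}

// The earlier style of helper: apply an ordinary one-parameter function to a wrapped value.
static Maybe<R> ApplyFunction<A, R>(Maybe<A> wrapped, Func<A, R> function)
{
  return wrapped.HasValue ? new Maybe<R>(function(wrapped.Value)) : new Maybe<R>();
}

static Maybe<double> SafeLogM(int x)
{
  return x > 0 ? new Maybe<double>(Math.Log(x)) : new Maybe<double>();
}

// R is inferred to be Maybe<double>, so the result is doubly wrapped:
// we wanted a Maybe<double> and instead we have a Maybe<Maybe<double>>.
static Maybe<Maybe<double>> Demo()
{
  return ApplyFunction(new Maybe<int>(10), SafeLogM);
}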

Continue reading

Monads, part four

So far we’ve seen that if you have a type that follows the monad pattern, you can always create a “wrapped” value from any value of the “underlying” type. We also showed how five different monadic types enable you to add one to a wrapped integer, and thereby produce a new wrapped integer that preserves the desired “amplification” — nullability, laziness, and so on. Let’s march forth (HA HA HA!) and see if we can generalize the pattern to operations other than adding one to an integer.
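
For instance, two of those “add one to a wrapped integer” methods look something like this sketch; the exact spellings here are mine rather than quotes from the earlier episode:

static Nullable<int> AddOne(Nullable<int> nullable)
{
  // Preserve nullability: null in, null out.
  return nullable.HasValue ? new Nullable<int>(nullable.Value + 1) : new Nullable<int>();
}

static Lazy<int> AddOne(Lazy<int> lazy)
{
  // Preserve laziness: don't force the underlying value until the result is forced.
  return new Lazy<int>(() => lazy.Value + 1);
}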

Continue reading