In this series we’re approaching an understanding of monads “bottom up”, by trying to suss out the pattern, rather than by going “top down”, starting with category theory or functional language practice. Last time I identified five generic types that are frequently used in C#; today we’ll start looking for commonalities beyond their being generic types.

Before we get into that though I want to call out three small ways that I’m going to try to increase the clarity of this exposition:

First, one of the types I mentioned last time was the generic delegate type `Func<T>`, which represents a `T` that can be produced on demand. The problem is that as we start exploring the higher-order functional programming nature of monads, I’m going to need to use `Func<A, R>` to represent functions of one variable, and it’s going to get confusing to also use `Func<T>` to represent a `T` that can be produced on demand. I’m therefore going to assume from now on that we have a generic delegate

```csharp
delegate T OnDemand<T>();
```

and use that as one of our examples of a monadic type.

Second, I mentioned last time that it is an accident of history that `Nullable<T>` restricts its type argument to non-nullable value types. I am going to ignore that inconvenient fact for the rest of this series and pretend that `Nullable<T>` can work on any type.

Third, I’m also going to ignore the fact that the `Lazy<T>` type has different thread-safety modes you can put it into; deriving a new lazy value from an old one probably should preserve the original value’s thread-safety mode, but I’m going to set that issue aside as well.

OK, with that out of the way, let’s dig in. The first thing to notice about the nullable, lazy, on-demand, asynchronous and sequence types is that they are all things that produce a value, more or less. The nullable type might not produce a value, and the sequence type might produce any number of values, but the values are always values of the underlying type. All these types are in some sense a “wrapper” around some number of values of the underlying type. And in fact, we can go farther than that. **If you have a particular value of the underlying type, you can always easily make a value of the wrapper type that produces that value:**

```csharp
static Nullable<T> CreateSimpleNullable<T>(T item)
{
    return new Nullable<T>(item);
}

static OnDemand<T> CreateSimpleOnDemand<T>(T item)
{
    return () => item;
}

static IEnumerable<T> CreateSimpleSequence<T>(T item)
{
    yield return item;
}
```

The `Lazy<T>` and `Task<T>` simple-value wrapper methods are left as an exercise.
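If you want to check your answer to the exercise, one plausible solution looks like this; `CreateSimpleLazy` and `CreateSimpleTask` are names I’m inventing to match the pattern above, and the task version assumes .NET 4.5’s `Task.FromResult`:

```csharp
static Lazy<T> CreateSimpleLazy<T>(T item)
{
    // Wrap the already-computed value in a lazy that just hands it back.
    return new Lazy<T>(() => item);
}

static Task<T> CreateSimpleTask<T>(T item)
{
    // Task.FromResult produces an already-completed task wrapping the value.
    return Task.FromResult(item);
}
```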

And in fact this is the first requirement of the monad pattern: if `M<T>` is a monadic type then there’s got to be an easy way to turn any value of type `T` into a value of type `M<T>`.

Now, you might think that of course the second requirement is “and you’ve got to be able to get the values back out again too”. As we’ll see, that’s *close* but not quite right. The second requirement is more subtle than that, so we’re going to approach it cautiously. We’ll start with a very specific question: you can add one to an integer, but how do you “add one” to a monadic wrapper around an integer?

As we just spent eight entire episodes discussing, the compiler already knows how to add one to a nullable int. This is already “baked in” to the language; you just add one and the compiler takes care of everything for you. But if it didn’t, we do know how to write the code. I’ll write it much less concisely than normal to illustrate each step carefully:

```csharp
static Nullable<int> AddOne(Nullable<int> nullable)
{
    if (nullable.HasValue)
    {
        int unwrapped = nullable.Value;
        int result = unwrapped + 1;
        return CreateSimpleNullable(result);
    }
    else
        return new Nullable<int>();
}
```

OK, that was easy. Notice that the operation of adding one to a nullable integer is of course another nullable integer, because the original one might have been null. But the operation here was very straightforward: if there is a value we **unwrap it**, then we **perform the addition operation on the unwrapped value**, and then we **wrap the resulting value**. *Is that the general pattern?* Let’s look at another example and see what happens when we follow this pattern:

```csharp
static OnDemand<int> AddOne(OnDemand<int> onDemand)
{
    int unwrapped = onDemand();
    int result = unwrapped + 1;
    return CreateSimpleOnDemand(result);
}
```

It looks plausible. It compiles. Is it correct? No. If you start with the delegate `() => DateTime.Now.Second` and add one, you do not end up with `() => DateTime.Now.Second + 1` but rather `() => someFixedValue`, which seems wrong. The “on-demand-ness” of the left hand side of the addition has disappeared; its value becomes fixed at the moment that the `AddOne` method is called, and the resulting wrapped value no longer implements the desired semantics. The correct way to add one to an on-demand integer is to ensure that the new on-demand integer demands the original one itself:

```csharp
static OnDemand<int> AddOne(OnDemand<int> onDemand)
{
    return () =>
    {
        int unwrapped = onDemand();
        int result = unwrapped + 1;
        return result;
    };
}
```

(Again, I’m showing all the steps explicitly; realistically we’d write this code with far more concision.)

This preserves the desired semantics; now the original on-demand value is demanded every time the newly-demanded value is requested.

Now, in the correct code you’ll notice that we *did* use the “unwrap and do the operation” mechanism. The difference between the nullable int operation and the on-demand int operation lies in *how the resulting wrapped type is created*. With the nullable monad we can get away with aggressively unwrapping the value, doing the computation, and producing a simple wrapper around the new value. The on-demand monad has to be a lot more clever about how the new wrapper is created; it is not a simple wrapper around a value. Rather, it **produces an object whose structure encodes the sequence of operations that are going to happen on demand[1. This ability is one of the key features that makes monads useful; we'll return to this point later on in the series.]**. In this case, it produces a delegate that invokes the original delegate.
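A quick way to see that last point, using the corrected `AddOne` for on-demand integers: composing the operation with itself builds up nested delegates, and nothing at all executes until the outermost delegate is finally invoked.

```csharp
OnDemand<int> original = () => DateTime.Now.Second;

// Each call to AddOne wraps the previous delegate in a new one;
// the structure of the result encodes two pending additions.
OnDemand<int> addedTwice = AddOne(AddOne(original));

// Only now does the chain run: the outer delegate demands the inner
// delegate, which demands the original, which finally reads the clock.
int result = addedTwice();
```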

Let’s quickly go through the “add one” operations on our other integer wrapper examples. I’ll continue to be unnecessarily verbose.

```csharp
static Lazy<int> AddOne(Lazy<int> lazy)
{
    return new Lazy<int>(() =>
    {
        int unwrapped = lazy.Value;
        int result = unwrapped + 1;
        return result;
    });
}
```

Unsurprisingly, adding one to a lazy integer looks a lot like a combination of adding one to a nullable integer and adding one to an on-demand integer. Note that the original lazy value is not computed until the new lazy value is computed; the laziness is preserved.
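To convince yourself that the laziness really is preserved, here is a small demonstration using the `AddOne` for `Lazy<int>` above; the side-effecting lambda is only there to make the timing visible:

```csharp
var lazy = new Lazy<int>(() => { Console.WriteLine("computed!"); return 10; });
Lazy<int> incremented = AddOne(lazy);
// Nothing has been printed yet; neither lazy value has been computed.
Console.WriteLine(incremented.Value); // forces both: prints "computed!" then 11
```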

```csharp
async static Task<int> AddOne(Task<int> task)
{
    int unwrapped = await task;
    int result = unwrapped + 1;
    return result;
}
```

Well that was easy! This looks even easier than the nullable example we started with, but that’s because we are taking advantage of the C# 5 compiler’s ability to generate a whole lot of code on your behalf. As an exercise, try writing the `AddOne` operation on asynchronously-computed integers in C# 4.
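One plausible C# 4 answer, sketched with `ContinueWith` in place of `await`; note that this glosses over the exception and cancellation propagation that the C# 5 compiler handles for you:

```csharp
static Task<int> AddOne(Task<int> task)
{
    // The continuation unwraps the antecedent task's result and adds one;
    // ContinueWith wraps the outcome of the continuation in a new task.
    return task.ContinueWith(antecedent => antecedent.Result + 1);
}
```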

```csharp
static IEnumerable<int> AddOne(IEnumerable<int> sequence)
{
    foreach (int unwrapped in sequence)
    {
        int result = unwrapped + 1;
        yield return result;
    }
}
```

Again, we are taking advantage of the C# compiler’s ability to write lots of code on your behalf. It is instructive to write out this code without using `foreach` or `yield`. [2. And of course this code could be even more concise; it could be written `return from item in sequence select item + 1;`.]
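To give a flavor of that exercise, here is a sketch without `foreach` or `yield`. (The compiler actually generates a lazy iterator class; this eager version is not a faithful translation, it merely makes the unwrap-operate-rewrap steps explicit.)

```csharp
static IEnumerable<int> AddOneManually(IEnumerable<int> sequence)
{
    var results = new List<int>();
    IEnumerator<int> enumerator = sequence.GetEnumerator();
    try
    {
        while (enumerator.MoveNext())   // what foreach does under the covers
        {
            int unwrapped = enumerator.Current;
            results.Add(unwrapped + 1);
        }
    }
    finally
    {
        enumerator.Dispose();           // foreach also guarantees disposal
    }
    return results;
}
```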

All right, what have we learned so far?

- The first rule of the monad pattern is that there’s always an easy way to go from a value of the “underlying” type to a value of the “wrapped” type.
- The second rule of the monad pattern is that adding one to a wrapped int somehow produces another wrapped int, and does so in a way that preserves the desired “amplification”.

That second rule looks like it could use some work. **Next time on FAIC** we’ll start to generalize from our example of adding one to a “wrapped integer” as we continue to refine the second rule of the monad pattern.
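To foreshadow that generalization: each `AddOne` above differed only in the operation applied, never in the wrapping machinery. For the on-demand type, for instance, the operation can be factored out like this; `ApplyFunction` is a name invented here for illustration, not one used in the series:

```csharp
static OnDemand<R> ApplyFunction<A, R>(OnDemand<A> onDemand, Func<A, R> function)
{
    // Defer everything: demand the original value and apply the function
    // only when the new wrapper is itself demanded.
    return () => function(onDemand());
}

// AddOne is then just one instantiation of the general shape:
static OnDemand<int> AddOne(OnDemand<int> onDemand)
{
    return ApplyFunction(onDemand, unwrapped => unwrapped + 1);
}
```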

Excellent post as always, Eric. I’ve always used Monads in a more intuitive way (i.e. they’re just a common pattern), but never really understood them coming from the functional perspective. Your bottom-up approach is starting to really clear them up for me conceptually and I think I can see where this is going, now.

He’s not going to write a Graph/Sudoku solver again, is he?

Yes. Brilliant. :)

The post actually had me thinking: “What awesome new C# 6 feature could he be building up to with this?” Maybe we’ll see a new ‘bind’ keyword announced on April 1. :P

Unfortunately he no longer works for Microsoft.. :(

Not working for Microsoft any longer really wouldn’t get in the way of an April 1st announcement, would it?

Great post as always. On a side note, what happened with you FAICasting? Any progress?

I made considerable progress at the end of last year when I had some time off; my last eight castings worked well. But I’ve gotten busy with a new job, and I can’t melt aluminum in the rain. I’ll pick it up again in the spring.

Cool, looking forward to it! Your blog inspired me to start learning about casting myself, it’s something I’ve wanted to try for a while.


Would it be safe to use decorators as a metaphor for monads, or would that lead to terrible misconceptions further down the line? The fit seems quite good so far!

Good metaphor, although monads are more abstract than decorators. They give you a way to compose any number of functions, at any scale.

Looks like debugging these things could be a real pain, not?

Don’t see why – you can set breakpoints in lambdas, and all the normal VS debugging tools will work correctly in that context (in my experience anyway).

It depends. On the one hand, debugging asynchronous workflows can be tricky. You don’t have the “call stack” crutch to lean on because it is no longer the case that *where you came from* is logically connected to *where you’re going next*. And it is hard to look at a bunch of tasks and understand how they are all related.

On the other hand, objects that represent workflows can also be easier to debug with a little work up front. LINQ queries are essentially monadic workflows and there are debugging tools to help you, say, see what SQL is going to be spit out the other end.

Having recently done up a multi-threaded XML serializer/deserializer that made very heavy use of monads, I can say: yes, it can be a real pain.

For example:

```csharp
for (int i = 0; i < n; ++i)   // n: some large iteration count
{
    someDelegate += () => { DoStuff(i); };
}
```

And:

```csharp
for (int i = 0; i < n; ++i)   // n: some large iteration count
{
    int j = i;
    someDelegate += () => { DoStuff(j); };
}
```

With another thread (or in my case many other threads) calling someDelegate(). You can even stick a breakpoint in the lambda and still not see what’s going on. But the second one produces the expected results; the first is very unpredictable, especially if those other threads are not busy and someDelegate is going into a ConcurrentQueue.

Bah, it mangled the loop declaration. Just picture a loop with a lot of iterations, the second one with “int j = i” before the lambda declaration.

Although I already kind of understood monads, this explanation has sharpened my understanding of them. Even though I learnt no additional formal knowledge, I can now make more sense of monads.

I feel the formal way of explaining them is very useful and concise … for people who already understand them! All others get lost in the details and in the formulae.

Sounds a lot like the story presented here: http://www.cs.uni.edu/~wallingf/blog/archives/monthly/2013-02.html#e2013-02-12T14_53_00.htm

“concision” What a wonderful word. Great series already! Looking forward to the rest of the journey – I’ve read many articles on the “what are they” of monads but never really got to the “why do I care” level of understanding.

I confess I never had much trouble understanding monads, although I’ve always found the “explanations for OO programmers” to be confusingly verbose (although given the comments on these posts, that makes me an outlier!).

In the past I have always explained monads like this:

Imagine you have a sequence of processing steps you want carried out in order, each step being a function whose output is the input to the next step (i.e., step -> step -> step -> …).

Now imagine you want to extend this idea in two ways:

- you want to add some meta-data (e.g., you want to also count how many steps have taken place);

- at any point you want to be able to skip the rest of the steps (or possibly take different successive steps) depending on the meta-data.

Moreover, you’d like to do this without having to make changes to your “step” functions.

At this point you might come up with the idea of a “glue” function to join your steps together. Now your processing function looks something like this step -> glue -> (step -> glue -> (…)). The “glue” has the job of updating the meta-data (e.g., incrementing the “number of steps” counter) and/or making a decision (e.g., if the number of steps has reached some threshold, skip the remaining steps).

Some glue functions are very simple (e.g., for the Nullable type). Others may be more complex (e.g., for parsing or tree searching or for pure IO or …).

Now, if you call your set-up function (which contains the initial input and meta-data) “return” and your glue function “bind”, then you pretty much have a monad.

Because the “return” and “bind” functions all have the same “shape”, you can construct families of operations over monads which do useful things for ALL monads, all without having to change your original “step” functions.
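The commenter’s step-counting example might be sketched in C# like this; `Counted<T>`, `CreateCounted`, and `Bind` are invented names for illustration, with `CreateCounted` playing the role of “return” and `Bind` the role of the “glue”:

```csharp
// A value of the underlying type plus some meta-data: how many steps ran.
struct Counted<T>
{
    public readonly T Value;
    public readonly int Steps;
    public Counted(T value, int steps) { Value = value; Steps = steps; }
}

static class CountedMonad
{
    // "return": wrap a plain value; no steps have run yet.
    public static Counted<T> CreateCounted<T>(T value)
    {
        return new Counted<T>(value, 0);
    }

    // "glue": run one step on the unwrapped value and update the meta-data,
    // without the step function knowing anything about the counting.
    public static Counted<R> Bind<T, R>(this Counted<T> counted, Func<T, R> step)
    {
        return new Counted<R>(step(counted.Value), counted.Steps + 1);
    }
}
```

With these definitions, `CreateCounted(10).Bind(x => x + 1).Bind(x => x * 2)` yields the value 22 with two steps counted, and the step lambdas never mention the counter.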


“Unsurprisingly, adding one to a lazy integer looks a lot like a combination of adding one to a nullable integer and adding one to an on-demand integer. Note that the original lazy value is not computed until the new lazy value is computed; the laziness is preserved.”

Does the pattern indicate whether there should be side effects? (Coming from functional programming, it seems as if it should not). E.g., wrapping a Lazy, given that the monadic pattern as you indicated should encode the *sequence of operations*, it seems as if the operations for initializing the original Lazy shouldn’t initialize the original Lazy, but in fact should be encoded in the new Lazy’s initialization and the original should be unaffected.

The intersection of monads, side effects and functional languages is precisely what motivates their use in those languages: monads are used in purely-functional languages to encode the notion of “these side effects must occur in this order”, even though such languages discourage side effects and often do out-of-order evaluation.

(Disclaimer: I’m not familiar with C#, I am familiar with Haskell, and I am unfamiliar with and eschew Category theory.)

It appears to me that all of the AddOne examples are actually demonstrating a Functor instance for each type, not a Monad instance.

The property is described as: “we unwrap it, then we perform the addition operation on the unwrapped value, and then we wrap the resulting value.”

If we replace “addition operation” with “an operation”, then whether or not we are describing a Functor or Monad depends on whether that operation knows about the wrapper or not.

In the case of adding one (where we think of the + operator as a two-argument function), its arguments and return type do not involve the wrapping type, so this is a Functor property.

If the operation *returns the wrapping type* then it is a Monad.

I hope this clarification helps. IMO, it’s helpful to introduce Functors first (and all of these examples actually do so!), and then proceed to Monads. (There is also a common middle step called Applicative.)

That is precisely what I am attempting to do — introduce functors and then proceed to monads — but I am attempting to avoid saying “functor” any more than necessary because I’m trying to keep this friendly to developers who don’t have a background in functional programming, algebra, etc. Thanks for the note!
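For C# readers, the distinction drawn in this exchange can be made concrete with the sequence type (C# has no higher-kinded generics, so one wrapper has to stand in for `M<T>`); `Map` and `Bind` are conventional names, not ones from the post:

```csharp
// Functor-style "map": the operation neither receives nor returns the wrapper.
static IEnumerable<R> Map<A, R>(IEnumerable<A> sequence, Func<A, R> operation)
{
    foreach (A item in sequence)
        yield return operation(item);
}

// Monad-style "bind": the operation itself returns a wrapped value,
// and bind flattens the nested wrappers back into one.
static IEnumerable<R> Bind<A, R>(IEnumerable<A> sequence, Func<A, IEnumerable<R>> operation)
{
    foreach (A item in sequence)
        foreach (R result in operation(item))
            yield return result;
}
```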

I’m doing some code testing and I noticed the method

```csharp
async static Task<int> AddOne(Task<int> task)
{
    int unwrapped = await task;
    int result = unwrapped + 1;
    return result;
}
```

will actually work only if the task parameter is started, otherwise it should be started before the await line, right?

Good point; the question of whether tasks are “hot” — started as soon as they are created — or “cold” — started explicitly — was a long debate when we were designing async/await. I’m glossing over those kinds of details in this series.

I’m sure I must be doing something stupid, but I can’t figure out what. The “CreateSimpleNullable” function refuses to compile for me, on both VS2010 and VS2012. The section “return new Nullable(item);” gives an error on the which reads “The type ‘T’ must be a non-nullable value type in order to use it as parameter ‘T’ in the generic type or method ‘System.Nullable’

Gaah. Didn’t realise the comments would strip off angle brackets and their contents. The two incorrect bits should read:

return new Nullable<T>(item)

gives an error on the <T> which reads

Nullable (of T) has a constraint on T that says that it must be a “struct” (i.e. a value type). Somewhere in the introduction to all this, Eric pointed out that he was going to ignore that constraint in his discussions. If you want to get things to compile, you will need to constrain T properly.