Monads, part two

Last time on FAIC I set out to explore monads from an object-oriented programmer’s perspective, rather than delving into the functional programmer’s perspective immediately. The “monad pattern” is a design pattern for types, and a “monad” is a type that uses that pattern. Rather than describing the pattern itself, let’s start by listing some monad-ish types that you are almost certainly very familiar with, and see what they have in common.

These five types are the ones that immediately come to my mind; I am probably missing some. If you have an example of a commonly-used C# type that is monadic in nature, please leave a comment.

  • Nullable<T> — represents a T that could be null
  • Func<T> — represents a T that can be computed on demand
  • Lazy<T> — represents a T that can be computed on demand once, then cached
  • Task<T> — represents a T that is being computed asynchronously and will be available in the future, if it isn’t already
  • IEnumerable<T> — represents an ordered, read-only sequence of zero or more Ts

So, what do these types have in common? The most obvious thing is that they are generic types with exactly one type parameter. Moreover, these types are embarrassingly generic. With the exception of Nullable<T>, all of these types work equally well with any T whatsoever; they are totally “agnostic” as to the semantics of their underlying type. And even Nullable<T> is only restricted to non-nullable value types.

Aside: This is essentially an accident of history. In a counterfactual world where the CLR had generic types from the get-go, it seems plausible that Nullable<T> could have been implemented to work on any type. We could have a type system where Nullable<string> was the only legal way to represent “a string that can be null”. Keep this in mind the next time you design a new type system!

Another way to look at these generic types is that they are “amplifiers”. (I am indebted to my erstwhile colleague Wes Dyer for this interpretation of monads; his article on monads was a crucial step in my understanding of the concept.) An “amplifier” is something that increases the representational power of its “underlying” type.

A byte can be one of 256 values; that’s very useful but also very simple. By using generic types we can represent “an asynchronously-computed sequence of nullable bytes” very easily: Task<IEnumerable<Nullable<byte>>>. That adds a huge amount of power to the “byte” type without changing its fundamental “byte-ish” nature.
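To make that composition concrete, here is a minimal sketch of producing and consuming a Task<IEnumerable<Nullable<byte>>>. The ReadSensorAsync method and its data are invented for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

class Program
{
    // Hypothetical source: asynchronously produce a sequence of bytes,
    // where null means "no reading available at this position".
    public static Task<IEnumerable<byte?>> ReadSensorAsync() =>
        Task.FromResult<IEnumerable<byte?>>(new byte?[] { 0x10, null, 0x20 });

    static void Main()
    {
        // Each layer adds power without changing the byte-ish payload:
        //   Task<...>        : available in the future
        //   IEnumerable<...> : zero or more values, in order
        //   byte?            : possibly absent
        IEnumerable<byte?> readings = ReadSensorAsync().Result;
        int sum = readings.Where(b => b.HasValue).Sum(b => b.Value);
        Console.WriteLine(sum); // 0x10 + 0x20 = 48
    }
}
```

Note that the layers compose in any order; a IEnumerable<Task<byte?>> would mean something quite different: a sequence of independently pending nullable bytes.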

So is a monad simply an embarrassingly generic type of one parameter that conceptually “adds power” to its underlying type? Not quite; there are a couple more things we need in order to have an implementation of the “monad pattern”. Next time on FAIC we’ll try to perform some operations on these five types and see if we can suss out any other commonality.

43 thoughts on “Monads, part two”

  1. I don’t think it’s as easy to imagine a completely-generic Nullable as you imply in your aside. Even in a world where reference types are non-nullable by default, Nullable might not work on itself – that is, Nullable<Nullable<int>> would be invalid.

    That said, Haskell has “Maybe Maybe Int”, so it’s not completely out of the question.

    • That’s because Nullable by itself isn’t a type; it’s a type constructor. You have to give it an argument for T before it’s a type. Using the Haskell example, you’d have Nullable<Nullable>.

      You could go with something similar to Collections.Generic.List and Collections.List, where Nullable (No T) would be the same as Nullable.

    • Interestingly enough in the original concept of nullable types a nested nullable type was legal, and there is still gear in the compiler to deal with the situation. Also, the original name of the type was going to be “Optional”, not “Nullable”.

      In this series I’ll be making the simplifying assumption that nested nullable types are legal, and covering what to do with nested monadic types in a few episodes from now.

  2. Having the reference types be non-nullable by default would have been so awesome. “string?” is not that difficult to write. Sigh, hindsight.

    (although there are probably some non-obvious complexities regarding non-nullable reference types… one that comes to mind is fields without explicit initialization – maybe this would have to be illegal unless the type has a parameterless constructor)

    • Indeed, attempts to bolt on non-nullable reference types post hoc have run into problems with initialization. You want a type system to document invariants, and it is difficult to have the invariant that a field of reference type is *never* observed to be null. For example:

      class C {
        S! s1;
        S! s2;
        public C(S! x, S! y) { this.s1 = x; this.s2 = y; }
        ~C() { Console.WriteLine(s2.ToString()); }
      }

      The question is: can the destructor throw? Sure. If there is a thread abort exception between this.s1 = x; and this.s2 = y; then the dtor observes the uninitialized state of s2.

      • C++ manages to handle that case by just not running the destructors of partially-constructed objects, but obviously that’s only an option with RAII. The obvious ugly solution would be to just allow non-null reference types to be null in the constructor and destructor. To some extent the rarity of useful applications of destructors in C# makes this less ugly… but OTOH that also means that programmers could get away with not knowing that references can be null in destructors for a very long time and write a lot of subtly wrong code.

        I would love to have non-nullable reference types even with a bunch of subtle issues, though. The subtle issues almost certainly wouldn’t be as annoying to deal with as null is.

        • @Thomas Goyne,
          The lifetime of an object in C++ starts when its initialization is complete (ISO C++ Standard, N3376).
          In C++, something can be called an “object” only once its lifetime has begun.
          If initialization is incomplete, there is no object; and in the absence of an object there is nothing to destroy, so the destructor is not called.

          What do you mean by “but obviously that’s only an option with RAII”?
          The notion of object lifetime in C++ is independent of RAII.

          In C#, as specified by Ecma-334, I can’t find a definition of “object”. At the risk of being wrong, I think it is poorly specified.

          • The C# specification does not seek to be either an academic paper or a tutorial; it assumes that the reader has a working knowledge of common terms. You’ll notice that “type” is nowhere defined in the specification either.

          • @Eric:
            My apologies, I misspoke.
            I didn’t mean that an OOP-style definition of “object” should be included in the Standard.
            I meant that the Standard does not specify the meaning of “object” for C#, or rather, the lifetime of an object:
            when a piece of raw memory becomes an “object” (something that satisfies the invariant), and when an object becomes raw memory again.

          • The lifetime of an object is an implementation detail of the garbage collector and therefore not in the C# specification. The specification however does have rather a lot to say on the subject of when a local variable is a root of the garbage collector, which obviously impacts lifetime.

          • @Eric: thanks for your answer
            What about the beginning of the object’s lifetime?
            Is it defined? I think it should be, regardless of the implementation.

          • In C# the lifetime of an object begins the moment the garbage collector allocates it. This has some interesting implications. For example, suppose you have a ctor that fills in two fields. If the thread is aborted between the assignments then we have a living, orphaned object with half its fields filled in. This might come as a surprise to the destructor! A destructor must be written to assume that *nothing* succeeded in the ctor.

            This differs from C++, where, as you note, an object is never destructed if it was never constructed fully in the first place.

          • Thanks Eric, very explanatory.
            It is exactly what I wanted to know.
            Accustomed to C++, I like to see this kind of definition in standards. I hope in the future the C# Standard is updated.

    • A comonad is the dual of a monad; this is analogous to how a covector is the dual of a vector in graphics (think of how a plane is the dual of a line!). Monads generate effects, and comonads evaluate effects.

    • We are rapidly getting out of my depth here, but briefly, a comonad is like a monad that “goes backwards”. As we’ll see over the next few episodes, what characterizes a monad is that (1) you can take any value of type V and make a monadic value of type M<V>, and (2) you can take a function from V to M<W> and turn it into a function from M<V> to M<W>. What characterizes a comonad is that (3) you can run (1) backwards: you can take any value of type M<V> and extract a value of type V, and (4), you can run (2) backwards: you can take a function from M<V> to W and turn it into a function from M<V> to M<W>. Because you can do all four with Task<T>, it is both a monad and a comonad. (Do you see how to do all four things with Task<T>? If not, we’ll cover (1) and (2) in the next few episodes.)
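      Here is one way the four operations might be sketched for Task<T> in C#. The names Unit, Bind, Extract and Extend are mine, and exceptions and cancellation are ignored:

```csharp
using System;
using System.Threading.Tasks;

static class TaskOps
{
    // (1) Monadic "unit": wrap a plain V in a Task<V>.
    public static Task<V> Unit<V>(V value) => Task.FromResult(value);

    // (2) Monadic "bind": turn V -> Task<W> into Task<V> -> Task<W>.
    public static async Task<W> Bind<V, W>(Task<V> task, Func<V, Task<W>> f) =>
        await f(await task);

    // (3) Comonadic "extract": run (1) backwards and pull the V out.
    //     (Blocks if the task has not completed yet.)
    public static V Extract<V>(Task<V> task) => task.Result;

    // (4) Comonadic "extend": turn Task<V> -> W into Task<V> -> Task<W>.
    //     The callback receives the completed task, not the bare value.
    public static Task<W> Extend<V, W>(Task<V> task, Func<Task<V>, W> f) =>
        task.ContinueWith(completed => f(completed));
}

class Program
{
    static void Main()
    {
        Task<int> t = TaskOps.Unit(21);
        Task<int> doubled = TaskOps.Bind(t, v => TaskOps.Unit(v * 2));
        Console.WriteLine(TaskOps.Extract(doubled)); // 42

        Task<string> s = TaskOps.Extend(doubled, done => done.Result.ToString());
        Console.WriteLine(TaskOps.Extract(s)); // 42
    }
}
```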

    • I think the best way to understand comonads is to think of them as structures that store a value together with some context. The “counit” operation “C<T> -> T” gives you a way to extract the current value and the “cobind” operation “(C<T> -> R) -> C<T> -> C<R>” gives you a way to propagate the context: given a computation that can turn T in context into a value R, we can build a computation that takes T in context, calculates R and wraps it into the same context.

      A colleague and I have been working on things related to comonads in programming languages recently, so here are some resources that you might find interesting (though they are quite theoretical):

      Strictly speaking, there are not that many interesting comonads. A comonad can store a T together with some state. It can also keep a non-empty list of T values (it has to be non-empty, because you need to always be able to extract a value!)

      Treating Task<T> as a comonad is interesting, but I think it might actually be a bit misleading. The problem is that the Result property (which takes a Task<T> and gives you a T) is not actually a _pure_ computation: it does not always return the value immediately, but sometimes blocks. This means that if you treat Task<T> as a comonad, you have to ignore blocking (and all timing of the computations). Since blocking is a key aspect of tasks, this feels a bit like cheating…
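      As an illustration of the non-empty-list comonad mentioned above, here is a minimal C# sketch; the type and its members are invented names, not from any library:

```csharp
using System;
using System.Linq;

// A comonadic container: a current value (Head) plus its context (Tail).
// Non-emptiness guarantees that Counit always succeeds.
class NonEmptyList<T>
{
    public T Head { get; }
    public T[] Tail { get; }
    public NonEmptyList(T head, params T[] tail) { Head = head; Tail = tail; }

    // counit: C<T> -> T
    public T Counit() => Head;

    // cobind: (C<T> -> R) -> C<T> -> C<R>
    // Applies f to this list and to every suffix, propagating the context.
    public NonEmptyList<R> Cobind<R>(Func<NonEmptyList<T>, R> f)
    {
        var suffixes = Enumerable.Range(0, Tail.Length)
            .Select(i => f(new NonEmptyList<T>(Tail[i], Tail.Skip(i + 1).ToArray())));
        return new NonEmptyList<R>(f(this), suffixes.ToArray());
    }
}

class Program
{
    static void Main()
    {
        var list = new NonEmptyList<int>(1, 2, 3);
        // Each element is replaced by the sum of itself and everything after it.
        var sums = list.Cobind(ctx => ctx.Counit() + ctx.Tail.Sum());
        Console.WriteLine($"{sums.Head},{string.Join(",", sums.Tail)}"); // 6,5,3
    }
}
```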

      • Thanks for the links, I’ll check them out.

        Another interesting aspect of tasks is that of course they need not return a value at all; they can throw an exception when asked for their value if the task failed or was cancelled. Of course, it is also the case that a non-void-returning method can throw.

        I agree that your concern is valid, but I say that if Erik Meijer is comfortable calling them comonads then I am too. 🙂

        • The fact that Task is a comonad can be proved using “proof by eminent authority” 🙂

          More seriously – I think Eric’s point was that many monads are not, strictly speaking, monads, because they do not obey the monad laws if we take into account non-termination (and that is perhaps similar to ignoring exceptions or cancellation when talking about Task as a comonad). This is definitely an interesting perspective and I think some people see lazy evaluation as another comonadic property, which might be related…

          But if you ignore exceptions and cancellation, then the “ContinueWith” method is really more like “Select” (the only difference being that your callback receives a task that is already completed, and thus has a value, rather than the value directly).

          This also means that if you have a structure with “map”, “return” and the dual of return (“coreturn: C<T> -> T”) then you can always define “cobind” (just by using “map” and “return”) and you get something that looks like a comonad, but it is a question whether the “cobind” operation (representing context propagation in our work) can give you something useful when it is derived from other primitives. The “counit” operation certainly adds value, so there is something interesting there…
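          The ContinueWith-as-Select analogy can be seen in a tiny side-by-side sketch (again ignoring exceptions, cancellation and timing):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        // Select maps a function over the value(s) inside an IEnumerable<T>...
        int[] squares = new[] { 1, 2, 3 }.Select(x => x * x).ToArray();
        Console.WriteLine(string.Join(",", squares)); // 1,4,9

        // ...and ContinueWith plays the analogous role for Task<T>, except
        // that the callback receives the completed task, not the bare value.
        Task<int> task = Task.FromResult(3);
        Task<int> square = task.ContinueWith(completed => completed.Result * completed.Result);
        Console.WriteLine(square.Result); // 9
    }
}
```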

  3. If one is allowed to create arrays of arbitrary types, then reference types pretty much have to be nullable, since there is no sensible default value for them other than null. On the other hand, if one were allowed to define value types which had custom conversions to/from `Object`, one could create value-type wrappers for immutable reference types which would behave as non-nullable versions of those types. Such a design might have been good for “String”, since it would have allowed the default value of strings to behave consistently as an empty string, as used to be the case in COM.
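    The value-type-wrapper idea might be sketched like this. NonNullString is a hypothetical type; C# cannot express the custom conversions to and from Object that the comment asks for, so this shows only the default-value behavior:

```csharp
using System;

// A hypothetical value-type wrapper over the immutable string type.
// Because it is a struct, default(NonNullString) is a usable value,
// and we make that default behave consistently as the empty string.
readonly struct NonNullString
{
    private readonly string _value;

    public NonNullString(string value) =>
        _value = value ?? throw new ArgumentNullException(nameof(value));

    // The default instance has _value == null; present it as "".
    public string Value => _value ?? "";
    public int Length => Value.Length;
    public override string ToString() => Value;
}

class Program
{
    static void Main()
    {
        NonNullString[] names = new NonNullString[3]; // no nulls hiding in here
        Console.WriteLine(names[0].Length); // 0, not a NullReferenceException
    }
}
```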

    Also, I for one am in the camp that thinks that either “Nullable(Of T)” should not have constrained T, or else there should have been a type which behaved somewhat similarly but without the constraint on T, and the unusual boxing behavior (unboxing behavior could be as for “Nullable(Of T)”) replaced by a special AsBoxed property. I don’t see any semantic difficulty with nested nullables; if I have a “Dictionary(Of String, Nullable(int))” and a TryGetValue method that returns a “Nullable(Nullable(int))”, then if the return value of that method reports false for “HasValue” it means that there was no entry for the requested string; if it returns “True”, but “Value.HasValue” returns false, that means the string is associated with a null value. Nothing complicated.
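    Since the real Nullable<T> forbids both reference type arguments and nesting, the two-level TryGetValue scheme can only be sketched with a hypothetical unconstrained Option<T> stand-in:

```csharp
using System;
using System.Collections.Generic;

// A Nullable<T>-like struct with no constraint on T, so it can nest.
readonly struct Option<T>
{
    public bool HasValue { get; }
    public T Value { get; }
    public Option(T value) { HasValue = true; Value = value; }
}

class Program
{
    // Two levels of "maybe": the outer option says whether the key exists,
    // the inner one says whether the stored value is "null".
    public static Option<Option<int>> TryGetValue(Dictionary<string, Option<int>> d, string key) =>
        d.TryGetValue(key, out var v) ? new Option<Option<int>>(v) : default;

    static void Main()
    {
        var d = new Dictionary<string, Option<int>>
        {
            ["present"] = new Option<int>(42),
            ["null-entry"] = default, // the key exists but its value is "null"
        };

        Console.WriteLine(TryGetValue(d, "absent").HasValue);           // False
        Console.WriteLine(TryGetValue(d, "null-entry").Value.HasValue); // False
        Console.WriteLine(TryGetValue(d, "present").Value.Value);       // 42
    }
}
```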

      • A 1-tuple is. A 2-tuple (partially applied) is the Writer monad, collecting state as you move along the computation. You also need some way to combine the other component of the tuple, though. In Haskell this is done with a Monoid constraint, providing you with an “empty” element (for return) and a binary operation to combine two elements (for bind).

  4. My favourite .NET monad is the Reactive Framework `IObservable`. This is a very powerful library and I use it all the time.

  5. The problem with using NULL to mean “I don’t know what this value is exactly, but it’s there” is that, well, it doesn’t work. If you make NULL unequal to itself, you lose the ability to express “I don’t know the exact values of these two things, but I know that they’re equal”. Besides, a value which isn’t equal to itself breaks “=” so hard it hurts my mathematician’s brain. And if you make NULL equal to itself, lo and behold, all unknown values are equal to each other.

    One way to resolve this is to use three buckets: “things with values I know”, “things with values I don’t know (but those values are there)”, and “things without values”. Then, of course, the fourth bucket appears: “things about which I don’t even know whether they have the values I’m interested in”, but at least you’re more or less prepared for it. And to sort all this out with one extra NULL marker? It’s plainly impossible. When you try to model not only the world but also the state of your knowledge about the world, you have to distinguish those two things clearly.

    Okay, this has gotten really off-topic. As for another monad: ParseNext<T>, which tries to extract a value of T from a stream, then moves forward. Hey, that’s a full-blown class with an inner state!
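    A minimal sketch of such a ParseNext<T> over a string, with all names invented for illustration:

```csharp
using System;

// A tiny stateful parser: each TryNext call extracts one value and
// advances the position -- the "inner state" mentioned above.
class ParseNext<T>
{
    private readonly string _input;
    private int _position;
    private readonly Func<char, bool> _accept;
    private readonly Func<string, T> _convert;

    public ParseNext(string input, Func<char, bool> accept, Func<string, T> convert)
    {
        _input = input; _accept = accept; _convert = convert;
    }

    public bool TryNext(out T value)
    {
        // Skip separators, then consume a maximal run of accepted characters.
        while (_position < _input.Length && !_accept(_input[_position])) _position++;
        int start = _position;
        while (_position < _input.Length && _accept(_input[_position])) _position++;
        if (_position > start)
        {
            value = _convert(_input.Substring(start, _position - start));
            return true;
        }
        value = default;
        return false;
    }
}

class Program
{
    static void Main()
    {
        var ints = new ParseNext<int>("10,20,30", char.IsDigit, int.Parse);
        int sum = 0;
        while (ints.TryNext(out int n)) sum += n;
        Console.WriteLine(sum); // 60
    }
}
```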

  6. Great post, as usual.

    Monads are so elegant, working with them is always very intuitive. I feel that they are often the bread and butter of a well designed API.

  7. Eric, big fan of your blog.

    Just out of curiosity are there blogs that *you* read the same way we read your blog?

    Also, do you have any recommendations for blogs similar to yours (dealing with compilers, language constructs, and languages in general)?

    Thanks for your help, and I can’t wait for the next post!

  8. I’ve read many many articles on monads and never been able to get all the way to the end, but so far this is making perfect sense! Would it be safe to describe monads as being similar to decorators in the decorator pattern? Is one a subset of the other or do they just have some similarities?

  9. Pingback: Weekly Digest 3 | chodounsky

  10. I’m sure you know this, so I’m just clarifying one of your statements:
    “IEnumerable — represents an ordered, read-only sequence of zero or more Ts”
    It’s not obvious, but nowhere is it said that IEnumerable implementations must guarantee order, so order should not be relied upon. Parallel LINQ is one example where you cannot rely on the order of an IEnumerable.

    • IEnumerable is ordered in the sense that there is an implicit order given by the fact that you can say “first item returned, second item returned, …”. It is not ordered in the sense that you can re-run the enumerator and expect the same order. It isn’t even guaranteed that you’ll get the same set of values at all, in any order.

      I agree, that’s a very weak sense of order, and I prefer the notion of IOrderedEnumerable, to provide such guarantees. I’d also love IFiniteEnumerable, so that people don’t attempt to materialize a non-terminating sequence.
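      That weak sense of “ordered” can be demonstrated with a sequence that is well-ordered within each enumeration yet yields entirely different values each time it is re-run (a contrived sketch):

```csharp
using System;
using System.Collections.Generic;

class Program
{
    static int _calls;

    // Each enumeration is perfectly ordered (a first item, a second, ...),
    // but re-running the enumerator produces different values entirely,
    // because the iterator body executes afresh per enumeration.
    public static IEnumerable<int> Readings()
    {
        _calls++;
        for (int i = 0; i < 3; i++)
            yield return _calls * 100 + i;
    }

    static void Main()
    {
        IEnumerable<int> seq = Readings(); // lazy: nothing has executed yet
        Console.WriteLine(string.Join(",", seq)); // 100,101,102
        Console.WriteLine(string.Join(",", seq)); // 200,201,202 (same sequence object!)
    }
}
```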

  12. Pingback: Monads, part three | Fabulous adventures in coding
