

Keeping it sharp

Though we’ve exchanged a few emails over the years, I’ve only met Joel Spolsky once, back in 2005, and since he was surrounded by a protective layer of adoring fanboys we didn’t get to chat much. So it was a genuine pleasure to finally spend the better part of an hour chatting with Joel, David, Jay and Alex – them in New York, me in Seattle; Skype is a wonderful thing. If you like long, rambling conversations full of obscure facts about old programming languages, you could do worse than this podcast. The link to the podcast post is here.

A few supplemental links for some of the topics we cover:

Continue reading

Monads, part eight

Last time on FAIC we managed to finally state the rules of the monad pattern; of course, we’ve known for some time that the key parts of the pattern are the constructor helper, which we’ve been calling CreateSimpleM<T>, and the function application helper, which we’ve been calling ApplySpecialFunction<A, R>. Needless to say, these are not the standard names for these functions. [1. If they were, then I’d have been able to learn what monads were a whole lot faster. Hopefully this helped you too.]
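
For reference, the conventional names are “unit” (or “return”) for the constructor helper and “bind” for the application helper; in LINQ, the bind shape goes by the name SelectMany. Here is a minimal sketch of the correspondence using the sequence monad, IEnumerable<T>; the method bodies are mine, for illustration only:

// Assumes using System; using System.Collections.Generic; using System.Linq;
// "Unit" plays the role of CreateSimpleM<T>: wrap a single value in a sequence.
static IEnumerable<T> Unit<T>(T t)
{
  yield return t;
}

// "Bind" plays the role of ApplySpecialFunction<A, R>; for sequences it is
// exactly the shape the framework already calls SelectMany.
static IEnumerable<R> Bind<A, R>(
  IEnumerable<A> wrapped,
  Func<A, IEnumerable<R>> function)
{
  return wrapped.SelectMany(function);
}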

Continue reading

Monads, part seven

Way back in 1992 I was studying linear algebra at Waterloo. I just could not seem to wrap my head around dual spaces. Then one night I went to sleep after studying algebra for several hours, and I dreamed about dual spaces. When I awoke I had a clear and intuitive understanding of the concept. Apparently my brain had decided to sort it all out in my sleep. It was a bizarre experience that never happened again.[1. Unfortunately I no longer have an intuitive understanding of dual spaces, having not used anything more than the most basic linear algebra for two decades. I’m sure I could pick it up again if I needed to, but I suspect that the feeling of sudden clarity is not going to be regained.] History is full of examples of people who had sudden insights that solved tricky problems. The tragically short-lived mathematician Srinivasa Ramanujan claimed that he dreamed of vast scrolls of mathematics, most of which turned out to be both correct and strikingly original.

There is of course a difficulty with waiting for a solution to appear in a dream: you never know when that’s going to happen. Since insight is unreliable, we’ve developed a far more reliable technique for solving tough problems: recursive divide and conquer. We solve problems the same way that a recursive method solves problems:

Continue reading

Monads, part six

Last time in this series we finally worked out the actual rules for the monad pattern. The pattern in C# is that a monad is a generic type M<T> that “amplifies” the power of a type T. There is always a way to construct an M<T> from a value of T, which we characterized as the existence of a helper method:

static M<T> CreateSimpleM<T>(T t)

And if you have a function that takes any type A and produces an M<R>, then there is a way to apply that function to an instance of M<A> in a way that still produces an M<R>. We characterized this as the existence of a helper method:

static M<R> ApplySpecialFunction<A, R>(
  M<A> wrapped,
  Func<A, M<R>> function)
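
To make that shape concrete, here is what the two helpers might look like when M<T> is Nullable<T>; this is an illustration in the series’ own names, not library code:

// Illustration only: the two helpers of the monad pattern, written out for Nullable<T>.
static Nullable<T> CreateSimpleM<T>(T t) where T : struct
{
  return new Nullable<T>(t);
}

static Nullable<R> ApplySpecialFunction<A, R>(
  Nullable<A> wrapped,
  Func<A, Nullable<R>> function)
  where A : struct
  where R : struct
{
  // A null input stays null; otherwise unwrap the value, apply the function,
  // and hand back whatever (possibly null) result it produced.
  return wrapped.HasValue ? function(wrapped.Value) : new Nullable<R>();
}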

Is that it? Not quite. In order to actually be a valid implementation of the monad pattern, these two helper methods need to have a few additional restrictions placed on them, to ensure that they are well-behaved. Specifically: the construction helper function can be thought of as “wrapping up” a value, and the application helper function knows how to “unwrap” a value; it seems reasonable that we require that wrapping and unwrapping operations preserve the value.
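
As a sketch of what “preserve the value” could look like in code, here are two such checks stated for the Nullable<T> helpers sketched above; the precise statement of the rules is in the full post, and this is just to fix the idea:

// Sketch: two "well-behaved" requirements, checked for the Nullable<T> helpers above.
static bool WrappingPreservesValue(int t, Func<int, Nullable<double>> f, Nullable<int> m)
{
  // Wrapping t and then applying f must be the same as just calling f on t.
  bool wrapThenApply = Nullable.Equals(
    ApplySpecialFunction(CreateSimpleM(t), f), f(t));
  // Applying the wrapping helper itself must leave the wrapped value unchanged.
  bool applyTheWrapper = Nullable.Equals(
    ApplySpecialFunction(m, x => CreateSimpleM(x)), m);
  return wrapThenApply && applyTheWrapper;
}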

Continue reading

Whidbey Island and bagel mathematics (rerun)

Today another in my fun-for-a-Friday series of reruns from the classic days of FAIC. This one was originally published in December of 2004. Enjoy!


I was highly amused to read on Raymond Chen’s blog the other day that mathematicians are hard at work solving the problem of how to most evenly distribute poppyseeds over a bagel. The reason I was highly amused was not just the whimsical description of what is actually a quite practical and difficult problem.

And yes, believe it or not, it is a practical problem; if you can figure out how to distribute points evenly around an arbitrary shape then you can use that information to develop more efficient computer algorithms to solve complex calculus problems that come up in physics all the time. There are also applications in computer graphics, I’d imagine; 3-D modeling frequently requires generating well-behaved finite approximations of a surface.

But I digress. The other reason I was highly amused is that Whidbey Island is the longest island in the United States.

Continue reading

Monads, part five

We are closing in on the actual requirements of the “monad pattern”. So far we’ve seen that for a monadic type M<T>, there must be a simple way to “wrap up” any value of T into an M<T>. And last time we saw that any function that takes an A and returns an R can be applied to an M<A> to produce an M<R> such that both the action of the function and the “amplification” of the monad are preserved, which is pretty cool. It looks like we’re done; what more could you possibly want?

Well, let me throw a spanner into the works here and we’ll see what grinds to a halt. I said that you can take any one-parameter function that has any non-void return type whatsoever, and apply that function to a monad to produce an M<R> for the return type. Any return type whatsoever, eh? OK then. Suppose we have this function of one parameter:

(Again, for expository purposes I am writing the code far less concisely than I normally would, and of course we are ignoring the fact that double already has a “null” value, NaN.)

static Nullable<double> SafeLog(int x)
{
  if (x > 0)
    return new Nullable<double>(Math.Log(x));
  else
    return new Nullable<double>();
}

Seems like a pretty reasonable function of one parameter. This means that we should be able to apply that function to a Nullable<int> and get back out…

oh dear.
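
To spell out the problem: the application helper described so far takes a function from A to R and hands back an M<R>. With SafeLog, the R is already Nullable<double>, so the promised result would have to be a Nullable<Nullable<double>>. Here is a sketch, with an illustrative helper of my own naming, of why that cannot even be written down:

// Illustrative helper with the shape described above: a function from A to R,
// applied to a wrapped A, yields a wrapped R.
static Nullable<R> ApplyFunction<A, R>(
  Nullable<A> wrapped,
  Func<A, R> function)
  where A : struct
  where R : struct
{
  return wrapped.HasValue
    ? new Nullable<R>(function(wrapped.Value))
    : new Nullable<R>();
}

// Plugging in SafeLog would require R to be Nullable<double>:
//
//   Nullable<Nullable<double>> result =
//     ApplyFunction(new Nullable<int>(100), SafeLog);
//
// That does not compile: Nullable<T> requires T to be a non-nullable value
// type, so there is no such thing as a Nullable<Nullable<double>>.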

Continue reading

Monads, part four

So far we’ve seen that if you have a type that follows the monad pattern, you can always create a “wrapped” value from any value of the “underlying” type. We also showed how five different monadic types enable you to add one to a wrapped integer, and thereby produce a new wrapped integer that preserves the desired “amplification” — nullability, laziness, and so on. Let’s march forth (HA HA HA!) and see if we can generalize the pattern to operations other than adding one to an integer.
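
As a reminder of the kind of operation we are generalizing from, here is a minimal sketch (my code, not the post’s) of “add one to a wrapped integer” for two of the wrapper types, showing that the amplification is preserved in each case:

// Illustration only: adding one to a "wrapped" integer without losing the wrapper.
static Nullable<int> AddOne(Nullable<int> wrapped)
{
  // Nullability is preserved: null in, null out.
  return wrapped.HasValue ? new Nullable<int>(wrapped.Value + 1) : new Nullable<int>();
}

static Lazy<int> AddOne(Lazy<int> wrapped)
{
  // Laziness is preserved: nothing is computed until the result is actually needed.
  return new Lazy<int>(() => wrapped.Value + 1);
}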

Continue reading

Monads, part two

Last time on FAIC I set out to explore monads from an object-oriented programmer’s perspective, rather than delving into the functional programmer’s perspective immediately. The “monad pattern” is a design pattern for types, and a “monad” is a type that uses that pattern. Rather than describing the pattern itself, let’s start by listing some monad-ish types that you are almost certainly very familiar with, and see what they have in common.

These five types are the ones that immediately come to my mind; I am probably missing some. If you have an example of a commonly-used C# type that is monadic in nature, please leave a comment.

  • Nullable<T> — represents a T that could be null
  • Func<T> — represents a T that can be computed on demand
  • Lazy<T> — represents a T that can be computed on demand once, then cached
  • Task<T> — represents a T that is being computed asynchronously and will be available in the future, if it isn’t already
  • IEnumerable<T> — represents an ordered, read-only sequence of zero or more Ts
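
To make “amplifies” concrete, here is the same underlying value wrapped up in each of these five types (illustrative declarations only; assumes the usual System, System.Collections.Generic, and System.Threading.Tasks namespaces):

// The same underlying int, "amplified" five different ways:
Nullable<int> nullable = new Nullable<int>(123);   // a 123 that could instead have been null
Func<int> onDemand = () => 123;                    // a 123 computed every time it is asked for
Lazy<int> lazy = new Lazy<int>(() => 123);         // a 123 computed at most once, then cached
Task<int> task = Task.FromResult(123);             // a 123 that is already (or will be) available
IEnumerable<int> sequence = new[] { 123 };         // an ordered sequence containing just 123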

Continue reading

Monads, part one

Lots of other bloggers have attempted this, but what the heck, I’ll give it a shot too. In this series I’m going to attempt to answer the question:

I’m a C# programmer with no “functional programming” background whatsoever. What is this “monad” thing I keep hearing about, and what use is it to me?

Bloggers often attempt to tackle this problem by jumping straight into the functional programming usage of the term, and start talking about “bind” and “unit” operations, and higher-order functional programming with higher-order types. Even worse is to go all the way back to the category theory underpinning monads and start talking about “monoids in the category of endofunctors” and the like. I want to start from a much more pragmatic, object-oriented, C#-type-system-focused place and move towards the rarefied heights of functional programming as we go.

Continue reading