A method group of one

I’m implementing the semantic analysis of dynamic expressions in Roslyn this week, so I’m fielding a lot of questions within the team on the design of the dynamic feature of C# 4. A question I get fairly frequently in this space is as follows:

public class Alpha
{
  public int Foo(string x) { ... }
}
...
dynamic d = whatever;
Alpha alpha = MakeAlpha();
var result = alpha.Foo(d);

How is this analyzed? More specifically, what’s the type of local result?

If the receiver (that is, alpha) of the call were of type dynamic then there would be little we could do at compile time. We’d analyze the compile-time types of the arguments and emit a dynamic call site that caused the semantic analysis to be performed at runtime, using the runtime type of the dynamic expression. But that’s not the case here. We know at compile time what the type of the receiver is. One of the design principles of the C# dynamic feature is that if we have a type that is known at compile time, then at runtime the type analysis honours that. In other words, we only use the runtime types of the things that were actually dynamic; for everything else we use the compile-time type. If MakeAlpha() returns a class derived from Alpha, and that derived class has more overloads of Foo, we don’t care.
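
To illustrate (this sketch is mine; Beta and its overload are hypothetical): even if MakeAlpha() actually returns a derived type with more overloads, the dynamic call site only considers the methods of the compile-time type Alpha.

public class Beta : Alpha
{
  public int Foo(double x) { ... }
}
static Alpha MakeAlpha() { return new Beta(); }
...
// At runtime, overload resolution for alpha.Foo(d) considers only
// Alpha.Foo(string); Beta.Foo(double) is never a candidate, because
// the compile-time type of the receiver is Alpha.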

Because we know that we’re going to be doing overload resolution on a method called Foo on an instance of type Alpha, we can do a “sanity check” at compile time to determine whether we know, for sure, that the call is going to fail at runtime. So we do overload resolution, but instead of doing the full overload resolution algorithm (eliminate inapplicable candidates, determine the unique best applicable candidate, perform final validation of that candidate), we do a partial overload resolution algorithm. We get as far as eliminating the inapplicable candidates, and if that leaves one or more candidates then the call is bound dynamically. If it leaves zero candidates then we report an error at compile time, because we know that nothing is going to work at runtime.
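
For example (a variation of mine on the code above): no overload of Foo takes two arguments, so every candidate is inapplicable no matter what the runtime type of d turns out to be, and the compiler can reject the call outright.

dynamic d = whatever;
Alpha alpha = MakeAlpha();
var result = alpha.Foo(d, d); // compile-time error: no applicable candidates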

Now, a seemingly reasonable question to ask at this point is: overload resolution in this case could determine that there is exactly one applicable candidate in the method group, and therefore we can determine statically that the type of result is int, so why do we instead say that the type of result is dynamic?

That appears to be a reasonable question, but think about it a bit more. If you and I and the compiler know that overload resolution is going to choose a particular method then why are we making a dynamic call in the first place? Why haven’t we cast d to string? This situation is rare, unlikely, and has an easy workaround by inserting casts appropriately (either casting the call expression to int or the argument to string). Situations that are rare, unlikely and easily worked around are poor candidates for compiler optimizations. You asked for a dynamic call, so you’re going to get a dynamic call.
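
Concretely, either of the casts mentioned above restores a statically typed result:

int result1 = alpha.Foo((string)d); // cast the argument: an ordinary static call
int result2 = (int)alpha.Foo(d);    // cast the call: the call is still dynamic,
                                    // but result2 is statically typed as int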

That’s reason enough to not do the proposed feature, but let’s think about it a bit more deeply by exploring a variation on this scenario that I glossed over above. Eta Corporation produces:

public class Eta {}

and Zeta Corporation extends this code:

public class Zeta : Eta
{
  public int Foo(string x) { ... }
}
...
dynamic d = whatever;
Zeta zeta = new Zeta();
var result = zeta.Foo(d);

Suppose we say that the type of result is int because the method group has only one member. Now suppose that in the next version, Eta Corporation supplies a new method:

public class Eta
{
  public string Foo(double x) { ... }
}

Zeta Corporation recompiles their code, and hey presto, suddenly result is of type dynamic! Why should Eta Corporation’s change to the base class cause the semantic analysis of code that uses a derived class to change? This seems unexpected. C# has been carefully designed to avoid these sorts of “Brittle Base Class” failures; see my other articles on that subject for examples of how we do that.

We can make a bad situation even worse. Suppose Eta’s change is instead:

public class Eta
{
  protected string Foo(double x) { ... }
}

Now what happens? Should we say that the type of result is int when the code appears outside of class Zeta, because overload resolution produces a single applicable candidate, but dynamic when it appears inside, because overload resolution produces two such candidates? That would be quite bizarre indeed.

The proposal is simply too much cleverness in pursuit of too little value. We’ve been asked to perform a dynamic binding, and so we’re going to perform a dynamic binding; the result should in turn be of type dynamic. The benefits of being able to statically deduce types of dynamic expressions do not pay for the costs, so we don’t attempt to do so. If you want static analysis then don’t turn it off in the first place.

Next time on FAIC: The dynamic taint of method type inference.

Is C# a strongly typed or a weakly typed language?

Presented as a dialogue, as is my wont!

Is C# a strongly typed or a weakly typed language?

Yes.

That is unhelpful.

I don’t doubt it. Interestingly, if you rephrased the question as an “and” question, the answer would be the same.

What? You mean, is C# a strongly typed and a weakly typed language?

Yes, C# is a strongly typed language and a weakly typed language.

I’m confused.

Me too. Perhaps you should tell me precisely what you mean by “strongly typed” and “weakly typed”.

Um. I don’t actually know what I mean by those terms, so perhaps that is the question I should be asking. What does it really mean for a language to be “weakly typed” or “strongly typed”?

“Weakly typed” means “this language uses a type verification system that I find distasteful”, and “strongly typed” means “this language uses a type system that I find attractive”.

No way!

Way, dude.


These terms are meaningless and you should avoid them. Wikipedia lists eleven different meanings for “strongly typed”, several of which contradict each other. Any time two people use “strongly typed” or “weakly typed” in a conversation about programming languages, odds are good that they have two subtly or grossly different meanings in their heads for those terms, and are therefore automatically talking past each other.

But surely they mean something other than “unattractive” or “attractive”!

I do exaggerate somewhat for comedic effect. So let’s say: a more-strongly-typed language is one that has some restriction in its type system that a more-weakly-typed language it is being compared to lacks. That’s all you can really say without more context.

How can I have sensible conversations about languages and their type systems then?

You can provide the missing context. Instead of using “strongly typed” and “weakly typed”, actually describe the restriction you mean. For example, C# is for the most part a statically typed language, because the compiler determines facts about the types of every expression. C# is for the most part a type safe language because it prevents values of one static type from being stored in variables of an incompatible type (and other similar type errors). And C# is for the most part a memory safe language because it prevents accidental access to bad memory.
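
To make those three restrictions concrete (the examples are mine):

string s = "hello";   // static typing: the compiler deduces and checks the type of s
// int n = s;         // type safety: error CS0029; a string cannot be stored in an int
var a = new int[3];
// a[10] = 1;         // memory safety: throws IndexOutOfRangeException at runtime
                      // rather than touching adjacent memory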

Thus, someone who thinks that “strongly typed” means “the language encourages static typing, type safety and memory safety in the vast majority of normal programs” would classify C# as a “strongly typed” language. C# is certainly more strongly typed than languages that do not have these restrictions in their type systems.

But here’s the thing: because C# is a pragmatic language there is a way to override all three of those safety systems. Cast operators and “dynamic” in C# 4 override compile-time type checking and replace it with runtime type checking, and “unsafe” blocks allow you to turn off type safety and memory safety should you need to. Someone who thinks that “strongly typed” means “the language absolutely positively guarantees static typing, type safety and memory safety under all circumstances” would quite rightly classify C# as “weakly typed”. C# is not as strongly typed as languages that do enforce these restrictions all the time.
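
And a matching sketch of the three escape hatches (again my examples; GetSomething and Frob are hypothetical):

object o = GetSomething();
string s = (string)o;   // cast: the compile-time check becomes a runtime check

dynamic d = GetSomething();
d.Frob();               // dynamic: binding is deferred entirely to runtime

unsafe
{
    int x = 42;
    int* p = &x;        // unsafe: pointers opt out of type safety and memory safety
    *p = 43;
}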

So which is it, strong or weak? It is impossible to say because it depends on the point of view of the speaker, it depends on what they are comparing it to, and it depends on their attitude towards various language features. It’s therefore best to simply avoid these terms altogether, and speak more precisely about type system features.

Next time on FAIC: What happens when a dynamic call’s method group has a single member?

High altitude

No computer programming stuff today; just some fun for Friday.

As I’m writing this Felix Baumgartner’s attempt to set the world record for skydiving height by diving from a helium balloon has been scrubbed due to bad weather. This attempt has got me thinking of my good friend JB, who back in 1982 set the world record[1. It’s in the 1988 Guinness Book of World Records.] for hang gliding height by similarly using a helium balloon.

JB is one of those people who proves the truth of the saying that you really can do anything you put your mind to, as he’s been a world-record breaking hang glider pilot, skydiver, balloonist, airplane pilot, ultra-marathon runner, shuttle astronaut candidate[2. His microgravity experiment ended up flying on the Vomit Comet rather than the shuttle.], upper-atmosphere physicist, microgravity physicist, nuclear physicist, father, and I’m probably missing a dozen more accomplishments in there. And teacher! When I was a child he taught me useful skills like how to estimate large numbers, how to do trigonometry, and how to do calculus, usually by pointing out things on the beach and then doing math in the sand, like Archimedes. How many grains of sand are on this beach? How far away is the horizon when you stand on the roof of the cottage? What shape path does this rock make in the air when you throw it? These sorts of questions fascinated me as a child, and, I suppose, still do.

Anyway, I recently learned that JB has uploaded the short film his brother Bims made to document the successful attempt at the record. Check it out, and enjoy the hairstyles of the 1980s.

Does not compute

One of the most basic ways to think about a computer program is that it is a device which takes in integers as inputs and spits out integers as outputs. The C# compiler, for example, takes in source code strings, and those source code strings are essentially nothing more than enormous binary numbers. The output of the compiler is either diagnostic text, or strings of IL and metadata, which are also just enormous binary numbers.  Because the compiler is not perfect, in some rare cases it terminates abnormally with an internal error message. But those fatal error messages are also just big binary numbers. So let’s take this as our basic model of a computer program: a computer program is a device that either (1) runs forever without producing output, or (2) computes a function that maps one integer to another.
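
To make that concrete, here is a small illustration (mine) of treating source code as a number:

using System.Numerics;
using System.Text;

string source = "class P { static void Main() { } }";
// The UTF-16 bytes of the source text, interpreted as one enormous integer:
BigInteger asNumber = new BigInteger(Encoding.Unicode.GetBytes(source));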

So here’s an interesting question: are there functions which cannot be computed, even in principle on a machine with arbitrarily much storage, by any C# program?[1. Of course, there is nothing special about C#; it is a general-purpose programming language. We’ll take as given that if there is a function that cannot be computed in C# then that function cannot be computed by any program in any programming language.]

We already know the answer to that question. Last year I pointed out that the Halting Problem is not solvable by any computer program, because the assumption that it is solvable leads to a logical contradiction. But the Halting Problem is just a function on integers. Let’s say that the input of our function H is a number which when written out in binary is a Unicode string that might contain a C# program. The output is 1 if the program is an illegal C# program, 2 if it is a legal C# program which halts, and 3 if it is a legal C# program which does not halt. If it were possible to write a program that reliably computes function H and always terminates then it would be possible to use it to solve the Halting Problem, which we’ve shown is impossible. Therefore H is not a computable function.
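
As a sketch of that reduction (hypothetical code; H cannot actually be implemented, which is the point), note that a terminating H would make a Halting Problem solver a one-liner:

using System;
using System.Numerics;
using System.Text;

static class HaltingSketch
{
    // Hypothetical: a total function computing H as described above. Returns
    // 1 for an illegal program, 2 for legal-and-halts, 3 for legal-and-runs-forever.
    static int H(BigInteger input) =>
        throw new NotSupportedException("not computable");

    // If H existed and always terminated, this would solve the Halting
    // Problem, which we already know is impossible.
    static bool Halts(string program) =>
        H(new BigInteger(Encoding.Unicode.GetBytes(program))) == 2;
}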

Let’s explore this a bit further. The “Turing Machine” model of computing is that a computer is a machine that has three kinds of storage: first, there’s a fixed amount of “internal” storage that describes the current state of the processor, second, there is arbitrarily much “external” storage in the form of paper tape, disk drives, or whatever, that can contain binary data, and third, there is some way of identifying the “current position” being manipulated in the external storage. The Turing Machine also has strict rules that describe how to change the internal state, the external state, and the current position. One of the internal states is the “start” state, and one of the internal states is the “halt” state; once the machine gets to the halting state, it stops. Otherwise, it runs forever.

Without loss of generality, let’s suppose that our Turing Machine’s external storage is arbitrarily many bits, either zero or one, and that the internal storage is some fixed number of bits, say n. This is pretty restrictive, but we haven’t actually lost anything fundamental here. Real computers of course give the appearance of manipulating storage that consists of 32 bit integers or 64 bit doubles or whatever, but at some level inside the processor, it is manipulating individual bits. There is no difference in principle between a machine that manipulates one bit at a time and a machine that manipulates 64 bits at a time; the latter is simply more convenient.
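
Here is one way to render that model in code (my encoding; nothing about it is canonical): the internal state is an integer of up to n bits, the tape is a sparse dictionary of bits, and each step consults a rule table.

using System.Collections.Generic;

sealed class TuringMachine
{
    public const int Start = 0, Halt = 1;        // two designated internal states
    public int State = Start;                    // the n-bit internal state
    public long Position;                        // current position in external storage
    public Dictionary<long, int> Tape = new();   // arbitrarily long tape of bits
    // (state, bit under the head) -> (new state, bit to write, move of +1 or -1)
    public Dictionary<(int, int), (int NewState, int Write, int Move)> Rules = new();

    public bool Halted => State == Halt;

    public void Step()
    {
        int bit = Tape.GetValueOrDefault(Position); // unwritten cells read as zero
        var rule = Rules[(State, bit)];
        Tape[Position] = rule.Write;
        State = rule.NewState;
        Position += rule.Move;
    }
}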

So then how many rules do we need to come up with for our Turing machine? A Turing machine with n bits of internal state has 2^n possible states, and there are two possibilities for the value at the “current position” in the external state.[2. Of course, we don’t need to state transitions from the halting state. We’ll ignore that unimportant detail.] So that means that there are 2^(n+1) state transition rules. Each transition rule will have to encode three things: (1) what are the n bits of the new internal state? (2) what value should the external state be changed to? and (3) how should we update the current position?

Again without loss of generality, we can update the current position by decreasing it by one, increasing it by one, or leaving it the same. And in fact, we can even eliminate the “leave it the same” mechanism! In practice that is inconvenient, but in principle always increasing or decreasing by one is enough. So those are two possibilities for how the position changes. Thus, each state transition rule is one of 2^(n+2) possible rules. There are 2^(n+1) state transition rules. Therefore the total number of possible Turing Machines that have n bits of internal storage is 2^(n+2) raised to the 2^(n+1) power, which, yes, grows pretty quickly as n gets large, but which is clearly a finite number.
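
A quick sketch of that count (mine), which gets silly very quickly; for n = 1 it is 8^4 = 4096, for n = 2 it is 16^8 = 4294967296, and so on:

using System.Numerics;

static class Counting
{
    // The count derived above: (2^(n+2)) raised to the (2^(n+1)) power.
    static BigInteger MachineCount(int n)
    {
        // 2^(n+1) transition rules; this sketch only handles n small enough
        // for the exponent to fit in an int.
        int rules = (int)BigInteger.Pow(2, n + 1);
        BigInteger choices = BigInteger.Pow(2, n + 2); // 2^(n+2) possible rules
        return BigInteger.Pow(choices, rules);
    }
}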

Each one of these n-bit Turing Machines essentially computes a function. You start it up with the external storage in a particular state and the machine either runs forever, or after some finite number of steps it halts. If it halts, then the output of the function is the value left behind in the external storage.

Again without loss of generality, let’s consider the value computed by each one of those possible Turing Machines when the external storage is initially all zeros. When given that starting configuration, each of those Turing machines either runs for some number of steps and then halts with the result, or it runs forever. Let’s ignore the ones that run forever. Of the ones that are left, that is, the ones that terminate, one of them must run the longest.[3. Of course, there could be a tie for longest, but that doesn’t matter.] That is, one of those machines that halts must have the largest number of steps taken before entering the halting state.

We therefore can come up with a function S that goes from integers to integers. The function S takes in n, the number of bits in the Turing Machine internal state, and gives you back the largest number of steps any of the possible n-bit Turing Machines that halts takes to halt. That is, S takes in the number of bits of internal storage and gives you back the amount of time you have to wait for the longest-running of the n-bit machines that actually terminates, when it is started with empty external storage.

Is S a computable function? Can we write a computer program that computes it?

Your intuition should be telling you “no”, but do you see why?









Because if S were computable then H would be computable too! All we’d have to do to compute H is to make a computer program that compiles a given C# program into a Turing Machine simulator that starts with an empty tape. We take the number of bits of state, n, of that Turing Machine, and compute S(n) in finite time. Then we run the Turing Machine simulator and if it takes more than S(n) steps then we know that it must have been one of the n-bit Turing machines that runs forever. We’d then be able to reliably compute H in finite time. Since we already know that H is not reliably computable in finite time then we know that S must not be computable either.
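
In hypothetical code (reusing the TuringMachine sketch from earlier; S, of course, cannot actually be written):

using System;
using System.Numerics;

static class BusyBeaver
{
    // Hypothetical: the largest number of steps any halting n-bit machine
    // takes before it halts, started on an all-zero tape.
    static BigInteger S(int n) =>
        throw new NotSupportedException("not computable");

    // If S were computable, this would decide halting for any machine,
    // contradicting the unsolvability of the Halting Problem.
    static bool Halts(TuringMachine m, int internalBits)
    {
        BigInteger bound = S(internalBits);
        for (BigInteger steps = 0; steps < bound && !m.Halted; steps++)
            m.Step();
        return m.Halted; // past S(n) steps without halting means it never will
    }
}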

The argument that I’m advancing here is known as the “Busy Beaver” argument because the n-bit Turing Machine that runs the longest is the “busiest beaver”. I’ve tweaked the way that it is usually presented; rather than the n-bit Turing Machine that runs the longest before terminating, the “busiest beaver” is traditionally defined as the k-state Turing Machine that produces the largest output. The two characterizations are essentially equivalent though; neither version of the function is computable.

An interesting fact about the busy beaver function (either way you characterize it) is that the function grows enormously fast. It’s easy to think of functions that grow quickly; even simple functions like n! or 2^n grow to astronomical levels for relatively small values of n, like 100. But our busiest beaver function S(n) grows faster than any computable function. That is, think of a function that grows quickly where you could write a program to compute its value in finite time; the busiest beaver function grows faster than your function, no matter how clever you are in coming up with a fast-growing function. Do you see why? You’ve got all the information you need here to work it out.[4. Of course, even if the busiest beaver function did not grow absurdly quickly, the fact that it clearly grows more than exponentially is evidence that our proposed technique for solving the Halting Problem would be impractical were it not impossible. Compiling a non-trivial C# program to a Turing Machine simulator would undoubtedly produce a machine with more than, say, 100 bits of state. There are an enormous number of possible Turing Machines with 100 bits of internal state, and the one that runs the longest before it halts undoubtedly runs longer than the universe will last.]

Next time on FAIC: Severe hang time!

How do we ensure that method type inference terminates?

Here’s a question I got from a coworker recently:

It is obviously important that the C# compiler not go into infinite loops. How do we ensure that the method type inference algorithm terminates?

The answer is quite straightforward actually, but if you are not familiar with method type inference then this article is going to make no sense. You might want to watch this video if you need a refresher.