A method group of one

I’m implementing the semantic analysis of dynamic expressions in Roslyn this week, so I’m fielding a lot of questions within the team on the design of the dynamic feature of C# 4. A question I get fairly frequently in this space is as follows:

public class Alpha
{
  public int Foo(string x) { ... }
}
  ...
  dynamic d = whatever;
  Alpha alpha = MakeAlpha();
  var result = alpha.Foo(d);

How is this analyzed? More specifically, what’s the type of local result?

If the receiver (that is, alpha) of the call were of type dynamic then there would be little we could do at compile time. We’d analyze the compile-time types of the arguments and emit a dynamic call site that caused the semantic analysis to be performed at runtime, using the runtime type of the dynamic expression. But that’s not the case here. We know at compile time what the type of the receiver is. One of the design principles of the C# dynamic feature is that if we have a type that is known at compile time, then at runtime the type analysis honours that. In other words, we use the runtime types only of the things that were actually dynamic; for everything else we use the compile-time type. If MakeAlpha() returns a class derived from Alpha, and that derived class has more overloads of Foo, we don’t care.
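
To make this concrete, here is a minimal sketch; the Beta class and the method bodies below are invented for illustration and are not part of the original example. Even if MakeAlpha() happens to return a derived object with an extra overload of Foo, the dynamic call site only ever considers the overloads visible on the compile-time type Alpha:

public class Alpha
{
  public int Foo(string x) { return x.Length; }
}
public class Beta : Alpha
{
  // An extra overload that exists only on the derived type.
  public int Foo(double x) { return (int)x; }
}
  ...
  static Alpha MakeAlpha() { return new Beta(); }
  ...
  dynamic d = 1.5;
  Alpha alpha = MakeAlpha();
  // The compile-time type of the receiver is Alpha, so at runtime only
  // Alpha.Foo(string) is a candidate; Beta.Foo(double) is never considered,
  // and this call fails at runtime even though the object is really a Beta.
  var result = alpha.Foo(d);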

Because we know that we’re going to be doing overload resolution on a method called Foo on an instance of type Alpha, we can do a “sanity check” at compile time to determine whether we know for sure that this call is going to fail at runtime. So we do overload resolution, but instead of doing the full overload resolution algorithm (eliminate inapplicable candidates, determine the unique best applicable candidate, perform final validation of that candidate), we do a partial overload resolution algorithm. We get as far as eliminating the inapplicable candidates, and if that leaves one or more candidates then the call is bound dynamically. If it leaves zero candidates then we report an error at compile time, because we know that nothing is going to work at runtime.
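
As an illustrative sketch (the Omega class is hypothetical, invented just to show both outcomes): a one-argument call eliminates the two-parameter overload but leaves the one-parameter overload, so it binds dynamically; a three-argument call eliminates every candidate, so it is an error at compile time even though the arguments are dynamic.

public class Omega
{
  public int Foo(string x) { return 1; }
  public int Foo(string x, string y) { return 2; }
}
  ...
  dynamic d = whatever;
  Omega omega = new Omega();
  var ok = omega.Foo(d);        // one candidate survives; bound dynamically, ok is of type dynamic
  // var bad = omega.Foo(d, d, d); // zero candidates survive; reported as an error at compile time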

Now, a seemingly reasonable question to ask at this point is: overload resolution in this case could determine that there is exactly one applicable candidate in the method group, and therefore we can determine statically that the type of result is int, so why do we instead say that the type of result is dynamic?

That appears to be a reasonable question, but think about it a bit more. If you and I and the compiler know that overload resolution is going to choose a particular method then why are we making a dynamic call in the first place? Why haven’t we cast d to string? This situation is rare, unlikely, and has an easy workaround by inserting casts appropriately (either casting the call expression to int or the argument to string). Situations that are rare, unlikely and easily worked around are poor candidates for compiler optimizations. You asked for a dynamic call, so you’re going to get a dynamic call.
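
For example, reusing the alpha and d locals from the snippet above, either of these casts restores a statically typed int result:

  int viaArgumentCast = alpha.Foo((string)d); // bind the call statically by casting the argument
  int viaResultCast = (int)alpha.Foo(d);      // keep the dynamic call but cast its result to int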

That’s reason enough to not do the proposed feature, but let’s think about it a bit more deeply by exploring a variation on this scenario that I glossed over above. Eta Corporation produces:

public class Eta {}

and Zeta Corporation extends this code:

public class Zeta : Eta
{
  public int Foo(string x){ ... }
}
  ...
  dynamic d = whatever;
  Zeta zeta = new Zeta();
  var result = zeta.Foo(d);

Suppose we say that the type of result is int because the method group has only one member. Now suppose that in the next version, Eta Corporation supplies a new method:

public class Eta
{
  public string Foo(double x){...}
}

Zeta Corporation recompiles their code, and hey presto, suddenly result is of type dynamic! Why should Eta Corporation’s change to the base class cause the semantic analysis of code that uses a derived class to change? This seems unexpected. C# has been carefully designed to avoid these sorts of “Brittle Base Class” failures; see my other articles on that subject for examples of how we do that.

We can make a bad situation even worse. Suppose Eta’s change is instead:

public class Eta
{
  protected string Foo(double x){...}
}

Now what happens? Should we say that the type of result is int when the code appears outside of class Zeta, because overload resolution produces a single applicable candidate, but dynamic when it appears inside, because overload resolution produces two such candidates? That would be quite bizarre indeed.

The proposal is simply too much cleverness in pursuit of too little value. We’ve been asked to perform a dynamic binding, and so we’re going to perform a dynamic binding; the result should in turn be of type dynamic. The benefit of being able to statically deduce the types of dynamic expressions does not pay for the costs, so we don’t attempt to do so. If you want static analysis then don’t turn it off in the first place.


Next time on FAIC: The dynamic taint of method type inference.

Is C# a strongly typed or a weakly typed language?

Presented as a dialogue, as is my wont!

Is C# a strongly typed or a weakly typed language?

Yes.

That is unhelpful.

I don’t doubt it. Interestingly, if you rephrased the question as an “and” question, the answer would be the same.

What? You mean, is C# a strongly typed and a weakly typed language?

Yes, C# is a strongly typed language and a weakly typed language.

I’m confused.

Me too. Perhaps you should tell me precisely what you mean by “strongly typed” and “weakly typed”.

Um. I don’t actually know what I mean by those terms, so perhaps that is the question I should be asking. What does it really mean for a language to be “weakly typed” or “strongly typed”?

“Weakly typed” means “this language uses a type verification system that I find distasteful”, and “strongly typed” means “this language uses a type system that I find attractive”.

No way!

Way, dude.

Really?

These terms are meaningless and you should avoid them. Wikipedia lists eleven different meanings for “strongly typed”, several of which contradict each other. Any time two people use “strongly typed” or “weakly typed” in a conversation about programming languages, odds are good that they have two subtly or grossly different meanings in their heads for those terms, and are therefore automatically talking past each other.

But surely they mean something other than “unattractive” or “attractive”!

I do exaggerate somewhat for comedic effect. So let’s say: a more-strongly-typed language is one that has some restriction in its type system that a more-weakly-typed language it is being compared to lacks. That’s all you can really say without more context.

How can I have sensible conversations about languages and their type systems then?

You can provide the missing context. Instead of using “strongly typed” and “weakly typed”, actually describe the restriction you mean. For example, C# is for the most part a statically typed language, because the compiler determines facts about the types of every expression. C# is for the most part a type safe language because it prevents values of one static type from being stored in variables of an incompatible type (and other similar type errors). And C# is for the most part a memory safe language because it prevents accidental access to bad memory.

Thus, someone who thinks that “strongly typed” means “the language encourages static typing, type safety and memory safety in the vast majority of normal programs” would classify C# as a “strongly typed” language. C# is certainly more strongly typed than languages that do not have these restrictions in their type systems.

But here’s the thing: because C# is a pragmatic language there is a way to override all three of those safety systems. Cast operators and “dynamic” in C# 4 override compile-time type checking and replace it with runtime type checking, and “unsafe” blocks allow you to turn off type safety and memory safety should you need to. Someone who thinks that “strongly typed” means “the language absolutely positively guarantees static typing, type safety and memory safety under all circumstances” would quite rightly classify C# as “weakly typed”. C# is not as strongly typed as languages that do enforce these restrictions all the time.
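
Here is a rough sketch of those three escape hatches; the variable names are invented for illustration, and the pointer code compiles only when unsafe blocks are enabled:

  object boxed = "hello";
  string s = (string)boxed; // cast: compile-time checking is replaced by a runtime type check

  dynamic d = "world";
  int length = d.Length;    // dynamic: member lookup and overload resolution are deferred to runtime

  unsafe
  {
    int value = 42;
    int* p = &value;        // unsafe: type safety and memory safety are off inside this block
    *p = 100;
  }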

So which is it, strong or weak? It is impossible to say because it depends on the point of view of the speaker, it depends on what they are comparing it to, and it depends on their attitude towards various language features. It’s therefore best to simply avoid these terms altogether, and speak more precisely about type system features.


Next time on FAIC: What happens when a dynamic call’s method group has a single member?

High altitude

No computer programming stuff today; just some fun for Friday.

As I’m writing this, Felix Baumgartner’s attempt to set the world record for skydiving height by diving from a helium balloon has been scrubbed due to bad weather. This attempt has got me thinking of my good friend JB, who back in 1982 set the world record for hang gliding height by similarly using a helium balloon (it’s in the 1988 Guinness Book of World Records).

JB is one of those people who proves the truth of the saying that you really can do anything you put your mind to, as he’s been a world-record-breaking hang glider pilot, skydiver, balloonist, airplane pilot, ultra-marathon runner, shuttle astronaut candidate (his microgravity experiment ended up flying on the Vomit Comet rather than the shuttle), upper-atmosphere physicist, microgravity physicist, nuclear physicist, father, and I’m probably missing a dozen more accomplishments in there. And teacher! When I was a child he taught me useful skills like how to estimate large numbers, how to do trigonometry, and how to do calculus, usually by pointing out things on the beach and then doing math in the sand, like Archimedes. How many grains of sand are on this beach? How far away is the horizon when you stand on the roof of the cottage? What shape path does this rock make in the air when you throw it? These sorts of questions fascinated me as a child, and, I suppose, still do.

Anyway, I recently learned that JB has uploaded the short film his brother Bims made to document the successful attempt at the record. Check it out, and enjoy the hairstyles of the 1980s.

Does not compute

One of the most basic ways to think about a computer program is that it is a device which takes in integers as inputs and spits out integers as outputs. The C# compiler, for example, takes in source code strings, and those source code strings are essentially nothing more than enormous binary numbers. The output of the compiler is either diagnostic text, or strings of IL and metadata, which are also just enormous binary numbers.  Because the compiler is not perfect, in some rare cases it terminates abnormally with an internal error message. But those fatal error messages are also just big binary numbers. So let’s take this as our basic model of a computer program: a computer program is a device that either (1) runs forever without producing output, or (2) computes a function that maps one integer to another.
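
As a tiny sketch of that model (the method names here are invented for illustration), one method computes a function that maps one integer to another, and the other runs forever without producing output:

  // Case (2): computes a function mapping one integer to another.
  static int Successor(int input) { return input + 1; }

  // Case (1): runs forever without producing any output.
  static int RunsForever(int input) { while (true) { } }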

How do we ensure that method type inference terminates?

Here’s a question I got from a coworker recently:

It is obviously important that the C# compiler not go into infinite loops. How do we ensure that the method type inference algorithm terminates?

The answer is quite straightforward actually, but if you are not familiar with method type inference then this article is going to make no sense. You might want to watch this video if you need a refresher.