Wizards and warriors, part five

We’ve been struggling in the last four episodes to encode the rules of our business domain — which, recall, could be wizards and warriors or papers and paycheques or whatever — into the C# type system. The tool we’ve chosen seems to be resisting our attempts, and so maybe it’s a good time to take a step back and ask if we’re on the right track in the first place.

The fundamental idea in the first and second episodes was to use the type system to detect and prevent violations of the rules of the business domain at compile time. That effort has largely failed, due to the difficulty of representing a subtype with a restriction, like “a Wizard is a Player that cannot use a Sword”. In several of our attempts we ended up throwing exceptions, so that the rule was enforced by the runtime rather than the compiler. What is the nature of this exception?
Continue reading

Wizards and warriors, part four

Last time we saw that in order to decide what code to call based on the runtime type of one argument — single dispatch — we could use virtual dispatch. And we saw that we could use the inaptly named Visitor Pattern to emulate double dispatch by doing a series of virtual and non-virtual dispatches. This works, but it has some drawbacks. It’s heavyweight, the pattern is difficult to understand, and it doesn’t extend easily to true multiple dispatch.

I said last time that C# does not support double dispatch. That was a bald-faced lie! In fact C# supports multiple dispatch; you can dispatch a method call based on the runtime types of arbitrarily many arguments. Here, let’s dispatch based on the runtime types of two arguments:
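Here is a minimal sketch of the technique (the class and method names are illustrative inventions of mine, not necessarily the code the full post develops): converting the arguments to dynamic defers overload resolution to runtime, where the runtime types of both arguments are used.

using System;

abstract class Player { }
sealed class Warrior : Player { }
sealed class Wizard : Player { }

abstract class Monster { }
sealed class Werewolf : Monster { }
sealed class Vampire : Monster { }

static class Game
{
    // Overloads distinguished by the declared types of *both* parameters:
    static string Attack(Warrior p, Werewolf m) => "swings a silver sword";
    static string Attack(Wizard p, Vampire m) => "casts a sunlight spell";
    static string Attack(Player p, Monster m) => "attacks generically";

    // The casts to dynamic make the binder pick an overload at runtime
    // based on the runtime types of both arguments: double dispatch.
    public static string Dispatch(Player p, Monster m) =>
        Attack((dynamic)p, (dynamic)m);
}

class Program
{
    static void Main()
    {
        Console.WriteLine(Game.Dispatch(new Warrior(), new Werewolf())); // silver sword
        Console.WriteLine(Game.Dispatch(new Wizard(), new Werewolf()));  // generic attack
    }
}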
Continue reading

Wizards and warriors, part three

So let’s digress for a few episodes here. We’ll temporarily leave aside the problem of how we can have both a Player that has a Weapon and a Wizard that has a Staff. (Or Dagger.) Supposing that we can figure out how to get that all represented, here’s another problem. Suppose we’ve also got Werewolf and Vampire classes that are kinds of Monster. We want a rule that says that if a Warrior tries to hit a Werewolf after midnight then the probability of success is lowered. (Wizards have no such penalty because… magic? Work with me here.)

Wait a moment — isn’t it always after midnight? When could you safely feed a mogwai, anyway?
Continue reading

What is up with transparent identifiers? Part two

This will be my last post before I head off for my annual vacation in Canada; see you again in September for more Fabulous Adventures in Coding!


Last time on FAIC I suggested a rule for translating nested “from” query expressions into a much simpler form than the C# specification requires. Why does the C# specification not use my simplified form?

In fact what I showed yesterday is pretty close to what the LINQ translation rules for SelectMany queries looked like shortly before shipping C# 3.0. The problem with it becomes apparent when you consider the following: Continue reading

Closer is better

Overload resolution is of course the process of taking a bunch of things with the same name and figuring out which of them the user meant. Different languages use different heuristics to try to figure this out. A “heuristic” is just a fancy word for a guess, and I’ve often said that one of the design characteristics of C# is that it is not a “guess what the user meant” kind of language. So if C# is going to make guesses, at least the process by which it does so should be easily explainable to users. Continue reading

ATBG: method type inference with multiple interfaces

Today on the Coverity Development Testing Blog’s continuing series Ask The Bug Guys, I take a question from an “Eric L”, who is confused about one of the subtle rules of method type inference despite having written the rule himself. My colleague Jon takes a question from a beginner C programmer about memory allocation.

As always, if you have questions about a bug you’ve found in a C, C++, C# or Java program that you think would make a good episode of ATBG, please send your question along with a small reproducer of the problem to TheBugGuys@Coverity.com. We cannot promise to answer every question or solve every problem, but we’ll take a selection of the best questions that we can answer and address them on the dev testing blog every couple of weeks.

ATBG: Reflection and default parameters

We have two posts today on the Coverity Development Testing Blog’s continuing series Ask The Bug Guys. First, my colleague Jon explores a tricky difference between the 1989 and 1999 C standards involving conversions of array types to pointer types that can cause undefined behavior if you’re not careful. Then I discuss why constructors (or any other method, for that matter) with default parameters do not play nicely with Reflection.

Thanks to readers Dennis and Laurence for these interesting questions. If you have a question about a bug in your C, C++, C# or Java program, please send it to TheBugGuys@coverity.com; we’d love to see it. We can’t guarantee an answer to all your problems, but we will pick a selection of the best questions and post about them on the development testing blog. Past episodes can be found here, and the RSS feed for the blog is here.

A contravariance conundrum

Suppose we have my usual hierarchy of types, Animal, Giraffe, etc., with the obvious type relationships. An IEqualityComparer<T> is contravariant in its type parameter; if we have a device which can compare two Animals for equality then it can compare two Giraffes for equality as well. So why does this code fail to compile?

IEqualityComparer<Animal> animalComparer = whatever;
IEnumerable<Giraffe> giraffes = whatever;
IEnumerable<Giraffe> distinct = giraffes.Distinct(animalComparer);

This illustrates a subtle and slightly unfortunate design choice in the method type inference algorithm, which of course was designed long before covariance and contravariance were added to the language.
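Before clicking through, here is one workaround, shown as a sketch (the post itself may analyze the problem differently): supply the type argument explicitly so that method type inference never runs. Because IEnumerable<T> is covariant, the sequence of Giraffes converts to the IEnumerable<Animal> that Distinct<Animal> expects; the price is that the result is typed as IEnumerable<Animal> rather than IEnumerable<Giraffe>.

IEnumerable<Animal> distinct = giraffes.Distinct<Animal>(animalComparer);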

Continue reading

Nullable micro-optimization, part two

I hope you’ve all had a pleasant Christmas; I sure did, though once again I was unable to return to Waterloo region to visit my family. Hopefully I’ll make it for Easter this coming year.

Last time on FAIC I described why calling GetValueOrDefault() instead of Value allows the jitter to generate smaller, faster code. Of course this optimization is, first, tiny, and second, only valid in the case where you are certain that the nullable value is not actually null. Over the next few episodes I’ll describe how the C# compiler uses that fact to generate better code for you, but in order to do that, I first need to talk a bit about lifted arithmetic.
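To see why GetValueOrDefault() is cheaper, recall roughly how Nullable<T> is implemented; this is a simplification of the actual framework source, not a verbatim copy:

public struct Nullable<T> where T : struct
{
    private readonly bool hasValue;
    private readonly T value;

    // No check at all: just returns the field, which is default(T)
    // when the nullable is null.
    public T GetValueOrDefault() => value;

    // Must test the flag and potentially throw before returning.
    public T Value
    {
        get
        {
            if (!hasValue)
                throw new InvalidOperationException("Nullable object must have a value.");
            return value;
        }
    }
}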

Back in 2007 I described what mathematicians mean by “lifted arithmetic”, and how the C# specification uses this term in a subtly wrong way. It’s been a long time, so here’s a quick refresher. Mathematically, by “lifted” we mean that if there is a function f : S → S, and we make a new set S' = S ∪ { null }, then the lifted function f' : S' → S' is defined by f'(null) = null and f'(s) = f(s) for all s ∈ S. Or, in English, the lifted function gives null when given null, and agrees with the unlifted function otherwise.

We then extend the definition of “lifted” to functions of the form f : S → T in the obvious manner: the lifted function is f' : S' → T'. Similarly for functions of two, three or more parameters: the lifted function is null if any argument is null, and agrees with the unlifted function otherwise.

Lifted arithmetic operators in C# work similarly. In C#, if there is an operator, let’s say the unary ~ operator that takes an int and produces an int, then there is also a lifted ~ operator that takes an int? and produces an int?. The lifted operator produces null if given null, and otherwise agrees with the unlifted operator.
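For example, here is a quick sketch of that behavior with the built-in lifted ~ operator:

int? a = 123;
int? b = null;
int? x = ~a; // -124: agrees with the unlifted operator
int? y = ~b; // null: null in, null out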

Some so-called “lifted” operators do not follow this pattern, but for the purposes of this series we’ll mostly be talking about the ones that do.


I want to make a brief aside here to discuss how the C# compiler knows to use a lifted operator in the first place. The answer is straightforward: it uses overload resolution.

Continuing our example, when you say ~x, the compiler pretends that you did a method call operator~(x) and creates a candidate set that consists of “methods” corresponding to the signatures of the user-defined and built-in ~ operators. If overload resolution produces a unique best applicable operator then it is chosen and the operand is implicitly converted to the “parameter type” of the chosen “operator method”, otherwise the compiler produces an error. That’s an oversimplification; consult the specification for the exact details.
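To make the candidate set concrete, here is a minimal sketch of my own (the Bits struct is hypothetical, not from the specification or the post): a user-defined operator on a struct automatically contributes a lifted counterpart, and overload resolution picks it when the operand is nullable.

using System;

struct Bits
{
    public uint Raw;
    public Bits(uint raw) { Raw = raw; }

    // A user-defined ~ on a non-nullable struct...
    public static Bits operator ~(Bits b) { return new Bits(~b.Raw); }
}

class Demo
{
    static void Main()
    {
        // ...also produces a lifted ~ taking Bits? and returning Bits?,
        // which joins the candidate set and wins for a Bits? operand.
        Bits? some = new Bits(0x0F);
        Bits? none = null;
        Console.WriteLine((~some).Value.Raw.ToString("X8")); // FFFFFFF0
        Console.WriteLine((~none).HasValue);                 // False
    }
}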

Unfortunately, the specification sections on operator overload resolution are not strictly speaking entirely accurate: there are some known discrepancies between the compiler and the specification. In some of these cases the compiler is wrong and in some the specification is wrong. The areas with small discrepancies include (1) precisely when a user-defined operator is considered to be “liftable” and what the resulting semantics are, (2) how the candidate set for operators on enumerated and delegate types are determined, and (3) how the “betterness” rules treat lifted operators.

Mads and I have attempted a number of times to come up with better spec language, but I don’t think the proposed changes made it into the latest revision. I might do blog articles on these interesting and difficult corner cases in the future.

The important fact that will come into play later in this series is that if overload resolution chooses a lifted operator then the operand is implicitly converted to the nullable type. Just like how when normal overload resolution chooses a method, the arguments are implicitly converted to the corresponding formal parameter types.
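A mixed arithmetic expression shows that conversion in action (a quick sketch):

int a = 1;
int? b = null;
int? sum = a + b; // the lifted + is chosen; a is implicitly converted
                  // to int?, and the result is null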


Returning now to the subject at hand: how does the C# compiler generate code for a lifted operator? When you say:

int? y = ~x;

what happens? Let’s suppose that x is a legal expression of type int?, just to keep it easy. Overload resolution determines that the lifted ~ operator that takes an int? and produces an int? is the unique best applicable operator. The expression is already of the correct type. Now, you might naively think that the compiler would pretend that you’d typed:

int? y = x.HasValue ? ~x.Value : null;

but of course that code is wrong in two ways.

First, it doesn’t compile, because the type of the conditional operator expression cannot be determined: the consequence ~x.Value is of type int, the alternative null has no type, there is no implicit conversion from null to int, and the compiler does not go hunting for a third type such as int? that both branches could convert to.

Astonishingly, I’ve never written a blog article about this specific aspect of the conditional operator, though it has certainly come up on StackOverflow a lot. This is probably the blog article that came the closest to describing this common problem.

And second, what if the expression x has a side effect? We would not want to generate

int? y = ~M(++i);

as:

int? y = M(++i).HasValue ? ~M(++i).Value : null;

because then the variable gets incremented twice and the method gets called twice if the result of the first call is not null. And of course the value returned the second time might be different! We can fix these two problems easily enough:

int? y;
int? temp = x;
y = temp.HasValue ? new int?(~temp.Value) : new int?();

And now we’re good.

At this point the C# compiler can say “but wait a moment! If we are on the ‘consequence’ branch of the conditional operator then we know for sure that temp.HasValue is true.” Therefore the compiler can generate the more efficient code:

int? y;
int? temp = x;
y = temp.HasValue ? new int?(~temp.GetValueOrDefault()) : new int?();

Which is in fact what both the “original recipe” compiler and the “extra crispy” Roslyn compiler do. The saving is tiny, but it is real, and these savings add up as the expressions get more and more complicated, as we’ll see.


Next time on FAIC: Is that the only optimization a C# compiler can perform when generating code for lifted arithmetic? Of course not! In the next few episodes we’ll look at some ways the compiler can be more clever, and compare the Roslyn compiler’s heuristics to the previous compiler’s heuristics. Happy New Year all, and we’ll see you in 2013 for more fabulous adventures.

A method group of one

I’m implementing the semantic analysis of dynamic expressions in Roslyn this week, so I’m fielding a lot of questions within the team on the design of the dynamic feature of C# 4. A question I get fairly frequently in this space is as follows:

public class Alpha
{
  public int Foo(string x) { ... }
}
  ...
  dynamic d = whatever;
  Alpha alpha = MakeAlpha();
  var result = alpha.Foo(d);

How is this analyzed? More specifically, what’s the type of the local variable result?

If the receiver (that is, alpha) of the call were of type dynamic then there would be little we could do at compile time. We’d analyze the compile-time types of the arguments and emit a dynamic call site that caused the semantic analysis to be performed at runtime, using the runtime type of the dynamic expression. But that’s not the case here. We know at compile time what the type of the receiver is. One of the design principles of the C# dynamic feature is that if we have a type that is known at compile time, then at runtime the type analysis honours that. In other words, we use the runtime types only of the things that were actually dynamic; for everything else we use the compile-time types. If MakeAlpha() returns a class derived from Alpha, and that derived class has more overloads of Foo, we don’t care.

Because we know that we’re going to be doing overload resolution on a method called Foo on an instance of type Alpha, we can do a “sanity check” at compile time to determine whether we know for sure that this is going to fail at runtime. So we do overload resolution, but instead of doing the full overload resolution algorithm (eliminate inapplicable candidates, determine the unique best applicable candidate, perform final validation of that candidate), we do a partial overload resolution algorithm. We get as far as eliminating the inapplicable candidates, and if that leaves one or more candidates then the call is bound dynamically. If it leaves zero candidates then we report an error at compile time, because we know that nothing is going to work at runtime.
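For instance, continuing the snippet above (a sketch of the behavior):

var ok = alpha.Foo(d);     // compiles: Foo(string) survives the partial
                           // overload resolution, so this binds dynamically
var bad = alpha.Foo(d, d); // compile-time error: no overload of Foo takes
                           // two arguments, so no call could ever succeed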

Now, a seemingly reasonable question to ask at this point is: overload resolution in this case could determine that there is exactly one applicable candidate in the method group, and therefore we can determine statically that the type of result is int, so why do we instead say that the type of result is dynamic?

That appears to be a reasonable question, but think about it a bit more. If you and I and the compiler know that overload resolution is going to choose a particular method then why are we making a dynamic call in the first place? Why haven’t we cast d to string? This situation is rare, unlikely, and has an easy workaround by inserting casts appropriately (either casting the call expression to int or the argument to string). Situations that are rare, unlikely and easily worked around are poor candidates for compiler optimizations. You asked for a dynamic call, so you’re going to get a dynamic call.
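Concretely, the two workarounds look like this (a sketch continuing the snippet above):

int result1 = (int)alpha.Foo(d);    // still a dynamic call; the result is
                                    // asserted to be int
int result2 = alpha.Foo((string)d); // statically bound call; the result is
                                    // int at compile time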

That’s reason enough to not do the proposed feature, but let’s think about it a bit more deeply by exploring a variation on this scenario that I glossed over above. Eta Corporation produces:

public class Eta {}

and Zeta Corporation extends this code:

public class Zeta : Eta
{
  public int Foo(string x){ ... }
}
  ...
  dynamic d = whatever;
  Zeta zeta = new Zeta();
  var result = zeta.Foo(d);

Suppose we say that the type of result is int because the method group has only one member. Now suppose that in the next version, Eta Corporation supplies a new method:

public class Eta
{
  public string Foo(double x){...}
}

Zeta Corporation recompiles their code, and hey presto, suddenly result is of type dynamic! Why should Eta Corporation’s change to the base class cause the semantic analysis of code that uses a derived class to change? This seems unexpected. C# has been carefully designed to avoid these sorts of “Brittle Base Class” failures; see my other articles on that subject for examples of how we do that.

We can make a bad situation even worse. Suppose Eta’s change is instead:

public class Eta
{
  protected string Foo(double x){...}
}

Now what happens? Should we say that the type of result is int when the code appears outside of class Zeta, because overload resolution produces a single applicable candidate, but dynamic when it appears inside, because overload resolution produces two such candidates? That would be quite bizarre indeed.

The proposal is simply too much cleverness in pursuit of too little value. We’ve been asked to perform a dynamic binding, and so we’re going to perform a dynamic binding; the result should in turn be of type dynamic. The benefit of being able to statically deduce the types of dynamic expressions does not pay for the costs, so we don’t attempt to do so. If you want static analysis then don’t turn it off in the first place.


Next time on FAIC: The dynamic taint of method type inference.