# Category Archives: Overload resolution

Overload resolution is, of course, the process of taking a bunch of things with the same name and figuring out which of them the user meant. Different languages use different heuristics to try to figure this out. A “heuristic” is just a fancy word for a guess, and I’ve often said that one of the design characteristics of C# is that it is not a “guess what the user meant” kind of language. So if C# is going to make guesses, at least the process by which it does so should be easily explainable to users. Continue reading

# ATBG: method type inference with multiple interfaces

Today on the Coverity Development Testing Blog’s continuing series **Ask The Bug Guys**, I take a question from one “Eric L”, who is confused about one of the subtle rules of method type inference *despite having written the rule himself*. My colleague Jon takes a question from a beginner C programmer about memory allocation.

As always, if you have questions about a bug you’ve found in a C, C++, C# or Java program that you think would make a good episode of ATBG, please send your question along with a small reproducer of the problem to `TheBugGuys@Coverity.com`. We cannot promise to answer every question or solve every problem, but we’ll take a selection of the best questions that we can answer and address them on the dev testing blog every couple of weeks.

# ATBG: Reflection and default parameters

We have two posts today on the Coverity Development Testing Blog’s continuing series **Ask The Bug Guys**. First, my colleague Jon explores a tricky difference between the 1989 and 1999 C standards involving conversions of array types to pointer types that can cause undefined behavior if you’re not careful. Then I discuss why constructors (or any other methods, for that matter) with default parameters do not play nicely with reflection.

Thanks to readers Dennis and Laurence for these interesting questions. If you have a question about a bug in your C, C++, C# or Java program, please send it to `TheBugGuys@coverity.com`; we’d love to see it. We can’t guarantee an answer to all your problems, but we will pick a selection of the best questions and post about them on the development testing blog. Past episodes can be found here, and the RSS feed for the blog is here.
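For what it’s worth, the pattern usually cited for getting default values honoured through reflection is the `Type.Missing`/`OptionalParamBinding` dance. A minimal sketch, with an invented `C.Greet` method; plain `Invoke` with too few arguments simply throws:

```csharp
using System;
using System.Globalization;
using System.Reflection;

public static class C
{
    public static string Greet(string name = "world") => "hello " + name;
}

public class Program
{
    public static void Main()
    {
        MethodInfo mi = typeof(C).GetMethod("Greet");

        // Plain Invoke does not fill in defaults; passing too few arguments
        // throws TargetParameterCountException. Instead, pass Type.Missing
        // and opt in with BindingFlags.OptionalParamBinding:
        string s = (string)mi.Invoke(
            null,
            BindingFlags.OptionalParamBinding,
            null,
            new object[] { Type.Missing },
            CultureInfo.InvariantCulture);

        Console.WriteLine(s); // hello world
    }
}
```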

# A contravariance conundrum

Suppose we have my usual hierarchy of types, `Animal`, `Giraffe`, etc., with the obvious type relationships. An `IEqualityComparer<T>` is contravariant in its type parameter; if we have a device which can compare two `Animal`s for equality then it can compare two `Giraffe`s for equality as well. So why does this code fail to compile?

```csharp
IEqualityComparer<Animal> animalComparer = whatever;
IEnumerable<Giraffe> giraffes = whatever;
IEnumerable<Giraffe> distinct = giraffes.Distinct(animalComparer);
```

This illustrates a subtle and slightly unfortunate design choice in the method type inference algorithm, which of course was designed long before covariance and contravariance were added to the language.
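The standard workaround is to sidestep inference entirely by supplying the type argument yourself and letting the contravariant conversion of the comparer do the rest. A runnable sketch, with stand-in `Animal`/`Giraffe` types:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Animal { }
public class Giraffe : Animal { }

// Compares any two Animals by reference identity; contravariance makes it
// usable wherever an IEqualityComparer<Giraffe> is expected.
public class AnimalComparer : IEqualityComparer<Animal>
{
    public bool Equals(Animal x, Animal y) => ReferenceEquals(x, y);
    public int GetHashCode(Animal a) => 0;
}

public class Program
{
    public static void Main()
    {
        IEqualityComparer<Animal> animalComparer = new AnimalComparer();
        IEnumerable<Giraffe> giraffes = new[] { new Giraffe(), new Giraffe() };

        // Fails to compile: method type inference goes wrong here.
        // IEnumerable<Giraffe> distinct = giraffes.Distinct(animalComparer);

        // Workaround: state the type argument explicitly; the comparer
        // converts contravariantly from IEqualityComparer<Animal>.
        IEnumerable<Giraffe> distinct = giraffes.Distinct<Giraffe>(animalComparer);
        Console.WriteLine(distinct.Count()); // 2
    }
}
```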

# Nullable micro-optimization, part two

I hope you’ve all had a pleasant Christmas; I sure did, though once again I was unable to return to Waterloo region to visit my family. Hopefully I’ll make it for Easter this coming year.

Last time on FAIC I described why calling `GetValueOrDefault()` instead of `Value` allows the jitter to generate smaller, faster code. Of course this optimization is, first, tiny, and second, valid only in the case where you are certain that the nullable value is not actually null. Over the next few episodes I’ll describe how the C# compiler uses that fact to generate better code for you, but in order to do that, I first need to talk a bit about lifted arithmetic.

Back in 2007 I described what mathematicians mean by “lifted arithmetic”, and how the C# specification uses this term in a subtly wrong way. It’s been a long time, so here’s a quick refresher. Mathematically, by “lifted” we mean that if there is a function `f : S → S`, and we make a new set `S' = S ∪ { null }`, then the lifted function `f' : S' → S'` is defined as `f'(null) → null, f'(s ∈ S) → f(s)`. Or, in English, the lifted function gives null when given null, and agrees with the unlifted function otherwise.[1. We then extend the definition of "lifted" to functions of the form `f : S → T` in the obvious manner: the lifted function is `f' : S' → T'`. Similarly for functions of two, three or more parameters: the lifted function is null if any argument is null, and agrees with the unlifted function otherwise.]

Lifted arithmetic operators in C# work similarly. In C#, if there is an operator, let’s say the unary `~` operator that takes an `int` and produces an `int`, then there is also a lifted `~` operator that takes an `int?` and produces an `int?`. The lifted operator produces null if given null, and otherwise agrees with the unlifted operator.
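For example, the built-in lifted `~` behaves like this:

```csharp
using System;

public class Program
{
    public static void Main()
    {
        int? five = 5;
        int? nothing = null;

        int? a = ~five;     // the lifted ~ agrees with the unlifted operator
        int? b = ~nothing;  // null in, null out

        Console.WriteLine(a);          // -6
        Console.WriteLine(b.HasValue); // False
    }
}
```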

Some so-called “lifted” operators do not follow this pattern, but for the purposes of this series we’ll mostly be talking about the ones that do.

I want to make a brief aside here to discuss *how* the C# compiler knows to use a lifted operator in the first place. The answer is straightforward: it uses **overload resolution**. Continuing our example, when you say `~x`, the compiler pretends that you did a method call `operator~(x)` and creates a candidate set that consists of “methods” corresponding to the signatures of the user-defined and built-in `~` operators. If overload resolution produces a unique best applicable operator then it is chosen and the operand is implicitly converted to the “parameter type” of the chosen “operator method”; otherwise the compiler produces an error. That’s an oversimplification; consult the specification for the exact details.[2. Unfortunately, the specification sections on operator overload resolution are not, strictly speaking, entirely accurate: there are some known discrepancies between the compiler and the specification. In some of these cases the compiler is wrong and in some the specification is wrong. The areas with small discrepancies include (1) precisely when a user-defined operator is considered to be "liftable" and what the resulting semantics are, (2) how the candidate sets for operators on enumerated and delegate types are determined, and (3) how the "betterness" rules treat lifted operators. Mads and I have a number of times attempted to come up with better spec language, but I don't think the proposed changes made it into the latest revision. I might choose to do blog articles on these interesting and difficult corner cases in the future.] The important fact that will come into play later in this series is that **if overload resolution chooses a lifted operator then the operand is implicitly converted to the nullable type**, just as when normal overload resolution chooses a method, the arguments are implicitly converted to the corresponding formal parameter types.
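To see overload resolution selecting a lifted form of a user-defined operator, here is a small sketch; the `Meters` struct is invented for illustration:

```csharp
using System;

public struct Meters
{
    public readonly double Value;
    public Meters(double v) { Value = v; }

    // Only the unlifted operator is declared; the compiler supplies the
    // lifted Meters? form automatically for this non-nullable value type.
    public static Meters operator -(Meters m) => new Meters(-m.Value);
}

public class Program
{
    public static void Main()
    {
        Meters? three = new Meters(3.0);
        Meters? none = null;

        Meters? negated = -three;    // lifted operator chosen; the operand
                                     // converts implicitly to Meters?
        Meters? stillNull = -none;

        Console.WriteLine(negated.Value.Value);  // -3
        Console.WriteLine(stillNull.HasValue);   // False
    }
}
```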

Returning now to the subject at hand: how does the C# compiler generate code for a lifted operator? When you say:

```csharp
int? y = ~x;
```

what happens? Let’s suppose that `x` is a legal expression of type `int?`, just to keep it easy. Overload resolution determines that the lifted `~` operator that takes an `int?` and produces an `int?` is the unique best applicable operator. The expression is already of the correct type. Now, you might naively think that the compiler would pretend that you’d typed:

```csharp
int? y = x.HasValue ? ~x.Value : null;
```

but of course that code is wrong in two ways. First, it doesn’t compile, because the type of the conditional operator expression cannot be determined.[3. Astonishingly, I've never written a blog article about this specific aspect of the conditional operator, though it has certainly come up on StackOverflow a lot. This is probably the blog article that came the closest to describing this common problem.] And second, what if the expression `x` has a side effect? We would not want to generate `int? y = ~M(++i);` as:

```csharp
int? y = M(++i).HasValue ? ~M(++i).Value : null;
```

because then the variable gets incremented twice and the method gets called twice if the result of the first call is not null. And of course the value returned the second time might be different! We can fix these two problems easily enough:

```csharp
int? y;
int? temp = x;
y = temp.HasValue ? new int?(~temp.Value) : new int?();
```

And now we’re good.

At this point the C# compiler can say “but wait a moment! If we are on the ‘consequence’ branch of the conditional operator then we know for sure that `temp.HasValue` is true.” Therefore the compiler can generate the more optimal code:

```csharp
int? y;
int? temp = x;
y = temp.HasValue ? new int?(~temp.GetValueOrDefault()) : new int?();
```

That is in fact what both the “original recipe” and the “extra crispy Roslyn” compilers do. The savings is tiny, but it is real, and these savings add up as the expressions get more and more complicated, as we’ll see.

**Next time on FAIC:** Is that the only optimization a C# compiler can perform when generating code for lifted arithmetic? Of course not! In the next few episodes we’ll look at some ways the compiler can be more clever, and compare the Roslyn compiler’s heuristics to the previous compiler’s heuristics. Happy New Year all, and we’ll see you in 2013 for more fabulous adventures.

# A method group of one

I’m implementing the semantic analysis of dynamic expressions in Roslyn this week, so I’m fielding a lot of questions within the team on the design of the dynamic feature of C# 4. A question I get fairly frequently in this space is as follows:

```csharp
public class Alpha
{
    public int Foo(string x) { ... }
}
...
dynamic d = whatever;
Alpha alpha = MakeAlpha();
var result = alpha.Foo(d);
```

How is this analyzed? More specifically, what’s the type of the local variable `result`?

If the receiver (that is, `alpha`) of the call were of type `dynamic` then there would be little we could do at compile time. We’d analyze the compile-time types of the arguments and emit a dynamic call site that caused the semantic analysis to be performed at runtime, using the runtime type of the dynamic expression. But that’s not the case here. We know at compile time what the type of the receiver is. One of the design principles of the C# dynamic feature is that if a type is known at compile time, then at runtime the type analysis honours it. In other words, we use the runtime types only of the things that were actually dynamic; for everything else we use the compile-time types. If `MakeAlpha()` returns a class derived from `Alpha`, and that derived class has more overloads of `Foo`, we don’t care.

Because we know that we’re going to be doing overload resolution on a method called `Foo` on an instance of type `Alpha`, we can do a “sanity check” at compile time to determine whether we know for sure that the call is going to fail at runtime. So we do overload resolution, but instead of the full overload resolution algorithm (eliminate inapplicable candidates, determine the unique best applicable candidate, perform final validation of that candidate), we do a partial overload resolution algorithm. We get as far as eliminating the inapplicable candidates, and if that leaves one or more candidates then the call is bound dynamically. If it leaves zero candidates then we report an error at compile time, because we know that nothing is going to work at runtime.

Now, a seemingly reasonable question to ask at this point is: overload resolution in this case could determine that there is exactly one applicable candidate in the method group, and therefore we can determine statically that the type of `result` is `int`, so why do we instead say that the type of `result` is `dynamic`?

That appears to be a reasonable question, but think about it a bit more. If you and I and the compiler know that overload resolution is going to choose a particular method then *why are we making a dynamic call in the first place?* Why haven’t we cast `d` to `string`? This situation is rare, unlikely, and has an easy workaround: insert casts appropriately (either cast the call expression to `int` or the argument to `string`). Situations that are rare, unlikely and easily worked around are poor candidates for compiler optimizations. You asked for a dynamic call, so you’re going to get a dynamic call.
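Concretely, either of the suggested casts looks like this; the `Alpha` body here is invented to make the excerpt runnable:

```csharp
using System;

public class Alpha
{
    public int Foo(string x) => x.Length;
}

public class Program
{
    public static void Main()
    {
        dynamic d = "hello";
        Alpha alpha = new Alpha();

        // A dynamic call: 'result' is statically typed as dynamic, even
        // though only one Foo exists at compile time.
        var result = alpha.Foo(d);

        // Either cast restores full static typing:
        int viaArgCast = alpha.Foo((string)d);  // cast the argument...
        int viaCallCast = (int)alpha.Foo(d);    // ...or the call expression

        Console.WriteLine(result);       // 5
        Console.WriteLine(viaArgCast);   // 5
        Console.WriteLine(viaCallCast);  // 5
    }
}
```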

That’s reason enough to not do the proposed feature, but let’s think about it a bit more deeply by exploring a variation on this scenario that I glossed over above. Eta Corporation produces:

```csharp
public class Eta {}
```

and Zeta Corporation extends this code:

```csharp
public class Zeta : Eta
{
    public int Foo(string x) { ... }
}
...
dynamic d = whatever;
Zeta zeta = new Zeta();
var result = zeta.Foo(d);
```

Suppose we say that the type of `result` is `int` because the method group has only one member. Now suppose that in the next version, Eta Corporation supplies a new method:

```csharp
public class Eta
{
    public string Foo(double x) { ... }
}
```

Zeta Corporation recompiles their code, and hey presto, suddenly `result` is of type `dynamic`! Why should Eta Corporation’s change **to the base class** cause the semantic analysis of code that uses a **derived** class to change? This seems unexpected. C# has been carefully designed to avoid these sorts of “brittle base class” failures; see my other articles on that subject for examples of how we do that.

We can make a bad situation even worse. Suppose Eta’s change is instead:

```csharp
public class Eta
{
    protected string Foo(double x) { ... }
}
```

Now what happens? Should we say that the type of `result` is `int` when the code appears outside of class `Zeta`, because overload resolution produces a single applicable candidate, but `dynamic` when it appears inside, because overload resolution produces two such candidates? That would be quite bizarre indeed.

The proposal is simply too much cleverness in pursuit of too little value. We’ve been asked to perform a dynamic binding, and so we’re going to perform a dynamic binding; the result should in turn be of type `dynamic`. The benefits of being able to statically deduce the types of dynamic expressions do not pay for the costs, so we don’t attempt to do so. **If you want static analysis then don’t turn it off in the first place.**

**Next time on FAIC:** The dynamic taint of method type inference.

# How do we ensure that method type inference terminates?

Here’s a question I got from a coworker recently:

> It is obviously important that the C# compiler not go into infinite loops. How do we ensure that the method type inference algorithm terminates?

The answer is quite straightforward, actually, but if you are not familiar with method type inference then this article is going to make no sense. You might want to watch this video if you need a refresher.

Method type inference since C# 3.0 basically works like this: we create a set of **bounds** on each method type parameter. We then “fix” each type parameter to a member of its bounds set. That might enable new bounds to be computed, so this algorithm is a loop. Once every type parameter is fixed, method type inference has succeeded. If any type parameter cannot be fixed for some reason then type inference fails.

We ensure termination like this: **if we manage to make it through the body of the loop without fixing at least one type parameter then type inference fails.** Therefore the type inference loop can run at most n times for a method with n type parameters. If we made it through the loop n times then type inference must have fixed a type parameter on every iteration, and type inference has succeeded. If we failed before the nth time through the loop then obviously type inference did not run forever.

That’s a bit highfalutin; let me flesh it out a bit. A “bound” is nothing more than a type, and a bound can be “upper”, “lower” or “exact”. For example, suppose we have a type parameter `T` with three bounds: a lower bound of `Giraffe`, an exact bound of `Mammal`, and an upper bound of `Animal`. Let’s say that `Animal` is a “larger” type than `Mammal` (because all mammals are animals but not all animals are mammals, thus `Animal` must be the larger type), and `Giraffe` is a “smaller” type than `Mammal`. Given this set of bounds we know that `T` must be inferred to be, first, either `Giraffe` or a type larger than `Giraffe`, because `Giraffe` is a lower bound; you can’t infer a type smaller than `Giraffe`. Second, we know that `T` must be `Mammal`, exactly. And third, we know that `T` must be either `Animal` or a type smaller than `Animal`, because `Animal` is an upper bound; we cannot infer a type larger than `Animal`. The C# compiler deduces that `Mammal` is the only type in the set that meets all three requirements, and so `T` would be fixed to `Mammal`. If there are multiple types in the set that meet all the requirements (which of course cannot happen if there are any exact bounds!) then we pick the largest such type.[1. Note that this algorithm is consistent with other type inference features in C# in two ways. First, when asked to infer a best type from a set of types, we always choose a type from the set. We never say "well, we have `Dog` and `Cat` in the set, so let's infer `Mammal`" unless `Mammal` is itself in the set. Second, when faced with multiple possible "best" types, we pick the largest. There is an argument to be made for picking the smallest, but picking the largest seems to match more people's intuitions of what the right choice is.]
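Here is a little sketch of that fixing step over the toy `Giraffe`/`Mammal`/`Animal` line of types. The linear “size” ordering is an invention for illustration; real conversion relationships form a graph, not a line:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class Fixing
{
    // Invented linear hierarchy: larger index = larger type.
    static readonly List<string> Sizes =
        new List<string> { "Giraffe", "Mammal", "Animal" };
    static int Size(string t) => Sizes.IndexOf(t);

    // A candidate must be at least every lower bound, exactly every exact
    // bound, and at most every upper bound; among survivors, the largest wins.
    public static string Fix(
        IEnumerable<string> lower, IEnumerable<string> exact, IEnumerable<string> upper)
    {
        return Sizes
            .Where(c => lower.All(b => Size(c) >= Size(b))
                     && exact.All(b => c == b)
                     && upper.All(b => Size(c) <= Size(b)))
            .OrderByDescending(Size)
            .FirstOrDefault();
    }

    public static void Main()
    {
        // Lower bound Giraffe, exact bound Mammal, upper bound Animal:
        Console.WriteLine(Fix(new[] { "Giraffe" }, new[] { "Mammal" }, new[] { "Animal" }));
        // Mammal

        // With no exact bound, everything from Giraffe to Animal survives,
        // and we pick the largest:
        Console.WriteLine(Fix(new[] { "Giraffe" }, new string[0], new[] { "Animal" }));
        // Animal
    }
}
```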

The interesting part of method type inference is how we deal with lambdas. Suppose we have a method `Select<A, R>(I<A>, Func<A, R>)` where the second argument is `c=>c.Name`. We say that `A` is an “input” type parameter and `R` is an “output” type parameter. (It is of course possible for a type parameter to be both an input and an output type parameter!) Furthermore, we say that `R` “depends on” `A`, because the type of `A` could possibly determine the type of `R`. (Of course the “depends” relationship can be cyclic.)

The type inference algorithm, at a high level, goes like this:

- Add bounds to type parameters based on all non-lambda arguments, and all lambda arguments where the delegate type has no type parameters in its inputs.
- Loop:
  - Is every type parameter fixed? Type inference has succeeded; terminate the algorithm.
  - Is there any lambda argument converted to a delegate type where the inputs of the delegate type are all known and the output type involves an unfixed type parameter? Deduce the return types of all such lambdas and make inferences that add bounds to the corresponding delegates’ output types.
  - Is there any unfixed, bounded type parameter that does not appear in an output type of a delegate that has unfixed input types? Fix all such type parameters and go back to the top of the loop.
  - Is there any unfixed, bounded type parameter such that an unfixed type parameter depends on it, directly or indirectly? Fix all such type parameters and go back to the top of the loop.
  - If we make it here then we have failed to make progress: we have just as many fixed type parameters as we started with. Type inference fails; terminate the algorithm.

So, for example, if we had `Select(customers, c=>c.Name);` where `customers` implements `I<Customer>`, then we start by inferring that `A` has a lower bound of `Customer`.[2. Assuming that the type `I<T>` is covariant in `T`. If it were contravariant then we would deduce an upper bound, and if it were invariant then we would deduce an exact bound. See my series on variance if that is not clear.] We have no lambda arguments that correspond to formal parameters where the delegate type has no type parameters in its inputs, so we enter the loop.

*Is every type parameter fixed?* No.

*Is there any lambda argument converted to a delegate type where the inputs are known and the output involves an unfixed type parameter?* No. There is a lambda argument converted to a delegate type, and the output involves unfixed type parameter `R`, but the input type is `A` and `A` is not fixed. So we have no inferences to make.

*Is there an unfixed type parameter that has bounds and does not appear in an output type of a delegate that has unfixed input types?* Yes. `A` has bounds and does not appear as an output type, period.

Therefore we fix `A`. It has only one bound, `Customer`, so we fix it to `Customer`. We have made progress, so we go back to the top of the loop.

*Is every type parameter fixed?* No.

*Is there any lambda argument converted to a delegate type where the inputs are known and the output involves an unfixed type parameter?* Yes! Now we make an inference. `A` is fixed to `Customer`, so we add the type of `Customer.Name`, say, `string`, as a lower bound to `R`. Now we must fix something.

*Is there an unfixed type parameter that has bounds and does not appear in an output type of a delegate that has unfixed input types?* Yes. `R` is unfixed, it has bounds, and it appears as an output type of a delegate that has fixed input types, so it is a candidate for fixing. We fix `R` to its only bound, `string`, and start the loop again.

*Is every type parameter fixed?* Yes. We’re done.

This technique of preventing infinite loops by requiring that each loop iteration make progress is quite useful, and clearly in this case it guarantees that the algorithm executes the loop no more times than there are type parameters to fix.
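The progress-or-fail shape of that loop can be sketched with a toy model. The “has at least one bound means fixable” rule below is a gross simplification of the real rules; the point is only the termination structure:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public static class Inference
{
    // Toy model: each type parameter maps to its list of bounds, and a
    // parameter is "fixable" when it has at least one bound. Every pass
    // through the loop must fix at least one parameter, or fail.
    public static bool TryInfer(Dictionary<string, List<string>> bounds)
    {
        var unfixed = new HashSet<string>(bounds.Keys);
        while (unfixed.Count > 0)
        {
            var fixable = unfixed.Where(tp => bounds[tp].Count > 0).ToList();
            if (fixable.Count == 0)
                return false;           // no progress possible: inference fails
            foreach (var tp in fixable)
                unfixed.Remove(tp);     // "fix" it; at least one per iteration
        }
        return true;                    // every parameter fixed: success
    }

    public static void Main()
    {
        var ok = new Dictionary<string, List<string>>
        {
            ["A"] = new List<string> { "Customer" },
            ["R"] = new List<string> { "string" },
        };
        var stuck = new Dictionary<string, List<string>>
        {
            ["T"] = new List<string>(), // no bounds: never fixable
        };
        Console.WriteLine(TryInfer(ok));    // True
        Console.WriteLine(TryInfer(stuck)); // False
    }
}
```

Because the unfixed set shrinks by at least one element per iteration or the loop exits with failure, the loop body runs at most as many times as there are type parameters.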

You might wonder if it is therefore the case that method type inference is O(n) in the number of type parameters. It turns out that it is not, for several reasons. First, as a practical matter it only makes sense to determine the asymptotic order of an algorithm if the size of the problem is likely to become large. I’ve never seen a method with more than five type parameters in the wild (aside from the tuple constructors, which are straightforward), and even that is pretty unusual. Most generic methods have one or two type parameters. Second, doing the analysis of the lambdas is the expensive part, and it only really makes sense to analyze the behaviour of the most expensive part. We already know that analyzing lambdas is, in the worst case, an NP-hard problem, so whether or not method type inference is O(some polynomial) is possibly not that relevant. Third, you’ll notice that in my sketch of the algorithm we have to answer questions like “is there any unfixed type parameter that has an unfixed type parameter that depends on it?” This requires solving a graph-traversal problem, whose asymptotic cost we have not analyzed! I won’t take you through the boring analysis, but suffice it to say that there could be O(n²) dependency relationships that each cost O(n) to analyze, and we could go through the loop n times, for an extremely unlikely worst case of O(n⁴). The implementation of this algorithm is actually O(n²) in the common case; because n is likely to be small, as I said, we have not put the effort into more sophisticated algorithms that can solve these graph problems even faster in the asymptotic case.

**Next time on FAIC:** There are functions on integers that grow faster than any function you can write a program to produce. Demonstrating this surprising fact involves beavers. We’ll see why!

# What is late binding?

“Late binding” is one of those computer-sciency terms that, like “strong typing” or “duck typing”, means different things to different people. I thought I might describe what the term means to me.

First off, what is “binding”? We can’t understand what it means to bind late if we don’t know what it is to bind at all. Continue reading

# Optional argument corner cases, part four

Last time we discussed how some people think that an optional argument generates a bunch of overloads that call each other. People also sometimes incorrectly think that

```csharp
void M(string format, bool b = false)
{
    Console.WriteLine(format, b);
}
```

is actually syntactic sugar for something morally like:

```csharp
void M(string format, bool? b)
{
    bool realB = b ?? false;
    Console.WriteLine(format, realB);
}
```

# Optional argument corner cases, part three

A lot of people seem to think that this:

```csharp
void M(string x, bool y = false) { ... whatever ... }
```

is actually syntactic sugar for the way you used to have to write this in C#, which is:

```csharp
void M(string x) { M(x, false); }
void M(string x, bool y) { ... whatever ... }
```

But it is not. Continue reading
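What actually happens is that the compiler fills in the default *at the call site*. A minimal sketch (the `M` body here is invented for illustration):

```csharp
using System;

public class Program
{
    public static string M(string x, bool y = false) => $"{x}: {y}";

    public static void Main()
    {
        // The compiler compiles M("hello") exactly as if the caller had
        // written M("hello", false). No hidden overload of M(string)
        // exists, and no nullable parameter is involved.
        Console.WriteLine(M("hello"));       // hello: False
        Console.WriteLine(M("hello", true)); // hello: True
    }
}
```

Because the default is burned into the calling code, changing the default and recompiling only the callee does not change already-compiled callers; that is precisely how optional arguments differ from the overload-based rewrite shown above.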