What is up with transparent identifiers? Part two

This will be my last post before I head off for my annual vacation in Canada; see you again in September for more Fabulous Adventures in Coding!

Last time on FAIC I suggested a rule for translating nested “from” query expressions into a much simpler form than the C# specification requires. Why does the C# specification not use my simplified form?

In fact what I showed last time is pretty close to what the LINQ translation rules for SelectMany queries looked like shortly before shipping C# 3.0. The problem with it becomes apparent when you consider the following:


What is “duck typing”?

Seriously, what is it? It’s not a rhetorical question. I realized this morning that I am totally confused about this.

First off, let me say what I thought “duck typing” was. I thought it was a form of typing.

So what is “typing”? We’ve discussed this before on this blog. (And you might want to check out this post on late binding and this post on strong typing.) To sum up:


ATBG: method type inference with multiple interfaces

Today on the Coverity Development Testing Blog’s continuing series Ask The Bug Guys, I take a question from an “Eric L”, who is confused about one of the subtle rules of method type inference despite having written the rule himself. My colleague Jon takes a question from a beginner C programmer about memory allocation.

As always, if you have questions about a bug you’ve found in a C, C++, C# or Java program that you think would make a good episode of ATBG, please send your question along with a small reproducer of the problem to TheBugGuys@Coverity.com. We cannot promise to answer every question or solve every problem, but we’ll take a selection of the best questions that we can answer and address them on the dev testing blog every couple of weeks.

A contravariance conundrum

Suppose we have my usual hierarchy of types, Animal, Giraffe, etc, with the obvious type relationships. An IEqualityComparer<T> is contravariant in its type parameter; if we have a device which can compare two Animals for equality then it can compare two Giraffes for equality as well. So why does this code fail to compile?

IEqualityComparer<Animal> animalComparer = whatever;
IEnumerable<Giraffe> giraffes = whatever;
IEnumerable<Giraffe> distinct = giraffes.Distinct(animalComparer);

This illustrates a subtle and slightly unfortunate design choice in the method type inference algorithm, which of course was designed long before covariance and contravariance were added to the language.
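Here is a compilable sketch of the situation and a workaround; the class bodies and the comparer implementation are my own invention for illustration. Inference ends up choosing the largest candidate type, Animal, so the call itself is fine but it returns a sequence of Animal rather than Giraffe:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Animal { }
public class Giraffe : Animal { }

// An illustrative comparer (invented for this sketch): compares by reference.
public class AnimalComparer : IEqualityComparer<Animal>
{
    public bool Equals(Animal x, Animal y) => ReferenceEquals(x, y);
    public int GetHashCode(Animal a) => 0; // forces Equals to be consulted
}

public class Program
{
    public static void Main()
    {
        IEqualityComparer<Animal> animalComparer = new AnimalComparer();
        var g = new Giraffe();
        IEnumerable<Giraffe> giraffes = new[] { g, g, new Giraffe() };

        // Inference has a lower bound of Giraffe (from the covariant source)
        // and an upper bound of Animal (from the contravariant comparer); it
        // picks the largest candidate, so this call is Distinct<Animal> and
        // returns IEnumerable<Animal> -- which is why assigning the result
        // to IEnumerable<Giraffe> in the original snippet fails to compile.
        IEnumerable<Animal> distinct = giraffes.Distinct(animalComparer);

        // Workaround: spell out the type argument; the comparer converts
        // contravariantly from IEqualityComparer<Animal>.
        IEnumerable<Giraffe> distinctGiraffes =
            giraffes.Distinct<Giraffe>(animalComparer);

        Console.WriteLine(distinctGiraffes.Count()); // 2
    }
}
```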


Dynamic contagion, part two

This is part two of a two-part series on dynamic contagion. Part one is here.

Last time I discussed how the dynamic type tends to spread through a program like a virus: if an expression of dynamic type “touches” another expression then that other expression often also becomes of dynamic type. Today I want to describe one of the least well understood aspects of method type inference, which also uses a contagion model when dynamic gets involved.

Long-time readers know that method type inference is one of my favourite parts of the C# language; for new readers who might not be familiar with the feature, let me briefly describe it. The idea is that when you have a method, say:

Select<A, R>(IEnumerable<A> items, Func<A, R> projection)

and a call to the method, say:

Select(customers, c=>c.Name)

then we infer that you meant to call:

Select<Customer, string>(customers, c=>c.Name)

rather than making you spell it out. In that case, we would first infer that the list of customers is an IEnumerable<Customer> and therefore the type argument corresponding to A is Customer. From that we would infer that the lambda parameter c is of type Customer, and therefore the result of the lambda is string, and therefore the type argument corresponding to R is string. This algorithm is already complicated, but when dynamic gets involved, it gets downright weird.
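The inference steps just described can be observed in ordinary code; here is a minimal compilable example (Customer is a stand-in class I define for the sketch):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Customer
{
    public string Name { get; set; }
}

public class Program
{
    public static void Main()
    {
        IEnumerable<Customer> customers = new[] { new Customer { Name = "Alice" } };

        // No type arguments spelled out: the compiler infers A = Customer
        // from the sequence, then infers R = string from the lambda body,
        // calling Select<Customer, string> exactly as if we had written it.
        IEnumerable<string> names = customers.Select(c => c.Name);

        Console.WriteLine(names.First()); // Alice
    }
}
```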

The problem that the language designers faced when deciding how method type inference works with dynamic is exacerbated by our basic design goal for dynamic, that I mentioned two weeks ago: the runtime analysis of a dynamic expression honours all the information that we deduced at compile time. We only use the deduced-at-runtime types for the parts of the expression that were actually dynamic; the parts that were statically typed at compile time remain statically typed at runtime, not dynamically typed. Above we inferred R after we knew A, but what if customers had been of type dynamic? We now have a problem: depending on the runtime type of customers, type inference might succeed dynamically even though it seems like it must fail statically. But if type inference fails statically then the method is not a candidate, and, as we discussed two weeks ago, if the candidate set of a dynamically-dispatched method group is empty then overload resolution fails at compile-time, not at runtime. So it seems that type inference must succeed statically!

What a mess. How do we get out of this predicament? The spec is surprisingly short on details; it says only:

Any type argument that does not depend directly or indirectly on an argument of type dynamic is inferred using [the usual static analysis rules]. The remaining type arguments are unknown. [...] Applicability is checked according to [the usual static analysis rules] ignoring parameters whose types are unknown.[1. That last clause is a bit unclear in two ways. First, it really should say "whose types are in any way unknown". L<unknown> is considered to be an unknown type. Second, along with skipping applicability checking we also skip constraint satisfaction checking. That is, we assume that the runtime construction of L<unknown> will provide a type argument that satisfies all the necessary generic type constraints.]

So what we have here is essentially another type that spreads via a contagion model, the “unknown” type. Just as “possibly infected” is the transitive closure of the exposure relation in simplistic epidemiology, “unknown” is the transitive closure of the “depends on” relation in method type inference.

For example, if we have:

void M<T, U>(T t, L<U> items)

with a call

M(123, dyn);

Then type inference infers that T is int from the first argument. Because the second argument is of dynamic type, and the formal parameter type involves type parameter U, we “taint” U with the “unknown type”.

When a tainted type parameter is “fixed” to its final type argument, we ignore all other bounds that we have computed so far, even if some of the bounds are contradictory, and infer it to be “unknown”. So in this case, type inference would succeed and we would add M<int, unknown> to the candidate set. As noted above, we skip applicability checking for arguments that correspond to parameters whose types are in any way tainted.
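Here is a small runnable sketch of that situation; L and M are the hypothetical names from the example above, and the method body is mine for illustration. The call compiles even though U cannot be determined statically, and at runtime the binder re-runs inference against the actual argument type:

```csharp
using System;

public class L<T> { }

public class Program
{
    // Reports the type arguments the runtime binder actually settled on.
    public static string M<T, U>(T t, L<U> items) =>
        $"T={typeof(T).Name}, U={typeof(U).Name}";

    public static void Main()
    {
        dynamic dyn = new L<string>();

        // Statically, T is inferred to int and U is tainted "unknown", so
        // M<int, unknown> enters the candidate set and the call compiles.
        // At runtime, inference runs again with the actual type L<string>
        // and infers U = string.
        Console.WriteLine(M(123, dyn)); // T=Int32, U=String
    }
}
```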

But where does the transitive closure of the dependency relationship come into it? In the C# 4 and 5 compilers we did not handle this particularly well, but in Roslyn we now actually cause the taint to spread. Suppose we have:

void M<T, U, V>(T t, L<U> items, Func<T, U, V> func)

and a call

M(123, dyn, (t, u)=>u.Whatever(t));

We infer T to be int and U to be unknown. We then say that V depends on T and U, and so infer V to be unknown as well. Therefore type inference succeeds with an inference of M<int, unknown, unknown>.

The alert reader will at this point be protesting that no matter what happens with method type inference, this is going to turn into a dynamic call, and that lambdas are not legal in dynamic calls in the first place. However, we want to get as much high-quality analysis done as possible so that IntelliSense and other code analysis works correctly even in badly broken code. It is better to allow U to infect V with the “unknown taint” and have type inference succeed, as the specification indicates, than to bail out early and have type inference fail. And besides, if by some miracle we do in the future allow lambdas to be in dynamic calls, we’ll already have a sensible implementation of method type inference.


How do we ensure that method type inference terminates?

Here’s a question I got from a coworker recently:

It is obviously important that the C# compiler not go into infinite loops. How do we ensure that the method type inference algorithm terminates?

The answer is quite straightforward actually, but if you are not familiar with method type inference then this article is going to make no sense. You might want to watch this video if you need a refresher.

Method type inference since C# 3.0 basically works like this: we create a set of bounds on each method type parameter. We then “fix” each type parameter to a member of its bounds set. That might enable new bounds to be computed, so this algorithm is a loop. Once every type parameter is fixed, method type inference has succeeded. If any type parameter cannot be fixed for some reason then type inference fails.

We ensure termination like this: if we manage to make it through the body of the loop without fixing at least one type parameter then type inference fails. Therefore, the type inference loop can run at most n times if the method has n type parameters. If we make it to n times through the loop then type inference must have fixed a type parameter on every iteration, and type inference has succeeded. If we failed before n times through the loop then obviously type inference did not run forever.

That’s a bit highfalutin; let me flesh that out a bit. A “bound” is nothing more than a type, and a bound can be “upper”, “lower” or “exact”. For example, suppose we have a type parameter T with three bounds: a lower bound of Giraffe, an exact bound of Mammal, and an upper bound of Animal. Let’s say that Animal is a “larger” type than Mammal (because all mammals are animals but not all animals are mammals, thus Animal must be the larger type), and Giraffe is a “smaller” type than Mammal. Given this set of bounds we know, first, that T must be either Giraffe or a type larger than Giraffe, because Giraffe is a “lower bound”; you can’t infer a type smaller than Giraffe. Second, we know that T must be Mammal, exactly. And third, we know that T must be either Animal or a type smaller than Animal, because Animal is an upper bound. We cannot infer a type larger than Animal. The C# compiler deduces that Mammal is the only type in the set that meets all three requirements, and so T would be fixed to Mammal. If there are multiple types in the set that meet all the requirements (which of course cannot happen if there are any exact bounds!) then we pick the largest such type.[1. Note that this algorithm is consistent with other type inference features in C# in two ways. First, when asked to infer a best type from a set of types, we always choose a type from the set. We never say "well we have Dog and Cat in the set so let's infer Mammal" unless Mammal is itself in the set. Second, when faced with multiple possible "best" types, we pick the largest. There is an argument to be made for picking the smallest, but picking the largest seems to match more people's intuitions of what the right choice is.]
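As an illustration only (this is a toy model using reflection, not the compiler’s actual algorithm), we can mimic the bound checking: a candidate must be at least as large as every lower bound, identical to every exact bound, and no larger than any upper bound, with ties going to the largest candidate:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Animal { }
public class Mammal : Animal { }
public class Giraffe : Mammal { }

public class Program
{
    // "X is at least as large as Y": every Y is an X.
    public static bool AtLeastAsLargeAs(Type x, Type y) => x.IsAssignableFrom(y);

    // Toy version of "fixing": find the candidate in the bounds set that
    // satisfies every bound, preferring the largest such candidate.
    public static Type Fix(
        IEnumerable<Type> lower, IEnumerable<Type> exact, IEnumerable<Type> upper)
    {
        var candidates = lower.Concat(exact).Concat(upper).Distinct().ToList();
        var satisfying = candidates.Where(c =>
            lower.All(l => AtLeastAsLargeAs(c, l)) &&   // c is >= each lower bound
            exact.All(e => e == c) &&                   // c is == each exact bound
            upper.All(u => AtLeastAsLargeAs(u, c)))     // c is <= each upper bound
            .ToList();
        // Pick the largest satisfying candidate, as the real algorithm does.
        return satisfying.FirstOrDefault(c => satisfying.All(s => AtLeastAsLargeAs(c, s)));
    }

    public static void Main()
    {
        Type result = Fix(
            lower: new[] { typeof(Giraffe) },
            exact: new[] { typeof(Mammal) },
            upper: new[] { typeof(Animal) });
        Console.WriteLine(result.Name); // Mammal
    }
}
```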

The interesting part of method type inference is how we deal with lambdas. Suppose we have a method Select<A, R>(I<A>, Func<A, R>) where the second argument is c=>c.Name. We say that A is an “input” type parameter and R is an “output” type parameter. (It is of course possible for a type parameter to be both an input and output type parameter!) Furthermore, we say that R “depends on” A, because the type of A could possibly determine the type of R. (Of course the “depends” relationship can be cyclic.)

The type inference algorithm, at a high level, goes like this:

  • Add bounds to type parameters based on all non-lambda arguments, and all lambda arguments where the delegate type has no type parameters in its inputs.
  • Loop
    • Is every type parameter fixed?
      • Type inference has succeeded. Terminate the algorithm.
    • Is there any lambda argument converted to a delegate type where the inputs of the delegate type are all known and the output type involves an unfixed type parameter?
      • Deduce the return type of all such lambdas and make inferences that add bounds to the corresponding delegate’s output types.
    • Is there any unfixed, bounded type parameter that does not appear in an output type of a delegate that has unfixed input types?
      • Fix all such type parameters and go back to the top of the loop.
    • Is there any unfixed, bounded type parameter such that an unfixed type parameter depends on it, directly or indirectly?
      • Fix all such type parameters and go back to the top of the loop.
  • If we make it here then we failed to make progress; we have just as many fixed type parameters as we started with. Type inference fails. Terminate the algorithm.
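The termination discipline in that loop can be sketched as follows; this is a toy skeleton with invented names, not the compiler’s code. The key line is the progress check: if an iteration fixes nothing, we fail immediately rather than looping forever, so the loop body runs at most n times for n type parameters:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Toy model: each type parameter just records whether it has been fixed.
public class TypeParam
{
    public string Name;
    public bool Fixed;
}

public class Program
{
    // The caller supplies a strategy that fixes whatever parameters can be
    // fixed this round and reports how many it fixed.
    public static bool InferAll(
        List<TypeParam> parameters, Func<List<TypeParam>, int> fixSomeParameters)
    {
        while (parameters.Any(p => !p.Fixed))
        {
            int fixedThisRound = fixSomeParameters(parameters);
            if (fixedThisRound == 0)
                return false; // no progress: fail rather than loop forever
        }
        return true; // every parameter fixed: inference succeeded
    }

    public static void Main()
    {
        var ps = new List<TypeParam>
        {
            new TypeParam { Name = "A" },
            new TypeParam { Name = "R" },
        };
        // A trivial strategy that fixes one unfixed parameter per round.
        bool ok = InferAll(ps, list =>
        {
            var p = list.FirstOrDefault(q => !q.Fixed);
            if (p == null) return 0;
            p.Fixed = true;
            return 1;
        });
        Console.WriteLine(ok); // True
    }
}
```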

So, for example, if we had Select(customers, c=>c.Name); where customers implements I<Customer> then we start by inferring that A has a lower bound of Customer.[2. Assuming that the type I<T> is covariant in T. If it were contravariant then we would deduce an upper bound, and if it were invariant then we would deduce an exact bound. See my series on variance if that is not clear.] We have no lambda arguments that correspond to formal parameters where the delegate type has no type parameters in its inputs, so we enter the loop.

Is every type parameter fixed? No.

Is there any lambda argument converted to a delegate type where the inputs are known and the output involves an unfixed type parameter? No. There is a lambda argument converted to a delegate type, and the output involves unfixed type parameter R, but the input type is A and A is not fixed. So we have no inferences to make.

Is there an unfixed type parameter that has bounds and does not appear in an output type of a delegate that has unfixed input types? Yes. A has bounds and does not appear as an output type, period.

Therefore we fix A. It has only one bound, Customer, so we fix it to Customer. We have made progress, so we go back to the top of the loop.

Is every type parameter fixed? No.

Is there any lambda argument converted to a delegate type where the inputs are known and the output involves an unfixed type parameter? Yes! Now we make an inference. A is fixed to Customer, so we add the type of Customer.Name, say, string, as a lower bound to R. Now we must fix something.

Is there an unfixed type parameter that has bounds and does not appear in an output type of a delegate that has unfixed input types? Yes. R is unfixed, it has bounds, and it appears as an output type of a delegate that has fixed input types, so it is a candidate for fixing. We fix R to its only bound, string, and start the loop again.

Is every type parameter fixed? Yes. We’re done.

This technique of preventing infinite loops by requiring that each loop iteration make progress is quite useful, and clearly in this case it guarantees that the algorithm executes the loop no more times than there are type parameters to fix.

You might wonder if it is therefore the case that method type inference is O(n) in the number of type parameters. It turns out that it is not, for several reasons. First, as a practical matter it only makes sense to determine the asymptotic order of an algorithm if the size of the problem is likely to become large. I’ve never seen a method with more than five type parameters in the wild (aside from the tuple constructors, which are straightforward), and even that is pretty unusual. Most generic methods have one or two type parameters. Second, doing the analysis of the lambdas is the expensive part, and it only really makes sense to analyze the behaviour of the most expensive part. We already know that analyzing lambdas is, worst case, an NP-hard problem, so whether or not method type inference is O(some polynomial) is possibly not that relevant. Third, you’ll notice that in my sketch of the algorithm we have to answer questions like “is there any unfixed type parameter that has an unfixed type parameter that depends on it?” This requires solving a graph-traversal problem, whose asymptotic cost we have not analyzed! I won’t take you through the boring analysis, but suffice to say there could be O(n²) dependency relationships that each cost O(n) to analyze, and we could go through the loop n times, for an extremely unlikely worst case of O(n⁴). The implementation of this algorithm is actually O(n²) in the common case; because n is likely to be small, as I said, we have not put the effort into more sophisticated algorithms that can solve these graph problems even faster in the asymptotic case.

Next time on FAIC: There are functions on integers that grow faster than any function you can write a program to produce. Demonstrating this surprising fact involves beavers. We’ll see why!

Foolish consistency is foolish

Once again today’s posting is presented as a dialogue, as is my wont.

Why is var sometimes required on an implicitly-typed local variable and sometimes illegal on an implicitly-typed local variable?

That’s a good question but can you make it more precise? Start by listing the situations in which an implicitly-typed local variable either must or must not use var.

Sure. An implicitly-typed local variable must be declared with var in the following statements:

var x1 = whatever; 
for(var x2 = whatever; ; ) {} 
using(var x3 = whatever) {} 
foreach(var x4 in whatever) {}

And an implicitly-typed local variable must not be declared with var in the following expressions:

from c in customers select c.Name
customers.Select(c => c.Name)

In both cases it is not legal to put var before c, though it would be legal to say:

from Customer c in customers select c.Name 
customers.Select((Customer c) => c.Name)

Why is that?

Well, let me delay answering that by criticizing your question further. In the query expression and lambda expression cases, are those in fact implicitly typed locals in the first place?

Hmm, you’re right; technically neither of those cases involves a local variable. In the lambda case, that is a formal parameter. But a formal parameter behaves almost exactly like a local variable, so it seems reasonable to conflate the two in casual conversation. In the query expression, the compiler is going to syntactically transform the range variable into an untyped lambda formal parameter regardless of whether the range variable is typed or not.

Can you expand on that last point a bit?

Sure. When you say from Customer c in customers select c.Name, that is not transformed by the compiler into customers.Select((Customer c) => c.Name). Rather, it is transformed into customers.Cast<Customer>().Select(c => c.Name).

Correct. Discussing why that is might be better left for another day.

Indeed; the point here is that regardless of whether a type appears in the query expression, the lambda expression in the transformed code is going to have an untyped formal parameter.

So now that we’ve clarified the situation, what is your question?

C# is inconsistent; var is required on an implicitly-typed local variable (regardless of the “container” of the local declaration), but var is illegal on an implicitly-typed lambda parameter (regardless of whether the lambda parameter is “natural” or is the result of a query expression transformation). Why is that?

You keep on asking “why?” questions, which I find difficult to answer because your question is secretly a “why not?” question. That is, the question you really mean to ask is “I have a notion of how the language obviously ought to have been designed; why is it not like that?”  But since I do not know what your notion is, it’s hard to compare its pros and cons to the actual feature that you find inconsistent.

The problem you are raising is one of inconsistency; I agree that you have identified a bona fide inconsistency, and I agree that a foolish, unnecessary inconsistency is bad design. The C# language is designed to be easy to comprehend; foolish inconsistencies work against comprehensibility. But what I don’t understand is how you think that inconsistency ought to have been addressed.

I can see three ways to address that inconsistency. First, make var required on lambdas and query expressions, so that it is consistently required. Second, make var illegal on all locals, so that it is consistently illegal. And third, make it optional everywhere. What is the real “why not?” question?

You’re right; I’ve identified an inconsistency but have not described how I think that inconsistency could be removed. I don’t know what my real “why not?” is so let’s look at all of them; what are the pros and cons of each?

Let’s start by considering the first: require var everywhere. That would then mean that you have to write:

from var c in customers 
join var o in orders...

instead of

from c in customers 
join o in orders...

And you have to write

customers.Select((var c) => c.Name)

instead of

customers.Select(c => c.Name)

This seems clunky. What function does adding var in these locations serve? It does not make the code any more readable; it makes it less readable. We have purchased consistency at a high cost here. The first option seems untenable.

Now consider the second option: make var illegal on all locals. That means that your earlier uses of var on locals would become:

x1 = whatever; 
for(x2 = whatever; ; ) {} 
using(x3 = whatever) {} 
foreach(x4 in whatever) {}

The last case presents no problem; we know that the variable declared in a foreach loop is always a new local variable. But in the other three cases we have just added a new feature to the language; we now have not just implicitly typed locals, we now have implicitly declared locals. Now all you need to do to declare a new local is to assign to an identifier you haven’t used before.

There are plenty of languages with implicitly declared locals, but it seems like a very “un-C#-ish” feature. That’s the sort of thing we see in languages like VB and VBScript, and even in them you have to make sure that Option Explicit is turned off. The price we pay for consistency here is very different, but it is still very high. I don’t think we want to pay that price.

The third option — make var optional everywhere — is just a special case of the second option; again, we end up introducing implicitly declared locals.

Design is the art of compromising amongst various incompatible design goals. In this case, consistency is a goal but it loses to more practical concerns. This would be a foolish consistency.

Is the fact that there is no good way out of this inconsistency an indication that there is a deeper design problem here?

Yes. When I gave three options for removing the inconsistency, you’ll notice that I made certain assumptions about what was and was not allowed to change. If we were willing to make larger changes, or if the original design team had made different decisions in C# 1.0, then we wouldn’t be in this mess in the first place. The deeper design problem here is the fact that a local variable declaration has the form

type identifier ;

This is not a great statement syntax in the first place. Suppose C# 1.0 had instead said that a local variable must be declared like this:

var identifier : type ;

I see where you are going; JScript.NET does use that syntax, and makes the type annotation clause optional. And of course Visual Basic uses the moral equivalent of that syntax, with Dim instead of var and As instead of :.

That’s right. So in typing-required C# 1.0 we would have

var x1 : int = whatever; 
for(var x2 : int = whatever; ; ) {} 
using(var x3 : IDisposable = whatever) {} 
foreach(var x4 : int in whatever) {}

This is somewhat more verbose, yes. But this is very easy to parse, both by the compiler and the reader, and is probably more clear to the novice reader. It is crystal clear that a new local variable is being introduced by a statement. And it is also nicely reminiscent of base class declaration, which is a logically similar annotation.[1. You could have just as easily asked "Why does the constraining type come to the left of the identifier in a local, parameter, field, property, event, method and indexer, and to the right of the identifier in a class, struct, interface and type parameter?" Inconsistencies abound!]

Then in C# 3.0 we could introduce implicitly typed locals by simply making the annotation portion optional, and allowing all of the following statements and expressions:

var x1 = whatever; 
for(var x2 = whatever; ; ) {} 
using(var x3 = whatever) {} 
foreach(var x4 in whatever) {}
from c in customers select c.Name 
from c : Customer in customers select c.Name 
customers.Select(c => c.Name) 
customers.Select((c : Customer) => c.Name)

If this syntax has nicer properties then why didn’t you go with it in C# 1.0?

Because the designers wanted the syntax to be familiar to users of C and C++. So here we have one kind of consistency — consistency of experience for C programmers — leading a decade later to a problem of C# self-consistency.

The moral of the story is: good design requires being either impossibly far-sighted, or accepting that you’re going to have to live with some small inconsistencies as a language evolves over decades.

Next time on FAIC: The best advice I ever got.