Fabulous adventures

Hello world, this is the new home of Fabulous Adventures in Coding. (The previous site is here.) Long-time readers will need no introduction, but if you are new here, please check out this short bio.

Today, November 29th, 2012, is, as I noted in my final post on the MSDN blog, my second-last day at Microsoft. After tomorrow I will be taking the next few weeks off and not thinking about programming languages for once. And after that, I’m starting a new gig in 2013 at Coverity.

Most of you probably have not heard of Coverity, but you have almost certainly used software that was affected by their tools. Coverity makes static analysis tools for software developers; these tools analyze source code written in C, C++, Java and C# and report correctness and security issues before those defects ship to customers. Among their high-profile customers are the Jet Propulsion Lab team that wrote the software for the Curiosity rover now running around on Mars and the software team for the Large Hadron Collider, which recently confirmed the existence of the Higgs boson. They also serve more down-to-earth customers; it’s not all weird science.

As an expert on the design and implementation of static analyzers for C# code (because, after all, that’s what the compiler is!), I found the opportunity to work in downtown Seattle on a small team improving the C# analysis product too good to pass up. And so here I am, continuing to try to improve the tools available for C# programmers.

Though I am no longer an “insider” on the C# design team, I intend to continue to blog about the design and implementation of C#, as well as other fabulous adventures in coding. If this sort of thing interests you, please subscribe to the RSS feed at ericlippert.com/feed, and please follow me on Twitter where I am @ericlippert.

Once I’m back from my short vacation we’ll get right back into it. Thanks for reading, and I look forward to sharing more fabulous adventures with you.


Next time on FAIC: Why are the bracing rules inconsistent in C#?


Why is deriving a public class from an internal class illegal?

In C# it is illegal to declare a class D whose base class B is in any way less accessible than D. I’m occasionally asked why that is. There are a number of reasons; today I’ll start with a very specific scenario and then talk about a general philosophy.

Suppose you and your coworker Alice are developing the code for assembly Foo, which you intend to be fully trusted by its users. Alice writes:

public class B
{
  public void Dangerous() {...}
}

And you write

public class D : B
{
  ... other stuff ...
}

Later, Alice gets a security review from Bob, who points out that method Dangerous could be used as a component of an attack by partially-trusted code, and who further points out that customer scenarios do not actually require B to be used directly by customers in the first place; B is actually only being used as an implementation detail of other classes. So in keeping with the principle of least privilege, Alice changes B to:

internal class B
{
  public void Dangerous() {...}
}

Alice need not change the accessibility of Dangerous, because of course public means “public to the people who can see the class in the first place”.

So now what should happen when Alice recompiles before she checks in this change? The C# compiler does not know if you, the author of class D, intended method Dangerous to be accessible by a user of public class D. On the one hand, it is a public method of a base class, and so it seems like it should be accessible. On the other hand, the fact that B is internal is evidence that Dangerous is supposed to be inaccessible outside the assembly. A basic design principle of C# is that when the intention is unclear, the compiler brings this fact to your attention by failing. The compiler is identifying yet another form of the Brittle Base Class Failure, which long-time readers know has shown up in numerous places in the design of C#.
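
For the record, here is roughly what you see when Alice recompiles with the internal B above; the error number is what recent C# compilers report, so treat the exact wording as approximate:

public class D : B   // error CS0060: Inconsistent accessibility:
{                    // base class 'B' is less accessible than class 'D'
  ... other stuff ...
}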

Rather than simply making this change and hoping for the best, you and Alice need to sit down and talk about whether B really is a sensible base class of D; it seems plausible that either (1) D ought to be internal also, or (2) D ought to favour composition over inheritance, as sketched below.
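
If option (2) turns out to be the right call, a minimal sketch of the shape of that fix might look like this; the member names here are invented for illustration:

public class D
{
  private readonly B b = new B(); // an implementation detail, invisible to users of D

  public void DoSomething()
  {
    // D decides exactly how and when B's functionality is used;
    // nothing about B leaks into D's public surface area.
    this.b.Dangerous();
  }
}

Which brings us to my more general point: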

More generally: the inheritance mechanism is, as we’ve discussed before, simply the fact that all heritable members of the base type are also members of the derived type. But the inheritance relationship semantics are intended to model the “is a kind of” relationship. It seems reasonable that if D is a kind of B, and D is accessible at a location, then B ought to be accessible at that location as well. It seems strange that you could only use the fact that “a Giraffe is a kind of Animal” at specific locations.

In short, this rule of the language encourages you to use inheritance relationships to model the business domain semantics rather than as a mechanism for code reuse.

Finally, I note that as an alternative, it is legal for a public class to implement an internal interface. In that scenario there is no danger of accidentally exposing dangerous functionality from the interface to the implementing type because of course an interface is not associated with any functionality in the first place; an interface is logically “abstract”. Implementing an internal interface can be used as a mechanism that allows public components in the same assembly to communicate with each other over “back channels” that are not exposed to the public.
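
A sketch of what that back-channel pattern can look like; the types and members here are invented for illustration:

internal interface IBackChannel
{
  void Notify(string message);
}

public class Publisher
{
  // Only code inside this assembly can name IBackChannel, so only this
  // assembly can ever obtain or invoke the back channel.
  internal void Broadcast(IBackChannel listener, string message)
  {
    listener.Notify(message);
  }
}

public class Subscriber : IBackChannel
{
  // An explicit implementation keeps Notify off the public surface of Subscriber.
  void IBackChannel.Notify(string message) { ... }
}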

Dynamic contagion, part two

This is part two of a two-part series on dynamic contagion. Part one is here.


Last time I discussed how the dynamic type tends to spread through a program like a virus: if an expression of dynamic type “touches” another expression then that other expression often also becomes of dynamic type. Today I want to describe one of the least well understood aspects of method type inference, which also uses a contagion model when dynamic gets involved.

Long-time readers know that method type inference is one of my favourite parts of the C# language; for new readers who might not be familiar with the feature, let me briefly describe it. The idea is that when you have a method, say:

IEnumerable<R> Select<A, R>(IEnumerable<A> items, Func<A, R> projection)

and a call to the method, say:

Select(customers, c=>c.Name)

then we infer that you meant to call:

Select<Customer, string>(customers, c=>c.Name)

rather than making you spell it out. In that case, we would first infer that the list of customers is an IEnumerable<Customer> and therefore that the type argument corresponding to A is Customer. From that we would infer that the lambda parameter c is of type Customer, and therefore that the result of the lambda is of type string, and therefore that the type argument corresponding to R is string. This algorithm is already complicated, but when dynamic gets involved, it gets downright weird.
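
To make the statically typed case concrete before we add dynamic to the mix, here is a small self-contained sketch; the Customer class and the body of Select are invented for illustration, but the inference plays out exactly as described above:

using System;
using System.Collections.Generic;

class Customer
{
  public string Name { get; set; }
}

class Program
{
  static IEnumerable<R> Select<A, R>(IEnumerable<A> items, Func<A, R> projection)
  {
    foreach (A item in items)
      yield return projection(item);
  }

  static void Main()
  {
    IEnumerable<Customer> customers = new List<Customer>();
    // A is inferred to be Customer from the first argument; from that the
    // lambda parameter c is Customer, its result is string, and so R is string.
    var names = Select(customers, c => c.Name);
  }
}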

The problem that the language designers faced when deciding how method type inference works with dynamic is exacerbated by our basic design goal for dynamic, that I mentioned two weeks ago: the runtime analysis of a dynamic expression honours all the information that we deduced at compile time. We only use the deduced-at-runtime types for the parts of the expression that were actually dynamic; the parts that were statically typed at compile time remain statically typed at runtime, not dynamically typed. Above we inferred R after we knew A, but what if customers had been of type dynamic? We now have a problem: depending on the runtime type of customers, type inference might succeed dynamically even though it seems like it must fail statically. But if type inference fails statically then the method is not a candidate, and, as we discussed two weeks ago, if the candidate set of a dynamically-dispatched method group is empty then overload resolution fails at compile-time, not at runtime. So it seems that type inference must succeed statically!

What a mess. How do we get out of this predicament? The spec is surprisingly short on details; it says only:

Any type argument that does not depend directly or indirectly on an argument of type dynamic is inferred using [the usual static analysis rules]. The remaining type arguments are unknown. [...] Applicability is checked according to [the usual static analysis rules] ignoring parameters whose types are unknown.

(That last clause is a bit unclear in two ways. First, it really should say “whose types are in any way unknown”; L<unknown> is considered to be an unknown type. Second, along with skipping applicability checking we also skip constraint satisfaction checking. That is, we assume that the runtime construction of L<unknown> will provide a type argument that satisfies all the necessary generic type constraints.)

So what we have here is essentially another type that spreads via a contagion model, the “unknown” type. Just as “possibly infected” is the transitive closure of the exposure relation in simplistic epidemiology, “unknown” is the transitive closure of the “depends on” relation in method type inference.

For example, if we have:

void M<T, U>(T t, L<U> items)

with a call

M(123, dyn);

Then type inference infers that T is int from the first argument. Because the second argument is of dynamic type, and the formal parameter type involves type parameter U, we “taint” U with the “unknown type”.

When a tainted type parameter is “fixed” to its final type argument, we ignore all other bounds that we have computed so far, even if some of the bounds are contradictory, and infer it to be “unknown”. So in this case, type inference would succeed and we would add M<int, unknown> to the candidate set. As noted above, we skip applicability checking for arguments that correspond to parameters whose types are in any way tainted.
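
Here is a compilable sketch of that example; L is just a placeholder generic class, and the dyn parameter stands in for any expression of dynamic type:

class L<T> { }

class C
{
  static void M<T, U>(T t, L<U> items) { }

  static void Test(dynamic dyn)
  {
    // T is fixed to int from the literal 123. U depends on the dynamic
    // argument, so it is "tainted" and inferred as unknown; the candidate
    // M<int, unknown> is accepted and the call is dispatched at runtime.
    M(123, dyn);
  }
}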

But where does the transitive closure of the dependency relationship come into it? In the C# 4 and 5 compilers we did not handle this particularly well, but in Roslyn we now actually cause the taint to spread. Suppose we have:

void M<T, U, V>(T t, L<U> items, Func<T, U, V> func)

and a call

M(123, dyn, (t, u)=>u.Whatever(t));

We infer T to be int and U to be unknown. We then say that V depends on T and U, and so infer V to be unknown as well. Therefore type inference succeeds with an inference of M<int, unknown, unknown>.

The alert reader will at this point be protesting that no matter what happens with method type inference, this is going to turn into a dynamic call, and that lambdas are not legal in dynamic calls in the first place. However, we want to get as much high-quality analysis done as possible so that IntelliSense and other code analysis works correctly even in badly broken code. It is better to allow U to infect V with the “unknown taint” and have type inference succeed, as the specification indicates, than to bail out early and have type inference fail. And besides, if by some miracle we do in the future allow lambdas to be in dynamic calls, we’ll already have a sensible implementation of method type inference.
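
For completeness, this is the diagnostic the paragraph above alludes to if you write that call against the M<T, U, V> declared earlier; the error number and wording are from recent compilers, so treat them as approximate:

M(123, dyn, (t, u) => u.Whatever(t));
// error CS1977: Cannot use a lambda expression as an argument to a dynamically
// dispatched operation without first casting it to a delegate or expression tree type.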



Dynamic contagion, part one

This is part one of a two-part series on dynamic contagion. Part two is here.


Suppose you’re an epidemiologist modeling the potential spread of a highly infectious disease. The straightforward way to model such a series of unfortunate events is to assume that the population can be divided into three sets: the definitely infected, the definitely healthy, and the possibly infected. If a member of the healthy population encounters a member of the definitely infected or possibly infected population, then they become a member of the possibly infected population. (Or, put another way, the possibly infected population is closed transitively over the exposure relation.) A member of the possibly infected population becomes classified as either definitely healthy or definitely infected when they undergo some sort of test. And an infected person can become a healthy person by being cured.

This sort of contagion model is fairly common in the design of computer systems. For example, suppose you have a web site that takes in strings from users, stores them in a database, and serves them up to other users. Like, say, this blog, which takes in comments from you, stores them in a database, and then serves them right back up to other users. That’s a Cross Site Scripting (XSS) attack waiting to happen right there. A common way to mitigate the XSS problem is to use data tainting, which uses the contagion model to identify strings that are possibly hostile. Whenever you do anything to a potentially-hostile string, like, say, concatenate it with a non-hostile string, the result is a possibly-hostile string. If the string is determined via some test to be benign, or can have its potentially hostile parts stripped out, then it becomes safe.
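
A minimal sketch of that idea, not modeled on any particular library: the taint travels through concatenation, and only an explicit sanitization step removes it.

struct TaintedString
{
  public readonly string Value;
  public readonly bool PossiblyHostile;

  public TaintedString(string value, bool possiblyHostile)
  {
    this.Value = value;
    this.PossiblyHostile = possiblyHostile;
  }

  // The contagion model: the result is possibly hostile if either operand is.
  public static TaintedString operator +(TaintedString left, TaintedString right)
  {
    return new TaintedString(left.Value + right.Value,
                             left.PossiblyHostile || right.PossiblyHostile);
  }

  // Encoding the string "cures" it; the result is safe to serve back up.
  public TaintedString Sanitize()
  {
    return new TaintedString(System.Net.WebUtility.HtmlEncode(this.Value), false);
  }
}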

The “dynamic” feature in C# 4 and above has a lot in common with these sorts of contagion models. As I pointed out last time, when an argument of a call is dynamic then odds are pretty good that the compiler will classify the result of the call as dynamic as well; the taint spreads. In fact, when you use almost any operator on a dynamic expression, the result is of dynamic type, with a few exceptions. (“is”, for example, always returns a bool.) You can “cure” an expression to prevent it from spreading dynamism by casting it to object, or to whatever other non-dynamic type you’d like; casting dynamic to object is an identity conversion.
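
A quick sketch of that contagion and its cure:

static void Example(dynamic d)
{
  var sum = d + 1;            // sum is of type dynamic; the taint spreads
  bool flag = d is string;    // "is" is one of the exceptions; the result is bool
  object cured = (object)d;   // an identity conversion that stops the spread
  int length = cured.ToString().Length;  // statically typed from here on
}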

The way that dynamic is contagious is an emergent phenomenon of the rules for working out the types of expressions in C#. There is, however, one place where we explicitly use a contagion model inside the compiler in order to correctly work out the type of an expression that involves dynamic types: it is one of the most arcane aspects of method type inference. Next time I’ll give you all the rundown on that.

