The best advice I ever got

The super nice people over at InformIT[1. You may recall that they also recently asked me for my advice on good books for C# programmers. In the interests of full disclosure, I note that in my spare time I write and edit C# programming books for Addison-Wesley, which is owned by the same company that owns InformIT.] are running a series of short articles with the theme “the best advice I ever got”, which I think should prove to be an interesting series. They were kind enough to ask me to give an example of some good advice I got that helped my programming career, though as you’ll see it is not actually about programming at all.

Next time on FAIC: When is a cast not a cast?

Foolish consistency is foolish

Once again today’s posting is presented as a dialogue, as is my wont.

Why is var sometimes required on an implicitly typed local variable, and sometimes illegal on one?

That’s a good question, but can you make it more precise? Start by listing the situations in which an implicitly typed local variable either must or must not use var.

Sure. An implicitly typed local variable must be declared with var in the following statements:

Continue reading
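To illustrate the tension in the question, here are two cases of my own choosing (a sketch, not the dialogue's full list): an anonymous type forces var, while a query range variable forbids it.

```csharp
using System;
using System.Linq;

class VarDemo
{
    static void Main()
    {
        // var is *required* here: an anonymous type has no name you could
        // write out, so there is no explicitly typed way to declare this local.
        var order = new { Id = 42, Total = 19.95m };
        Console.WriteLine(order.Id); // 42

        // var is *illegal* here: the range variable c is an implicitly typed
        // local, but writing "from var c in names" does not compile; the
        // type of c is always inferred.
        var names = new[] { "Alice", "Bob" };
        var lengths = from c in names select c.Length;
        Console.WriteLine(string.Join(",", lengths)); // 5,3
    }
}
```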

Eric rambles on about C#, again

Rachel Roumeliotis, who amongst other things edits C# books for O’Reilly, recently did an interview with me where I ramble on about async/await, Roslyn, performance analysis as an engineering discipline, and some broad-strokes ideas for future language research areas. If you have sixteen minutes to burn, check it out! The O’Reilly Radar blog post is here, and the video has also been posted to YouTube here.

A couple of things to mention here: first, I say in the video that we’ve shipped one preview release of Roslyn; in fact we have shipped two. The video was recorded before we had announced the new release. And second, I want to re-emphasize that the end bit, where you get more of Eric’s musings about ideas for future language research areas, is for your entertainment. We have not announced any product beyond Roslyn, and we are certainly making no promises whatsoever about the feature sets of unannounced, entirely hypothetical products. Enjoy!

Implementation-defined behaviour

As I’ve mentioned several times on this blog before, C# has been carefully designed to eliminate some of the “undefined behaviour” and “implementation-defined behaviour” that you see in languages like C and C++. But I’m getting ahead of myself; I should probably start by making a few definitions.

Traditionally we say that a programming language idiom has undefined behaviour if use of that idiom can have any effect whatsoever; it can work the way you expect it to or it can erase your hard disk or crash your machine. Moreover, the compiler author is under no obligation to warn you about the undefined behaviour. (And in fact, there are some languages in which programs that use “undefined behaviour” idioms are permitted by the language specification to crash the compiler!)

An example of undefined behaviour in C, C++ and C# is writing through a pointer that you have no business writing through. Doing so might cause a fault that crashes the process. But the memory could be valid, and it could be an important data structure of the runtime software. It could be code that you are now overwriting with code that does something else. And hence anything can happen: you’re rewriting the software that is running right now into software that does something else.

By contrast, an idiom has implementation-defined behaviour if the specification gives the compiler author several choices about how to implement the feature, and requires that one of them be chosen. As the name implies, implementation-defined behaviour is at least defined. For example, C# permits an implementation to either throw an exception or produce a value when an integer division overflows, but the implementation must pick one. It cannot erase your hard disk.
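This particular bit of implementation-defined behaviour is easy to observe. In the sketch below the operands are kept in variables so the compiler cannot fold the constant expression; either outcome shown is permitted by the specification:

```csharp
using System;

class DivisionOverflowDemo
{
    static void Main()
    {
        // Kept in variables so the division happens at runtime rather than
        // being folded (and rejected) at compile time.
        int numerator = int.MinValue;
        int denominator = -1;

        try
        {
            int quotient = unchecked(numerator / denominator);
            // Permitted outcome 1: the overflow goes unreported and the
            // result is the left operand, int.MinValue.
            Console.WriteLine($"no exception; quotient = {quotient}");
        }
        catch (ArithmeticException e)
        {
            // Permitted outcome 2: an ArithmeticException is thrown
            // (on .NET in practice, an OverflowException).
            Console.WriteLine($"threw {e.GetType().Name}");
        }
    }
}
```

Which branch you land in tells you something about your implementation, not about the language.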

All that said, for the rest of this article I’m not going to make a strong distinction between these two flavours of underspecified behaviour. The question I’m actually interested in addressing today is:

What are some of the factors that lead a language design committee to leave certain language idioms as undefined or implementation-defined behaviours?

The first major factor is: are there two existing implementations of the language in the marketplace that disagree on the behaviour of a particular program? If FooCorp’s compiler compiles M(A(), B()) as “call A, call B, call M”, and BarCorp’s compiler compiles it as “call B, call A, call M”, and neither is the “obviously correct” behaviour, then there is a strong incentive for the language design committee to say “you’re both right” and make it implementation-defined behaviour. This is particularly the case if FooCorp and BarCorp both have representatives on the committee.

The next major factor is: does the feature naturally present many different possibilities for implementation, some of which are clearly better than others? For example, in C# the compiler’s analysis of a “query comprehension” expression is specified as “do a syntactic transformation into an equivalent program that does not have query comprehensions, and then analyze that program normally”. There is very little freedom for an implementation to do otherwise. For example, you and I both know that

from c in customers
from o in orders
where c.Id == o.CustomerId
select new { c, o }

and

from c in customers
join o in orders
  on c.Id equals o.CustomerId
select new { c, o }

are semantically the same, and that the latter is likely to be more efficient. But the C# compiler never, ever turns the first query expression syntax into a call to Join; it always turns it into calls to SelectMany and Where. The runtime implementation of those methods is of course fully within its rights to detect that the object returned by SelectMany is being passed to Where, and to build an optimized join if it sees fit, but the C# compiler does not make any assumptions like that. We wanted the query comprehension transformation to be syntactic; the smart optimizations can be in the runtime.
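To make the "syntactic transformation" point concrete, the calls generated for the first query look roughly like the chain below (a simplification; the actual translation threads the pair through a compiler-generated "transparent identifier", but the shape is the same):

```csharp
using System;
using System.Linq;

class QueryTranslationDemo
{
    static void Main()
    {
        // Stand-in data; the original post's customers and orders are abstract.
        var customers = new[] { new { Id = 1 }, new { Id = 2 } };
        var orders = new[] { new { CustomerId = 1 }, new { CustomerId = 1 } };

        // Roughly what the compiler emits for:
        //   from c in customers from o in orders
        //   where c.Id == o.CustomerId select new { c, o }
        var pairs = customers
            .SelectMany(c => orders, (c, o) => new { c, o })  // second from
            .Where(t => t.c.Id == t.o.CustomerId)             // where clause
            .Select(t => new { t.c, t.o });                   // final select

        Console.WriteLine(pairs.Count()); // 2: customer 1 matches both orders
    }
}
```

Note that no Join call appears anywhere; any join-style optimization is the runtime library's business, not the compiler's.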

By contrast, the C# specification says that the foreach loop should be treated as the equivalent while loop inside a try block, but allows the implementation some flexibility. A C# compiler is permitted to say, for example “I know how to implement the loop semantics more efficiently over an array” and use the array’s indexing feature rather than converting the array to a sequence as the specification suggests it should. A C# implementation is permitted to skip calling GetEnumerator.
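Writing the specified expansion out by hand makes the flexibility easier to see. A simplified sketch over a List&lt;int&gt; (the real rules also cover pattern-based enumerators, the non-generic IEnumerable, and the array special case just mentioned):

```csharp
using System;
using System.Collections.Generic;

class ForeachLoweringDemo
{
    static void Main()
    {
        var collection = new List<int> { 1, 2, 3 };
        int sum = 0;

        // Hand-written version of what the spec says
        //   foreach (int item in collection) sum += item;
        // should be treated as: a while loop inside a try block.
        List<int>.Enumerator e = collection.GetEnumerator();
        try
        {
            while (e.MoveNext())
            {
                int item = e.Current;
                sum += item;
            }
        }
        finally
        {
            // The spec puts the disposal in a finally block so the
            // enumerator is cleaned up even if the body throws.
            e.Dispose();
        }

        Console.WriteLine(sum); // 6
    }
}
```

For an array, an implementation is free to discard all of this and emit a plain indexed for loop instead.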

A third factor is: is the feature so complex that a detailed breakdown of its exact behaviour would be difficult or expensive to specify? The C# specification says very little indeed about how anonymous methods, lambda expressions, expression trees, dynamic calls, iterator blocks and async blocks are to be implemented; it merely describes the desired semantics and some restrictions on behaviour, and leaves the rest up to the implementation. Different implementations could reasonably do different codegen here and still get good behaviours.

A fourth factor is: does the feature impose a high burden on the compiler to analyze? For example, in C# if you have:

Func<int, int> f1 = (int x) => x + 1;
Func<int, int> f2 = (int x) => x + 1;
bool b = object.ReferenceEquals(f1, f2);

Suppose we were to require b to be true. How are you going to determine when two functions are “the same”? Doing an “intensionality” analysis — do the function bodies have the same content? — is hard, and doing an “extensionality” analysis — do the functions have the same results when given the same inputs? — is even harder. A language specification committee should seek to minimize the number of open research problems that an implementation team has to solve! In C# this is therefore left to be implementation-defined; a compiler can choose to make them reference equal or not at its discretion.

A fifth factor is: does the feature impose a high burden on the runtime environment?

For example, in C# dereferencing past the end of an array is well-defined; it produces an array-index-was-out-of-bounds exception. This feature can be implemented with a small — not zero, but small — cost at runtime. Calling an instance or virtual method with a null receiver is defined as producing a null-was-dereferenced exception; again, this can be implemented with a small, but non-zero cost. The benefit of eliminating the undefined behaviour pays for the small runtime cost. But the cost of determining if an arbitrary pointer in unsafe code is safe to dereference would have a large runtime cost, and so we do not do it; we move the burden of making that determination to the developer, who, after all, is the one who turned off the safety system in the first place.
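Both of those well-defined failures are cheap to demonstrate:

```csharp
using System;

class WellDefinedFailuresDemo
{
    static void Main()
    {
        // Indexing past the end of an array is well-defined: a bounds check
        // at small runtime cost, and a specified exception on failure.
        int[] a = new int[3];
        try { Console.WriteLine(a[5]); }
        catch (IndexOutOfRangeException) { Console.WriteLine("index out of bounds"); }

        // Calling an instance member on a null receiver is likewise
        // well-defined: a specified exception, not arbitrary behaviour.
        string s = null;
        try { Console.WriteLine(s.Length); }
        catch (NullReferenceException) { Console.WriteLine("null dereference"); }
    }
}
```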

A sixth factor is: does making the behaviour defined preclude some major optimization? For example, C# defines the ordering of side effects when observed from the thread that causes the side effects. But the behaviour of a program that observes side effects of one thread from another thread is implementation-defined except for a few “special” side effects. (Like a volatile write, or entering a lock.) If the C# language required that all threads observe the same side effects in the same order then we would have to restrict modern processors from doing their jobs efficiently; modern processors depend on out-of-order execution and sophisticated caching strategies to obtain their high level of performance.
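A sketch of why those "special" side effects matter. The volatile modifier below is what guarantees the worker thread eventually observes the main thread's write; remove it and the specification no longer promises that the loop terminates, because an implementation may legally cache the non-volatile read:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

class VolatileFlagDemo
{
    // volatile reads and writes are among the "special" side effects whose
    // cross-thread ordering the language actually pins down.
    private static volatile bool stop;

    static void Main()
    {
        var worker = Task.Run(() =>
        {
            // A volatile read each iteration: the other thread's write
            // must eventually become visible here.
            while (!stop) { /* spin */ }
            Console.WriteLine("worker observed the write");
        });

        Thread.Sleep(50);
        stop = true;   // a volatile write, visible to the worker
        worker.Wait();
    }
}
```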

Those are just a few factors that come to mind; there are of course many, many other factors that language design committees debate before making a feature “implementation defined” or “undefined”.

Persistence, façades and Roslyn’s red-green trees

We decided early in the Roslyn design process that the primary data structure that developers would use when analyzing code via Roslyn is the syntax tree. And thus one of the hardest parts of the early Roslyn design was figuring out how we were going to implement syntax tree nodes, and what information they would proffer up to the user. We would like to have a data structure that has the following characteristics:

  • Immutable.
  • The form of a tree.
  • Cheap access to parent nodes from child nodes.
  • Possible to map from a node in the tree to a character offset in the text.
  • Persistent.

By persistence I mean the ability to reuse most of the existing nodes in the tree when an edit is made to the text buffer. Since the nodes are immutable, there’s no barrier to reusing them, as I’ve discussed many times on this blog. We need this for performance; we cannot be re-parsing huge wodges of text every time you hit a key. We need to re-lex and re-parse only the portions of the tree that were affected by the edit because we are potentially re-doing this analysis between every keystroke.

Aside: Determining what those portions of the tree are is quite tricky; I might blog about that at a later date. An edit that, for example, adds async to a method can cause the parse of await(foo); in the method body to change from an invocation to a usage of the await contextual keyword.

When you try to put all five of those things into one data structure you immediately run into problems:

  • How do you build a tree node in the first place? The parent and the child both refer to each other, and are immutable, so which one gets built first?
  • Supposing you manage to solve that problem: how do you make it persistent? You cannot re-use a child node in a different parent because that would involve telling the child that it has a new parent. But the child is immutable.
  • Supposing you manage to solve that problem: when you insert a new character into the edit buffer, the absolute position of every node that is mapped to a position after that point changes. This makes it very difficult to make a persistent data structure, because any edit can change the spans of most of the nodes!

But on the Roslyn team we routinely do impossible things. We actually do the impossible by keeping two parse trees:

The “green” tree is immutable, persistent, has no parent references, is built “bottom-up”, and every node tracks its width but not its absolute position. When an edit happens we rebuild only the portions of the green tree that were affected by the edit, which is typically about O(log n) of the total parse nodes in the tree.
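A minimal sketch of a green node, using hypothetical names rather than Roslyn's actual types, shows how persistence falls out of immutability plus width-only bookkeeping: replacing one child rebuilds only the spine above it, and every untouched subtree is shared by reference.

```csharp
using System;

// Hypothetical green node: immutable, no parent reference, and it records
// only its width, never an absolute position in the text.
sealed class Green
{
    public readonly string Text;      // null for interior nodes
    public readonly int Width;
    public readonly Green[] Children;

    public Green(string text)
    {
        Text = text;
        Width = text.Length;
        Children = Array.Empty<Green>();
    }

    public Green(params Green[] children)
    {
        Children = children;
        foreach (var c in children) Width += c.Width;
    }

    // Persistence: rebuild only this node; reuse every other child as-is.
    public Green ReplaceChild(int index, Green newChild)
    {
        var copy = (Green[])Children.Clone();
        copy[index] = newChild;
        return new Green(copy);
    }
}

class PersistenceDemo
{
    static void Main()
    {
        // Green tree for the text "a+b*c".
        var left = new Green(new Green("a"), new Green("+"), new Green("b"));
        var root = new Green(left, new Green("*"), new Green("c"));

        // Edit "c" to "d": only the root spine is rebuilt.
        var newRoot = root.ReplaceChild(2, new Green("d"));

        Console.WriteLine(ReferenceEquals(root.Children[0], newRoot.Children[0])); // True: "a+b" reused
        Console.WriteLine(newRoot.Width); // 5: the width of "a+b*d"
    }
}
```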

The “red” tree is an immutable façade that is built around the green tree; it is built “top-down” on demand and thrown away on every edit. It computes parent references by manufacturing them on demand as you descend through the tree from the top. It manufactures absolute positions by computing them from the widths, again, as you descend. You, the consumer of the Roslyn API, only ever see the red tree; the green tree is an implementation detail. (And if you use the debugger to peer into the internal state of a parse node you’ll in fact see that there is a reference to another parse node in there of a different type; that’s the green tree node.)
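Here is a minimal sketch of the façade idea, again with hypothetical names rather than Roslyn's actual types: red nodes wrap width-only green nodes, manufacture parent references as you descend, and compute absolute positions by summing the widths of earlier siblings.

```csharp
using System;

// Hypothetical green node: immutable, width only, no parent.
sealed class GreenNode
{
    public readonly int Width;
    public readonly GreenNode[] Children;

    public GreenNode(int width)
    {
        Width = width;
        Children = Array.Empty<GreenNode>();
    }

    public GreenNode(params GreenNode[] children)
    {
        Children = children;
        foreach (var c in children) Width += c.Width;
    }
}

// Hypothetical red façade: internally mutable (it caches the children it
// manufactures) but immutable as far as its caller can observe.
sealed class RedNode
{
    public readonly GreenNode Green;
    public readonly RedNode Parent;   // manufactured top-down, on demand
    public readonly int Position;     // absolute offset, computed from widths
    private RedNode[] _cache;

    public RedNode(GreenNode green, RedNode parent, int position)
    {
        Green = green;
        Parent = parent;
        Position = position;
    }

    public RedNode GetChild(int index)
    {
        _cache ??= new RedNode[Green.Children.Length];
        if (_cache[index] == null)
        {
            // My child's start = my start + widths of its earlier siblings.
            int pos = Position;
            for (int i = 0; i < index; i++)
                pos += Green.Children[i].Width;
            _cache[index] = new RedNode(Green.Children[index], this, pos);
        }
        return _cache[index];
    }
}

class RedGreenDemo
{
    static void Main()
    {
        // Green tree for the text "a+b": three leaves of width 1.
        var green = new GreenNode(new GreenNode(1), new GreenNode(1), new GreenNode(1));
        var red = new RedNode(green, null, 0);

        Console.WriteLine(red.GetChild(2).Position);                     // 2: "b" starts at offset 2
        Console.WriteLine(ReferenceEquals(red.GetChild(0).Parent, red)); // True: parent manufactured
    }
}
```

Because the red nodes are built lazily and discarded on every edit, only the spines you actually walk ever pay the façade's cost.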

Incidentally, these are called “red/green trees” because those were the whiteboard marker colours we used to draw the data structure in the design meeting. There’s no other meaning to the colours.

The benefit of this strategy is that we get all those great things: immutability, persistence, parent references, and so on. The cost is that this system is complex and can consume a lot of memory if the “red” façades get large. We are at present doing experiments to see if we can reduce some of the costs without losing the benefits.

Next time on FAIC: What precisely is “implementation-defined behaviour”?

Commentary from 2020:

I did not come up with the idea for red-green trees but I was in the room where it happened and it was a lot of fun coming up with a new twist on an old data structure that would meet our needs while still being “functional style”. I have since used variations on this data structure a few times in different parsers.

There were a number of questions in the comments to the original post that I never answered, oddly enough. Sorry for the eight year delay:

  • How many children does each tree node have?

As many as it needs; a binary operator node has two children, a unary operator has one, and so on. I wrote an XML document that lists the children of every tree node type, and a script that automatically generates the tree node classes from that XML doc. I don’t know if this scheme is still used for the Roslyn project or not!

  • Is the red tree rebuilt on every edit or every traversal?

Every edit. The red tree is internally mutable but presents an immutable interface to its caller. When you access a child of a red tree node for the first time, a new red tree node is created for that child green node, and the new red child is then cached in its red parent.

  • Was GC pressure a consideration?

Good heavens yes. This scheme ends up creating up to twice as many small objects as a normal parse tree would, and therefore a lot of collection pressure. Moreover, there is a real danger that you end up interleaving small short-term allocations with small long-term allocations, which produces the worst case for data locality while maximizing the work the compacting collector must do. We measured the GC cost of this solution very carefully.

  • How bad a GC penalty is discarding the red tree on every edit?

Since the red tree is built on demand, it is usually a very small fraction of the green tree; it consists of just the “spines” leading to whatever nodes were of interest to IntelliSense or whatever analyzers are running. If for some reason you must traverse every node of the red tree between edits, then building the whole tree only to throw it away is indeed expensive.