Lowering in language design, part two

Last time on FAIC I described how language designers and compiler writers use “lowering” to transform a high-level program written in a particular language into a lower-level program written using simpler features of the same language. Did you notice what this technique presupposes? To illustrate, let’s take a look at a line of the C# specification:

the operation e as T produces the same result as e is T ? (T)(e) : (T)null except that e is only evaluated once.

This seems bizarre; why does the specification say that a higher-level operation is exactly the same as a lower-level operation, except that it isn’t? Because there is no way to express the exact semantics of as in legal C# in any lowered form! The feature “evaluate a subexpression to produce its side effects and value once and then use that value several times throughout its containing expression” does not exist in C#, so the specification is forced to describe the lowering in this imprecise and roundabout way.
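To see the difference, consider a sketch like this one (my own example, not from the specification), in which the subexpression being converted has a side effect; the naive lowering visibly evaluates it twice:

    using System;

    class AsLoweringDemo
    {
        static int callCount = 0;

        // A hypothetical side-effecting subexpression: each evaluation
        // increments a counter so we can observe how often it runs.
        static object GetValue()
        {
            callCount += 1;
            return "hello";
        }

        static void Main()
        {
            // The real operator evaluates GetValue() exactly once.
            string a = GetValue() as string;
            Console.WriteLine(callCount); // 1

            // The lowered form from the specification evaluates the
            // subexpression twice: once for the test, once for the cast.
            string b = GetValue() is string ? (string)GetValue() : (string)null;
            Console.WriteLine(callCount); // 3
        }
    }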

A nice principle of programming language design, one that C# does not adhere to, is that all the higher-level features of the language can be expressed in terms of its lower-level features. Why is this nice to have? Well, not only is it a big convenience for writers of the specification, it is also a big convenience for writers of language analyzers. If you can programmatically transform a program in a high-level language into an exactly equivalent program in a language with far fewer complicated concepts, then your analyzer gets simpler.
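For contrast, foreach is an example of a C# feature whose lowering can be expressed in legal C#; the specification defines it in terms of the enumerator pattern, roughly like this (a simplified sketch; the actual expansion in the specification handles several more cases):

    using System;
    using System.Collections.Generic;

    class ForeachLoweringDemo
    {
        static void Main()
        {
            List<int> items = new List<int> { 1, 2, 3 };

            // High-level form:
            foreach (int x in items)
                Console.WriteLine(x);

            // Roughly the lower-level form it is specified in terms of:
            {
                IEnumerator<int> e = items.GetEnumerator();
                try
                {
                    while (e.MoveNext())
                    {
                        int x = e.Current;
                        Console.WriteLine(x);
                    }
                }
                finally
                {
                    e.Dispose();
                }
            }
        }
    }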

Lowering in language design, part one

Programming language designers and users talk a lot about the “height” of language features; some languages are considered to be very “high level” and some are considered to be very “low level”. A “high level” language is, generally speaking, one which emphasizes the business concerns of the program, and a low-level language is one that emphasizes the mechanisms of the underlying hardware. As two extreme examples, here’s a program fragment in my favourite high-level language, Inform7:

Past, present, future

Thanks to everyone who came out to the on-the-web and in-person events last week; it was an exhausting week but I had a great time. I’ll post the links to the recorded versions of the talks once I have them.

Developer Tech was kind enough to ask me to write a few words about the past, present and future of C#; this article is mostly pitched at people unfamiliar with the history of the language. If you read this blog you probably know all this already, but if you want to check it out, the article is here: http://www.developer-tech.com/news/2014/jan/30/past-present-and-future-c/

Eric rambles on about C#, again

Rachel Roumeliotis, who amongst other things edits C# books for O’Reilly, recently did an interview with me where I ramble on about async/await, Roslyn, performance analysis as an engineering discipline, and some broad-strokes ideas for future language research areas. If you have sixteen minutes to burn, check it out! The O’Reilly Radar blog post is here, and the video has also been posted to YouTube here.

A couple of things to mention here. First, I say in the video that we’ve shipped one preview release of Roslyn; in fact we have shipped two. The video was recorded before we had announced the new release. And second, I want to re-emphasize that the end bit, where you get more of Eric’s musings about ideas for future language research areas, is for your entertainment. We have not announced any product beyond Roslyn, and we are certainly making no promises whatsoever about the feature sets of unannounced, entirely hypothetical products. Enjoy!

Persistence, façades and Roslyn’s red-green trees

We decided early in the Roslyn design process that the primary data structure that developers would use when analyzing code via Roslyn is the syntax tree. And thus one of the hardest parts of the early Roslyn design was figuring out how we were going to implement syntax tree nodes, and what information they would proffer up to the user. We would like to have a data structure that has the following characteristics:

  • Immutable.
  • The form of a tree.
  • Cheap access to parent nodes from child nodes.
  • Possible to map from a node in the tree to a character offset in the text.
  • Persistent.

By persistence I mean the ability to reuse most of the existing nodes in the tree when an edit is made to the text buffer. Since the nodes are immutable, there’s no barrier to reusing them, as I’ve discussed many times on this blog. We need this for performance; we cannot be re-parsing huge wodges of text every time you hit a key. We need to re-lex and re-parse only the portions of the tree that were affected by the edit, because we are potentially re-doing this analysis between every keystroke. (Determining what those portions of the tree are is quite tricky; I might blog about that at a later date. An edit that, for example, adds async to a method can cause the parse of await(foo); in the method body to change from an invocation to a usage of the await contextual keyword.)

When you try to put all five of those things into one data structure you immediately run into problems:

  • How do you build a tree node in the first place? The parent and the child both refer to each other, and are immutable, so which one gets built first?
  • Supposing you manage to solve that problem: how do you make it persistent? You cannot re-use a child node in a different parent because that would involve telling the child that it has a new parent. But the child is immutable.
  • Supposing you manage to solve that problem: when you insert a new character into the edit buffer, the absolute position of every node that is mapped to a position after that point changes. This makes it very difficult to make a persistent data structure, because any edit can change the spans of most of the nodes!

But on the Roslyn team we routinely do impossible things. We actually do the impossible by keeping two parse trees. The “green” tree is immutable, persistent, has no parent references, is built “bottom-up”, and every node tracks its width but not its absolute position. When an edit happens we rebuild only the portions of the green tree that were affected by the edit, which is typically about O(log n) of the total parse nodes in the tree.

The “red” tree is an immutable façade that is built around the green tree; it is built “top-down” on demand and thrown away on every edit. It computes parent references by manufacturing them on demand as you descend through the tree from the top. It manufactures absolute positions by computing them from the widths, again, as you descend.
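To make the shape of the scheme concrete, here is a minimal sketch of the two kinds of nodes (my own simplification for illustration; Roslyn’s actual types are considerably more elaborate):

    using System.Collections.Generic;
    using System.Linq;

    // Green nodes: immutable, parentless, built bottom-up, and aware only
    // of their widths, not their absolute positions.
    sealed class GreenNode
    {
        public readonly string Kind;
        public readonly int Width; // total characters covered by this node
        public readonly IReadOnlyList<GreenNode> Children;

        public GreenNode(string kind, IReadOnlyList<GreenNode> children, int leafWidth = 0)
        {
            Kind = kind;
            Children = children;
            Width = children.Count == 0 ? leafWidth : children.Sum(c => c.Width);
        }
    }

    // Red nodes: a façade over the green tree, manufactured on demand as you
    // descend from the root. Each red node carries a parent reference and
    // computes its absolute position from the widths of preceding siblings.
    sealed class RedNode
    {
        private readonly GreenNode green;
        public readonly RedNode Parent;
        public readonly int Position; // absolute character offset in the text

        public RedNode(GreenNode green, RedNode parent, int position)
        {
            this.green = green;
            Parent = parent;
            Position = position;
        }

        public IEnumerable<RedNode> Children()
        {
            int offset = Position;
            foreach (GreenNode child in green.Children)
            {
                yield return new RedNode(child, this, offset);
                offset += child.Width;
            }
        }
    }

In this sketch, when an edit arrives only the green nodes on the path from the changed tokens up to the root are rebuilt and the rest are shared; a fresh red root is simply created over the new green root, and the old façade is discarded.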

You, the consumer of the Roslyn API, only ever see the red tree; the green tree is an implementation detail. (And if you use the debugger to peer into the internal state of a parse node you’ll in fact see that there is a reference to another parse node in there of a different type; that’s the green tree node.)

Incidentally, these are called “red/green trees” because those were the whiteboard marker colours we used to draw the data structure in the design meeting. There’s no other meaning to the colours.

The benefit of this strategy is that we get all those great things: immutability, persistence, parent references, and so on. The cost is that this system is complex and can consume a lot of memory if the “red” façades get large. We are at present doing experiments to see if we can reduce some of the costs without losing the benefits.


Next time on FAIC: What precisely is “implementation-defined behaviour”?