# Category Archives: Code generation

The first algorithm I ever worked on in the C# compiler was the optimizer that handles string concatenations.[1. Unfortunately I did not manage to port these optimizations to the Roslyn codebase before I left; hopefully someone will get to that!] It’s all pretty straightforward, but there are a few clever bits that I thought I might discuss today.

# Nullable micro-optimizations, part eight

Today, the answer to the puzzle I posed last week. The answer is no, that optimization is not legal in general, though it is legal in a few specific cases.

As I have discussed several times before, C# has very strict rules about what order operations happen in. A quick rule of thumb is: **operands run left-to-right**. In our example we had `X() * Y() + Z()`, so let’s see what conclusions we reach by applying this rule: `X()` is an operand of the `*` to the left of `Y()`, so the call to `X()` must happen before the call to `Y()`. `X() * Y()` is an operand of the `+` to the left of `Z()`, so the multiplication must happen before the call to `Z()`.

The compiler must generate code that computes `X()`, then `Y()`, then the multiplication, then `Z()`, and then the addition. But our proposed optimization was:

```csharp
int? r;
int? tempX = X();
int? tempY = Y();
int tempZ = Z();
r = tempX.HasValue & tempY.HasValue
    ? new int?(tempX.GetValueOrDefault() * tempY.GetValueOrDefault() + tempZ)
    : new int?();
```

which computes `Z()` *before* the multiplication.

So what’s the big deal? **Multiplication of integers can throw an exception in a checked context**. That exception should prevent `Z()` from being called in the first place should `X() * Y()` throw on the multiplication. **This optimization is only valid in an unchecked context.**

And of course it just gets worse if we start to consider lifted arithmetic on types other than `int?`. It’s a bad practice to write a user-defined operator with a side effect, but it is legal, and the C# compiler must ensure that side effects of user-defined operators are observed to happen in the prescribed order.

Rather than try to figure out all the types for which this optimization is legal in various contexts, Roslyn makes no attempt to optimize this kind of binary operator. It generates a temporary `int?` for the multiplication, and then does the regular lifted arithmetic for the addition. Another lovely optimization spoiled by conflict with a lovely invariant.

But wait!

The problem is that the side effects of the multiplication must be observed to come before the side effects of the right addend. What if the right addend has no side effects? Then we can do this optimization!

The most common situation in which the right addend has no side effects is when it is a constant:

```csharp
int? r = X() * Y() + 1;
```

This *can* legally be optimized to:

```csharp
int? r;
int? tempX = X();
int? tempY = Y();
r = tempX.HasValue & tempY.HasValue
    ? new int?(tempX.GetValueOrDefault() * tempY.GetValueOrDefault() + 1)
    : new int?();
```

And in fact I did add this optimization to Roslyn; just as unary operations and conversions can be “distributed”, so can binary operations where the right operand is a constant. Moreover, as a pleasant side effect, doing so allowed for an easy implementation of various optimizations that improve the quality of lifted `x += 1` and `x++` expressions.

Well, that took rather more episodes than I thought it would when I started! I could talk a bit more about how Roslyn optimizes other more exotic nullable arithmetic, like equality, inequality, and logical operations. I could also talk a bit about the stuff I didn’t get to implement before I left; Roslyn does a slightly worse job than the original recipe compiler when optimizing expressions where we know that the result is going to be null, but also must compute a side effect. But rather than go down those lengthy rabbit holes, I think I’ll call this series done at eight episodes.

**Next time on FAIC:** Some of these optimizations we’ve been talking about are *elisions*; I’ll talk a bit about how computer programmers have borrowed this term from linguistics, and how we use it in two different senses.

# Nullable micro-optimizations, part seven

Today, a puzzle for you.

We’ve been talking about how the Roslyn C# compiler aggressively optimizes nested lifted unary operators and conversions by using a clever technique. The compiler realizes the inner operation as a conditional expression with a non-null nullable value on the consequence branch and a null nullable value on the alternative branch, distributes the outer operation to each branch, and then optimizes the branches independently. That then gives a conditional expression that can itself be the target of further optimizations if the nesting is deeper.

This works great for lifted conversions and unary operators. Does it also work for binary operators? It seems like it would be a lot harder to make this optimization work for a lifted binary operator where *both* operands are themselves lifted operations. But what if just *one* of the operands was a lifted operation, and the other operand was guaranteed to be non-null? There might be an opportunity to optimize such an expression. Let’s try it. Suppose `X()` and `Y()` are expressions of type `int?` and that `Z()` is an expression of type `int`:

```csharp
int? r = X() * Y() + Z();
```

We know from our previous episodes that operator overload resolution is going to choose lifted multiplication for the inner subexpression, and lifted addition for the outer subexpression. We know that the right operand of the lifted addition will be treated as though it was `new int?(Z())`, but we can optimize away the unnecessary conversion to `int?`. So the question is: **can the C# compiler legally code-generate that as though the user had written:**

```csharp
int? r;
int? tempX = X();
int? tempY = Y();
int tempZ = Z();
r = tempX.HasValue & tempY.HasValue
    ? new int?(tempX.GetValueOrDefault() * tempY.GetValueOrDefault() + tempZ)
    : new int?();
```

If you think the answer is “yes” then the follow-up question is: **can the C# compiler legally make such an optimization for all nullable value types that have lifted addition and multiplication operators?**

If you think the answer is “no” then the follow-up questions are: **why not?** and **is there any scenario where this sort of optimization is valid?**

**Next time on FAIC** we’ll be kind to our fine feathered friends; after that, we’ll find out the answer to today’s question.

Eric is crazy busy at Coverity’s head office; this posting was pre-recorded.

# Nullable micro-optimizations, part six

Previously in this series I said that the original C# compiler pursues a less aggressive strategy for optimizing away temporaries and branches from nested lifted conversions and unary operators because it suffers from “premature optimization”. That’s a loaded term, and I’m not using it in the standard sense, so I want to clarify that a bit.

Donald Knuth, author of the classic four-volume series The Art of Computer Programming, famously said “*premature optimization is the root of all evil*”. I think however that it is more instructive to read that quotation with more context:

> We should forget about small efficiencies, say about 97% of the time: premature optimization is the root of all evil. Yet we should not pass up our opportunities in that critical 3%. A good programmer will not be lulled into complacency by such reasoning, he will be wise to look carefully at the critical code; but only after that code has been identified.

Which is of course what I have echoed in my numerous performance rants over the years: don’t waste your valuable time making risky changes to the 97% of the code that isn’t the slowest thing and that no customer will ever notice. Use a profiler, find the slowest 3%, and spend your optimization budget on that.

That is good advice, but when I say that a compiler suffers from “premature optimization”, that’s not at all what I mean. Rather, I mean that *the compiler performs an optimization pass too early in the compilation process*.

Back in 2010 I described in considerable detail the various stages, or “passes”, that the original recipe C# compiler performs when going from raw text to IL, so you might want to read that. For the purposes of this discussion we can simplify that all down to four stages:

- Lexical and grammatical analysis
- Initial “binding” — that is, semantic analysis
- “Lowering” — that is, rewriting high-level code into low-level code — and additional error detection
- Code generation

You would expect that semantic optimizations[1. I described many of the optimizations that the C# compiler performs back in 2009. ] such as lifted arithmetic lowering would happen in the third stage, [2. Some optimizations of course happen during the fourth phase, because the code generator itself can identify branches and temporaries that can be eliminated.] and for the most part they do.[3. For some examples of how premature optimizations in the initial binding pass led to bugs and breaking changes, see my posts from 2006 on that subject. Part one. Part two.] The implementation decision that vexes me today is the original recipe C# compiler’s strategy: the initial binding pass identifies portions of lifted arithmetic expressions that can be optimized later, and flags them as needing attention during the lowering pass.[4. I am over-simplifying here; it is not as simple as a Boolean flag in most cases. In fact, the amount of information that is stored by the initial binding pass for the use of the optimizer later is quite scary because it is easy to accidentally use the wrong bits when lowering. An example of such a bug is in this StackOverflow question.]

The problem is that the initial binding pass only identifies opportunities for optimization based on the *original* form of the code. If the optimization pass produces “lowered” code that *is itself amenable to further optimization* then it is never optimized because there’s no flag left in there by the initial binding pass!

To make a long story short — and yes, this seems to have gotten rather long, sorry — the practical upshot is that the original recipe compiler is very good at finding “shallow” optimization opportunities on lifted operations, but very bad at making optimizations compose nicely when lifted operations are deeply nested; those tend to generate lots of unnecessary temporaries and branches.

Like I said previously, the compiler is not *required* to make those optimizations, but it has always vexed me that it does not. Roslyn improves on this situation by deferring all lowering and optimization of lifted arithmetic to the lowering phase; only the bare minimum analysis is performed during the initial binding pass. Roslyn optimizes each lifted arithmetic expression as it lowers it to temporaries and conditionals, and then tries aggressively to “distribute” lifted unary operations and conversions into those generated conditionals in order to skip creation of unnecessary temporaries and branches.
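To make the “distribute into the generated conditional” idea concrete, here is a sketch of what the composed optimization produces for a lifted negation of a lifted sum, `int? r = -(x + y);`. The helper name `NegateSum` is mine, not the compiler’s; the body is the kind of lowered code the text describes:

```csharp
using System;

// A sketch of the distributed lowering of `int? r = -(x + y);`.
int? NegateSum(int? x, int? y)
{
    int? temp1 = x;
    int? temp2 = y;
    // The unary minus is pushed into the non-null branch; negating the
    // null branch is a no-op, so no second temporary or conditional is needed.
    return temp1.HasValue & temp2.HasValue
        ? new int?(-(temp1.GetValueOrDefault() + temp2.GetValueOrDefault()))
        : new int?();
}

Console.WriteLine(NegateSum(2, 3) == -(2 + 3)); // True
Console.WriteLine(NegateSum(null, 3) == null);  // True
```

The point of the composition is visible here: the naive lowering would build an `int?` temporary for the sum, test it again, and negate inside a second conditional; the distributed form has exactly one test.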

**Next time on FAIC:** Is it *ever* possible to apply this optimization technique to a lifted *binary* operator? I’ll pose that question in the form of a puzzle.

Eric is crazy busy at Coverity’s head office; this posting was pre-recorded.

# Nullable micro-optimization, part five

Last time in this series I described how a lifted implicit conversion could be “distributed” to both branches of the conditional operator that is the realization of a lifted arithmetic expression, and then optimized further on each side. Of course, the thing being converted can be *any* lifted expression in order to take advantage of this optimization. This means that the optimization “composes” nicely; the optimization could be repeatedly applied when lifted operations are nested.

This is a bit of a silly illustrative example: suppose you have expressions `x` and `y` of type `A?` with a lifted addition operator that produces an `A?`. There’s also a lifted conversion from `A?` to `B?`, and similarly from `B?` to `C?`.

```csharp
C? c = (C?)(B?)(x + y);
```

As we discussed previously in this series, the compiler realizes the lifted addition as a conditional expression. We know that the lifted conversion to `B?` can be “distributed” to the consequence and alternative branches of the conditional expression. That then results in a *different* conditional expression, but one such that the conversion to `C?` can be distributed to each branch of that! That is, the compiler could realize the code above as:

```csharp
C? c;
A? temp1 = x;
A? temp2 = y;
c = temp1.HasValue & temp2.HasValue
    ? new C?((C)(B)(temp1.GetValueOrDefault() + temp2.GetValueOrDefault()))
    : new C?();
```

… by applying the optimization twice, rather than creating a temporary of type `A?` for the sum and a temporary of type `B?` for the conversion of the sum, each with its own conditional expression. The aim of the optimization is to reduce the number of temporaries and conditional expressions, and thereby make the code smaller and produce fewer basic blocks.

A lifted conversion is rather like a lifted unary operator, and in fact the compiler could do the analogous optimization for the lifted unary `+`, `-`, `~` and `!` operators. Continuing our silly example, suppose we have a lifted `~` operator on `A?` that produces an `A?`. If you said:

```csharp
C? c = (C?)(B?)~(x + y);
```

Then the `~` operation can also be “distributed” to each branch of the conditional just as the conversions can be. The insight here is the same as before: if the consequence and alternative are both of the same type then

```csharp
~(condition ? consequence : alternative)
```

is the same as

```csharp
condition ? ~consequence : ~alternative
```

When we furthermore know that the consequence is of the form `new A?(something)` then we know that `~consequence` is the same as `new A?(~something)`. When we know that the alternative is of the form `new A?()`, then we know that `~new A?()` is going to be a no-op, and just produce `new A?()` again. So, to make a long story short, the C# compiler can codegen the code above as:

```csharp
C? c;
A? temp1 = x;
A? temp2 = y;
c = temp1.HasValue & temp2.HasValue
    ? new C?((C)(B)(~(temp1.GetValueOrDefault() + temp2.GetValueOrDefault())))
    : new C?();
```

Again, we save several temporaries and branches by performing this optimization.

Now, I’ve been saying “the compiler *could*” a lot because of course a compiler is not *required* to perform these optimizations, and in fact, the “original recipe” compiler is not very aggressive about performing these optimizations. I examined the original recipe compiler very closely when implementing nullable arithmetic in Roslyn, and discovered that it suffers from a case of “premature optimization”.

**Next time on FAIC:** We’ll digress for the next couple of posts. Then I’ll pick up this subject again with a discussion of the evils of “premature optimization” of nullable arithmetic, and how I’m using that loaded term in a subtly different way than Knuth did.

# Nullable micro-optimization, part four

Last time on FAIC I described how the C# compiler elides the conversion from `int` to `int?` when you add an `int?` to an `int`, and thereby manages to save unnecessary calls to `HasValue` and `GetValueOrDefault()`. Today I want to talk a bit about another kind of nullable conversion that the compiler can optimize. Consider the following, in which `w` is an expression of type `int?`:

```csharp
double? z = w;
```

There is an implicit conversion from `int` to `double`, and so there is a “lifted” conversion from `int?` to `double?`. As I’m sure you’d expect, given the previous entries in this series, this would be code-generated the same as:

```csharp
double? z;
int? temp = w;
z = temp.HasValue ? new double?((double)temp.GetValueOrDefault()) : new double?();
```

If you don’t know anything more about `w` then that’s about as good as it gets. But suppose we did know more. For example, suppose we have:

```csharp
double? z = new int?();
```

That might seem crazy, but bear with me. In this case, obviously the compiler need not ever call `HasValue` in the first place because you and I both know it is going to be false. And we know that there are no side effects of the expression that need to be preserved, so the compiler can simply generate:

```csharp
double? z = new double?();
```

Similarly, suppose we have an expression `q` of type `int`, and the assignment:

```csharp
double? z = new int?(q);
```

Again, clearly we do not need to go through the rigamarole of making a temporary and checking to see if its `HasValue` property is true. We can skip straight to:

```csharp
double? z = new double?((double)q);
```
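A quick sanity check of those two equivalences, as a sketch (`q` here is just an arbitrary sample value):

```csharp
using System;

int q = 42;

// The lifted conversion of a known-null int? is just a null double?...
double? fromEmpty = (double?)new int?();
Console.WriteLine(fromEmpty.HasValue); // False

// ...and the lifted conversion of a known-non-null int? is a direct conversion.
double? viaLifted = (double?)new int?(q);
double? direct = new double?((double)q);
Console.WriteLine(viaLifted == direct); // True
```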

So this is all well and good. The Roslyn and “original recipe” C# compilers both perform these optimizations. But now let’s think about a trickier case. Suppose we have expressions `x` and `y`, both of type `int?`, and suppose for the sake of argument that we do not know anything more about the operands:

```csharp
double? z = x + y;
```

Now, reason like the compiler. We do not know whether `x` and `y` have values or not, so we need to use the un-optimized version of addition. So this is the same as:

```csharp
double? z;
int? temp1 = x;
int? temp2 = y;
int? sum = temp1.HasValue & temp2.HasValue
    ? new int?(temp1.GetValueOrDefault() + temp2.GetValueOrDefault())
    : new int?();
z = (double?)sum;
```

We don’t know whether `sum` has a value or not, so we must then generate the full lifted conversion, right? So this is then generated as:

```csharp
double? z;
int? temp1 = x;
int? temp2 = y;
int? sum = temp1.HasValue & temp2.HasValue
    ? new int?(temp1.GetValueOrDefault() + temp2.GetValueOrDefault())
    : new int?();
z = sum.HasValue ? new double?((double)sum.GetValueOrDefault()) : new double?();
```

Is that the best we can do? No! The key insight here is that *the conversion can be distributed into the consequence and alternative of the conditional*, and that doing so enables more optimizations. That is to say that:

```csharp
z = (double?)(temp1.HasValue & temp2.HasValue
    ? new int?(temp1.GetValueOrDefault() + temp2.GetValueOrDefault())
    : new int?());
```

gives the exact same result as:

```csharp
z = temp1.HasValue & temp2.HasValue
    ? (double?)new int?(temp1.GetValueOrDefault() + temp2.GetValueOrDefault())
    : (double?)new int?();
```

But we already know how to optimize those! I said above that only crazy people would convert `new int?()` to `double?`, and of course you would not do that in your user-written code. But when the *compiler itself* generates that code during an optimization, it can optimize it further. The compiler generates a lifted conversion from a lifted arithmetic expression by distributing the conversion into both branches of the conditional, and then optimizes each branch. Therefore, `double? z = x + y;` is actually generated as:

```csharp
double? z;
int? temp1 = x;
int? temp2 = y;
z = temp1.HasValue & temp2.HasValue
    ? new double?((double)(temp1.GetValueOrDefault() + temp2.GetValueOrDefault()))
    : new double?();
```

The compiler does not need to generate the `sum` variable at all, and it certainly does not need to check to see if it has a value. This optimization eliminates one entire temporary and the entire second conditional expression.

**Next time on FAIC:** We’ll digress for some brief news on the publishing front. We’ll then continue this series and ask: are there other “chained” lifted operations that can be optimized?

# Nullable micro-optimization, part three

Happy New Year all; I hope you had as pleasant a New Year’s Eve as I did.

Last time on FAIC I described how the C# compiler first uses overload resolution to find the unique best lifted operator, and then uses a small optimization to safely replace a call to `Value` with a call to `GetValueOrDefault()`. The jitter can then generate code that is both smaller and faster. But that’s not the only optimization the compiler can perform, not by far. To illustrate, let’s take a look at the code you might generate for a binary operator, say, the addition of two expressions of type `int?`, `x` and `y`:

```csharp
int? z = x + y;
```

Last time we only talked about unary operators, but binary operators are a straightforward extension. We have to make two temporaries, so as to ensure that side effects are executed only once: [1. More specifically, the compiler must ensure that side effects are executed *exactly once*.]

```csharp
int? z;
int? temp1 = x;
int? temp2 = y;
z = temp1.HasValue & temp2.HasValue
    ? new int?(temp1.GetValueOrDefault() + temp2.GetValueOrDefault())
    : new int?();
```

A brief aside: shouldn’t that be `temp1.HasValue && temp2.HasValue`? Both versions give the same result; is the short-circuiting one more efficient? Not necessarily! AND-ing together two bools is extremely fast, possibly faster than doing an extra conditional branch to avoid what is going to be an extremely fast property lookup. And the code is certainly smaller. Roslyn uses the non-short-circuiting AND, and I seem to recall that the earlier compilers do as well.

Anyway, when you do a lifted addition of two nullable integers, that’s the code that the compiler generates when it knows nothing about either operand. Suppose however that you added an expression `q` of type `int?` and an expression `r` of type `int`[2. Roslyn will also optimize lifted binary operator expressions where both sides are known to be null, where one side is known to be null, and where both sides are known to be non-null. Since these scenarios are rare in user-written code, I'm not going to discuss them much.]:

```csharp
int? s = q + r;
```

OK, reason like the compiler here. First off, the compiler has to determine what the addition operator means, so it uses overload resolution and discovers that the unique best applicable operator is the lifted integer addition operator. Therefore both operands have to be converted to the operand type expected by the lifted operator, `int?`. So immediately we have determined that this means:

```csharp
int? s = q + (int?)r;
```

which of course is equivalent to

```csharp
int? s = q + new int?(r);
```

And now we have an addition of two nullable integers. We already know how to do that, so the compiler generates:

```csharp
int? s;
int? temp1 = q;
int? temp2 = new int?(r);
s = temp1.HasValue & temp2.HasValue
    ? new int?(temp1.GetValueOrDefault() + temp2.GetValueOrDefault())
    : new int?();
```

And of course you are saying to yourself *well that’s stupid*. You and I both know that `temp2.HasValue` is always going to be true, and that `temp2.GetValueOrDefault()` is always going to be whatever value `r` had when the temporary was built. The compiler can optimize this to:

```csharp
int? s;
int? temp1 = q;
int temp2 = r;
s = temp1.HasValue ? new int?(temp1.GetValueOrDefault() + temp2) : new int?();
```
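A quick sketch verifying that this optimized expansion matches the naive `q + r` (the helper name `AddOptimized` is mine):

```csharp
using System;

int? AddOptimized(int? q, int r)
{
    // The optimized expansion: no nullable temporary for the int operand,
    // and only one HasValue check.
    int? temp1 = q;
    int temp2 = r;
    return temp1.HasValue ? new int?(temp1.GetValueOrDefault() + temp2) : new int?();
}

Console.WriteLine(AddOptimized(2, 3) == 2 + 3);       // True
Console.WriteLine(AddOptimized(null, 3) == null + 3); // True: both sides are null
```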

Just because the conversion from `int` to `int?` is *required* by the language specification does not mean that the compiler actually has to generate code that does it; rather, all the compiler has to do is generate code that produces the correct results! [3. A fun fact is that the Roslyn compiler's nullable arithmetic optimizer actually optimizes it to `temp1.HasValue & true ? ...`, and then Roslyn's regular Boolean arithmetic optimizer gets rid of the unnecessary operator. It was easier to write the code that way than to be super clever in the nullable optimizer.]

**Next time on FAIC:** What happens when we throw some lifted conversions into the mix?