Last time on FAIC I described how the C# compiler elides the conversion from int to int? when you add an int? to an int, and thereby manages to save unnecessary calls to
GetValueOrDefault(). Today I want to talk a bit about another kind of nullable conversion that the compiler can optimize. Consider the following, in which
w is an expression of type int?:
double? z = w;
There is an implicit conversion from int to double, and so there is a “lifted” conversion from int? to double?. As I’m sure you’d expect, given the previous entries in this series, this would be code-generated the same as:
double? z;
int? temp = w;
z = temp.HasValue ? new double?((double)temp.GetValueOrDefault()) : new double?();
If you don’t know anything more about
w then that’s about as good as it gets. But suppose we did know more. For example, suppose we have:
double? z = new int?();
That might seem crazy, but bear with me. In this case, obviously the compiler need not ever call
HasValue in the first place because you and I both know it is going to be false. And we know that there are no side effects of the expression that need to be preserved, so the compiler can simply generate:
double? z = new double?();
Similarly, suppose we have an expression
q of type
int, and the assignment:
double? z = new int?(q);
Again, clearly we do not need to go through the rigamarole of making a temporary and checking to see if its
HasValue property is true. We can skip straight to:
double? z = new double?((double)q);
So this is all well and good. The Roslyn and “original recipe” C# compilers both perform these optimizations. But now let’s think about a trickier case. Suppose we have expressions
x and y, both of type
int?, and suppose for the sake of argument that we do not know anything more about the operands:
double? z = x + y;
Now, reason like the compiler. We do not know whether
x and y have values or not, so we need to use the un-optimized version of the addition. So this is the same as:
double? z;
int? temp1 = x;
int? temp2 = y;
int? sum = temp1.HasValue & temp2.HasValue ? new int?(temp1.GetValueOrDefault() + temp2.GetValueOrDefault()) : new int?();
z = (double?)sum;
We don’t know whether
sum has a value or not, so we must then generate the full lifted conversion, right? So this is then generated as:
double? z;
int? temp1 = x;
int? temp2 = y;
int? sum = temp1.HasValue & temp2.HasValue ? new int?(temp1.GetValueOrDefault() + temp2.GetValueOrDefault()) : new int?();
z = sum.HasValue ? new double?((double)sum.GetValueOrDefault()) : new double?();
Is that the best we can do? No! The key insight here is that the conversion can be distributed into the consequence and alternative of the conditional, and that doing so enables more optimizations. That is to say that:
z = (double?)(temp1.HasValue & temp2.HasValue ? new int?(temp1.GetValueOrDefault() + temp2.GetValueOrDefault()) : new int?());
gives the exact same result as:
z = temp1.HasValue & temp2.HasValue ? (double?)new int?(temp1.GetValueOrDefault() + temp2.GetValueOrDefault()) : (double?)new int?();
But we already know how to optimize those! I said above that only crazy people would convert
new int?() to
double?, and of course you would not do that in your user-written code. But when the compiler itself generates that code during an optimization, it can optimize it further. The compiler generates a lifted conversion from a lifted arithmetic expression by distributing the conversion into both branches of the conditional, and then optimizes each branch. Therefore,
double? z = x + y; is actually generated as:
double? z;
int? temp1 = x;
int? temp2 = y;
z = temp1.HasValue & temp2.HasValue ? new double?((double)(temp1.GetValueOrDefault() + temp2.GetValueOrDefault())) : new double?();
The compiler does not need to generate the
sum variable at all, and it certainly does not need to check to see if it has a value. This optimization eliminates one entire temporary and the entire second conditional expression.
Next time on FAIC: We’ll digress for some brief news on the publishing front. We’ll then continue this series and ask: are there other “chained” lifted operations that can be optimized?
Man this must be fast!
Why do you call GetValueOrDefault()?
If you checked that it has a value (HasValue), why not just use the Value property?
You should read parts one, two, and three of this series of articles (Nullable micro-optimizations).
The gist of it is that Value performs its own redundant call to HasValue to decide if it needs to throw an exception. Since the compiler has already checked it, they just call GetValueOrDefault() which does not have any checks. So GetValueOrDefault ends up being faster, oddly enough. Eric covered this in part 1: https://ericlippert.com/2012/12/20/nullable-micro-optimizations-part-one/
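To make the distinction concrete, here is a rough sketch of the relevant members of Nullable&lt;T&gt; (simplified from the real BCL implementation; the type and field names here are illustrative, not the actual source):

```csharp
using System;

// A simplified stand-in for System.Nullable<T>, to show why
// GetValueOrDefault() is cheaper than Value.
public struct MyNullable<T> where T : struct
{
    private readonly bool hasValue; // false means "no value"
    private readonly T value;       // default(T) when hasValue is false

    public MyNullable(T value)
    {
        this.hasValue = true;
        this.value = value;
    }

    public bool HasValue => hasValue;

    // Value re-checks hasValue so it can throw on a null nullable...
    public T Value
    {
        get
        {
            if (!hasValue)
                throw new InvalidOperationException("Nullable object must have a value.");
            return value;
        }
    }

    // ...whereas GetValueOrDefault just returns the field: no branch at all.
    public T GetValueOrDefault() => value;
}
```

So when the compiler has already emitted a HasValue test, calling Value inside the consequence would test hasValue a redundant second time; GetValueOrDefault() skips straight to the field.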
Thanks. I admit I am a casual reader of FAIC. I had not read those. My bad (I am going back to the first one now…)
Amazing. I’d like to read more about all kinds of optimizations done by the C# compiler.
Then you should read this article!
Speaking of optimizations, it’s known (http://channel9.msdn.com/Forums/Coffeehouse/MS-working-on-a-same-compiler-for-C-AND-C–Not-in-incubation-but-for-production-) that Microsoft is working on using the C++ optimization module for C#. Clearly Roslyn is more (how much more?) than just exposing the compiler’s AST.
Eric, I know you must have constraints about what you can say about Microsoft futures, but since Microsoft is talking about this themselves (via job postings), anything you can say would be interesting.
Pingback: Nullable micro-optimization, part three | Fabulous adventures in coding