Nullable micro-optimization, part two

I hope you’ve all had a pleasant Christmas; I sure did, though once again I was unable to return to Waterloo region to visit my family. Hopefully I’ll make it for Easter this coming year.

Last time on FAIC I described why calling GetValueOrDefault() instead of Value allows the jitter to generate smaller, faster code. Of course that optimization is, first, tiny, and second, valid only when you are certain that the nullable value is not actually null. Over the next few episodes I’ll describe how the C# compiler uses that fact to generate better code for you, but in order to do that, I first need to talk a bit about lifted arithmetic.

Back in 2007 I described what mathematicians mean by “lifted arithmetic”, and how the C# specification uses this term in a subtly wrong way. It’s been a long time, so here’s a quick refresher. Mathematically, by “lifted” we mean that if there is a function f : S → S, and we make a new set S' = S ∪ { null }, then the lifted function f' : S' → S' is defined by f'(null) = null and f'(s) = f(s) for every s ∈ S. Or, in English, the lifted function gives null when given null, and agrees with the unlifted function otherwise.
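For example, if f is the squaring function on the integers, then the lifted function f' gives f'(3) = 9 and f'(null) = null.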

We then extend the definition of “lifted” to functions of the form f : S → T in the obvious manner: the lifted function is f' : S' → T'. Similarly for functions of two, three or more parameters: the lifted function is null if any argument is null, and agrees with the unlifted function otherwise.

Lifted arithmetic operators in C# work similarly. In C#, if there is an operator, let’s say the unary ~ operator that takes an int and produces an int, then there is also a lifted ~ operator that takes an int? and produces an int?. The lifted operator produces null if given null, and otherwise agrees with the unlifted operator.
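For example, this little sketch (assuming the usual using System; console-app boilerplate) shows both halves of that behavior:

int? a = 0x0F;
int? b = null;
Console.WriteLine(~a); // -16, agreeing with the unlifted ~ applied to 0x0F
Console.WriteLine(~b); // prints an empty line, because ~b is null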

Some so-called “lifted” operators do not follow this pattern, but for the purposes of this series we’ll mostly be talking about the ones that do.


I want to make a brief aside here to discuss how the C# compiler knows to use a lifted operator in the first place. The answer is straightforward: it uses overload resolution.

Continuing our example, when you say ~x, the compiler pretends that you did a method call operator~(x) and creates a candidate set that consists of “methods” corresponding to the signatures of the user-defined and built-in ~ operators. If overload resolution produces a unique best applicable operator then it is chosen and the operand is implicitly converted to the “parameter type” of the chosen “operator method”, otherwise the compiler produces an error. That’s an oversimplification; consult the specification for the exact details.
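To illustrate, here’s a sketch with a hypothetical user-defined operator. Overload resolution finds the user-defined ~ on Digit, and because Digit is a non-nullable value type, the lifted form that takes a Digit? is also a candidate:

struct Digit
{
    private readonly int value;
    public Digit(int value) { this.value = value; }
    public static Digit operator ~(Digit d) { return new Digit(9 - d.value); }
}

Then, elsewhere:

Digit? d = null;
Digit? e = ~d; // the lifted form of the user-defined operator is chosen; e is null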

Unfortunately, the specification sections on operator overload resolution are not strictly speaking entirely accurate: there are some known discrepancies between the compiler and the specification. In some of these cases the compiler is wrong and in some the specification is wrong. The areas with small discrepancies include (1) precisely when a user-defined operator is considered to be “liftable” and what the resulting semantics are, (2) how the candidate set for operators on enumerated and delegate types are determined, and (3) how the “betterness” rules treat lifted operators.

Mads and I have attempted a number of times to come up with better spec language, but I don’t think the proposed changes made it into the latest revision. I might choose to do blog articles on these interesting and difficult corner cases in the future.

The important fact that will come into play later in this series is that if overload resolution chooses a lifted operator then the operand is implicitly converted to the nullable type, just as, when ordinary overload resolution chooses a method, the arguments are implicitly converted to the corresponding formal parameter types.
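For example, in the sketch below the lifted addition operator on int? is the unique best applicable operator, so both operands are implicitly converted to int? before the addition happens:

short? a = 1;
int b = 2;
int? sum = a + b; // a is converted from short? to int?, and b from int to int?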


Returning now to the subject at hand: how does the C# compiler generate code for a lifted operator? When you say:

int? y = ~x;

what happens? Let’s suppose that x is a legal expression of type int?, just to keep it easy. Overload resolution determines that the lifted ~ operator that takes an int? and produces an int? is the unique best applicable operator. The expression is already of the correct type. Now, you might naively think that the compiler would pretend that you’d typed:

int? y = x.HasValue ? ~x.Value : null;

but of course that code is wrong in two ways.

First, it doesn’t compile: the consequence ~x.Value is of type int, the alternative null has no type at all, and neither branch is convertible to the type of the other, so the type of the conditional operator expression cannot be determined.

Astonishingly, I’ve never written a blog article about this specific aspect of the conditional operator, though it has certainly come up on StackOverflow a lot. This is probably the blog article that came the closest to describing this common problem.
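To make the problem concrete, here’s a sketch; inserting a cast on either branch gives the conditional expression a type:

// Does not compile: int and null have no common type.
// int? y = x.HasValue ? ~x.Value : null;

// Compiles: the int consequence implicitly converts to int?.
int? y = x.HasValue ? ~x.Value : (int?)null;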

And second, what if the expression x has a side effect? We would not want to generate

int? y = ~M(++i);

as:

int? y = M(++i).HasValue ? ~M(++i).Value : null;

because then the variable gets incremented twice and the method gets called twice if the result of the first call is not null. And of course the value returned the second time might be different! We can fix these two problems easily enough:

int? y;
int? temp = x;
y = temp.HasValue ? new int?(~temp.Value) : new int?();

And now we’re good.

At this point the C# compiler can say “but wait a moment! If we are on the consequence branch of the conditional operator then we know for sure that temp.HasValue is true.” Therefore the compiler can generate the more optimal code:

int? y;
int? temp = x;
y = temp.HasValue ? new int?(~temp.GetValueOrDefault()) : new int?();

That is in fact what both the “original recipe” compiler and the “extra crispy” Roslyn compiler do. The savings are tiny, but they are real, and they add up as the expressions get more and more complicated, as we’ll see.
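To see where the saving comes from, recall from last time that the two members of Nullable<T> behave roughly like this (a simplified sketch, not the actual framework source):

public T Value
{
    get
    {
        if (!this.hasValue)
            throw new InvalidOperationException("Nullable object must have a value.");
        return this.value;
    }
}

public T GetValueOrDefault()
{
    return this.value; // no check, no branch, easily inlined
}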


Next time on FAIC: Is that the only optimization a C# compiler can perform when generating code for lifted arithmetic? Of course not! In the next few episodes we’ll look at some ways the compiler can be more clever, and compare the Roslyn compiler’s heuristics to the previous compiler’s heuristics. Happy New Year all, and we’ll see you in 2013 for more fabulous adventures.

17 thoughts on “Nullable micro-optimization, part two”

  1. Let’s continue optimizing the generated code:
    1) If null, the value does not change, so there is no need to call the constructor:
    int? y;
    int? temp = x;
    y = temp.HasValue ? new int?(~temp.GetValueOrDefault()) : temp;
    2) In fact, there is no need for the temp variable or for both sides of the branch:
    int? y = x;
    if (y.HasValue)
    y = new int?(~y.GetValueOrDefault());
    3) Can the compiler access private fields? If so, it can just change the value of the private field:
    int? y = x;
    if (y.HasValue)
    y.value = ~y.GetValueOrDefault();
    4) If, internally, bool has a single representation each for true and false, and the operator in question is guaranteed to have no side effect, then we can eliminate the branch; for example, with false = 0 and true = -1:
    int? y = x;
    y.value = (int)y.HasValue & ~y.GetValueOrDefault();
    5) A special case for the default “-” (negation) operator:
    int? y = x;
    y.value = -y.GetValueOrDefault();

    • Interesting ideas. However, many of those optimizations would involve generating code that the verifier would reject. Also, don’t forget that the expression x might use local variable y; it would be very strange to have “int? y = ~M(out y);” but it is legal, and the compiler has to generate correct code in all those cases.

  2. There is of course another case where it is valid to use GetValueOrDefault even if you’re not sure whether the value is null, and that’s when you legitimately want to substitute the default value for a null value. For example, we have a requirement to subtract two nullable doubles, returning null if the first is null, but returning the first if the second is null. We can use x - (y ?? 0) or x - y.GetValueOrDefault(), as in the sketch below.
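    A minimal sketch of that requirement (the helper name is just for illustration):

    static double? SubtractTreatingNullAsZero(double? x, double? y)
    {
    // Null if x is null; a null y is treated as zero.
    return x - y.GetValueOrDefault();
    }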

  3. Pingback: The Morning Brew - Chris Alcock » The Morning Brew #1262

  4. Nice article, as always; this is the best (and most times the only) place to get this kind of insight on C#!

    Yesterday (the 28th), I was hoping to read about a new C# keyword that you had baked into Roslyn before leaving, but I guess that you had to skip one year to deceive us a little, haha.

  5. Hi Eric! Great post, as always.
    A quick unrelated question, though: I find the VS2012 Code Analysis a very complete and useful tool. Isn’t that making Microsoft a direct competitor to Coverity? I mean, I’m sure that Coverity’s tool is currently much broader and more complete than the VS2012 built-in Code Analysis, but we know that shipping simpler but free versions of the same tools is how MS has surpassed many competitors before. Did you ever think about this particular possibility (of being an MS competitor)?

    • Good question.

      Developer Division at Microsoft is a large, profitable business, yes, but that just pays the salaries and keeps the lights on. The strategic importance of Developer Division is not that it makes a profit, it’s that it makes the platforms that drive most of the bottom line more attractive to developers, and thereby more attractive to consumers. Therefore the last thing they want to do is pursue a strategy of attempting to defeat third-party products in the marketplace that encourage developers to develop for the platform. Companies like Coverity that develop tools that make the platform more attractive are partners, not competitors.

      • I can say the same thing. Over the last 7 years I’ve seen Microsoft incrementally add ReSharper features to Visual Studio, but never so many or so quickly as to kill it; rather, they let it live.

        The VS team made sure to be at least one step behind ReSharper.

  6. Hi Eric,

    Thank you for the article.

    I’m still not clear on the benefit of GetValueOrDefault() over the .Value property. The IL shows that the first one doesn’t throw an exception while .Value does.

    Is that the reason?

  7. I wonder if the C# compiler attempts to detect whether or not the expression x has any side effect that would cause x.HasValue to change. I.e., given

    int? y1 = ~x;
    int? y2 = ~x;

    would this be converted to something like the following?

    #if evaluating x can change x.HasValue
    int? temp1 = x;
    if (temp1.HasValue)
    {
    y1 = new int?(~temp1.GetValueOrDefault());
    int? temp2 = x;
    y2 = temp2.HasValue ? new int?(~temp2.GetValueOrDefault()) : new int?();
    }
    else
    {
    y1 = null;
    y2 = null;
    }
    #else
    int? temp1 = x;
    y1 = temp1.HasValue ? new int?(~temp1.GetValueOrDefault()) : new int?();
    int? temp2 = x;
    y2 = temp2.HasValue ? new int?(~temp2.GetValueOrDefault()) : new int?();
    #endif

  8. If the structure type is moderately expensive (e.g. `Decimal` is 16 bytes), how many times will it end up getting copied? Conceptually, if one had a struct MaybeValid<T> with public fields `Value` and `IsValid`, with the defined semantics that code shouldn’t rely upon `Value` holding `default(T)` when `IsValid` is false, then `x=y+z;` could become `x.IsValid = y.IsValid && z.IsValid; x.Value = y.Value + z.Value;` without any redundant copying being introduced by the use of the MaybeValid type in the case where both items are valid (see the sketch below). It would seem like it should be possible at the CIL level to decide whether to execute the “+” operator code after the operands have been stacked, even though C# doesn’t offer a good way to express that notion, and I don’t know what the verifier will accept.

    If the compiler can’t access the fields of `Nullable` directly, it would seem one would end up with a few redundant copy operations no matter what optimizations one might try to perform, but I don’t know to what extent a compiler can, e.g., direct a function that is going to return a large struct to store its return value directly where it’s needed. That certainly seems to be the behavior of “structVar = new StructType(params);” in many cases.
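    A minimal sketch of that MaybeValid<T> idea (the type and the Add helper are hypothetical, written for decimal since the C# of the time has no generic arithmetic):

    struct MaybeValid<T>
    {
    public T Value; // contents are unspecified when IsValid is false
    public bool IsValid;
    }

    static MaybeValid<decimal> Add(MaybeValid<decimal> y, MaybeValid<decimal> z)
    {
    MaybeValid<decimal> x;
    x.IsValid = y.IsValid && z.IsValid;
    x.Value = y.Value + z.Value; // runs unconditionally; no branch, no extra copies
    return x;
    }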

  9. Pingback: Nullable micro-optimization, part three | Fabulous Adventures In Coding

  10. Is
    y = temp.HasValue ? new int?(~temp.Value) : new int?();
    better (more optimized) than
    y = temp.HasValue ? (int?)~temp.Value : null;
    and if so why?

  11. Hi, is the VS 2012 C# compiler supposed to employ the optimization? I tried the following stupid code (Release mode, not attached, Any CPU, “Prefer 32-bit” off, .NET 4.5 target):

    public static int Unwrap( int? x ) {
    return x.HasValue ? x.Value : default(int);
    }

    The generated assembly contains the calls as-is. I even tried attaching the NoInlining flag to the method and looked into it using WinDbg. The IL looks the same and the assembly version still contains a call to get_Value().

  12. Pingback: Nullable micro-optimizations, part one | Fabulous adventures in coding
