The no-lock deadlock

People sometimes ask me if there is a cheap-and-easy way to guarantee thread safety. For example, “if my method only reads and writes local variables and parameters, can I guarantee that my method is threadsafe?” Questions like that are dangerous because they are predicated on an incorrect assumption: that if every method of a program is “threadsafe”, whatever that means, then the entire program is “threadsafe”. I might not be entirely clear on what “threadsafe” means, but I do know one thing about it: thread safety is a property of entire programs, not of individual methods.

To illustrate why these sorts of questions are non-starters, today I present to you the world’s simplest deadlocking C# program:[1. Thanks to my erstwhile colleague Neal Gafter for this example.]

class C
{
  static C() 
  {
    // Let's run the initialization on another thread!
    var thread = new System.Threading.Thread(Initialize);
    thread.Start();
    thread.Join();
  }
  static void Initialize() { }
  static void Main() { }
}

At first glance, clearly every method of this incredibly simple program is “threadsafe”. There is only a single variable anywhere in the program; it is local, is written once, is written before it is read, is read from the same thread it was written on, and is guaranteed to be atomic. There are apparently no locks anywhere in the program, and so there are no lock ordering inversions. Two of the three methods are empty. And yet this program deadlocks with 100% certainty; the program “globally” is clearly not threadsafe, despite all those nice “local” properties. You can build a hollow house out of solid bricks; so too you can build a deadlocking program out of threadsafe methods.

The reason why this deadlocks is a consequence of the rules for static constructors in C#; the important rule is that a static constructor runs at most once, and runs before the first static method call or instance creation in its type. Therefore the static constructor of C must run to completion before Main starts. The CLR notes that C’s static constructor is “in flight” on the main thread and calls it. The static constructor then starts up a new thread. When that thread starts, the CLR sees that a static method is about to be called on a type whose static constructor is “in flight” on another thread. It immediately blocks the new thread so that the Initialize method will not start until the main thread finishes running the class constructor. The main thread blocks itself waiting for the new thread to complete, and now we have two threads each waiting for the other to complete.
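
For contrast, here is a minimal sketch (my variation, not from the original example) showing that the deadlock depends on whose initializer is in flight: if the new thread calls a method on a different class, the CLR has no reason to block it, and the program terminates normally.

class Helper
{
  // Helper's static constructor is not "in flight" when the new thread
  // calls Initialize, so the new thread is not blocked.
  public static void Initialize() { }
}

class C
{
  static C()
  {
    var thread = new System.Threading.Thread(Helper.Initialize);
    thread.Start();
    thread.Join(); // completes; no deadlock
  }
  static void Main() { }
}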


Next time on FAIC: We’re opening up the new Coverity office in Seattle! After which, we’ll take a closer look at the uses and abuses of the static constructor.


Defect spotting, part two

I had a great time hanging out with my colleagues Bob and Amie yesterday at the HUB, talking with students, spotting defects and handing out yo-yos. Thanks to all who came out, and to the SWE for putting on a great event.

To follow up on the puzzle I posted yesterday: the terrible flaw, which most people spotted right away, was that the expression geteuid != 0 was of course intended to be geteuid() != 0. The code as written compares the function pointer to null, which it never is, and therefore the right side is always true, and therefore the conditional falls into the “fail with an error” branch more often than it ought to. The program succeeds if the user really is root, or if they are “sudo” root. It is intended to succeed also if the user is “effectively” root, but it does not. Thank goodness in this case the program fails into a secure mode! It is not at all difficult to imagine a situation where such an accidental function pointer usage causes the program to fail into the insecure mode. In any event, Coverity’s checker catches this one. (And of course more modern languages like C# do not allow you to use methods in a context other than a call or delegate conversion.)
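
That last point is easy to see in a sketch (my example; the method is a hypothetical stand-in for geteuid):

class Demo
{
  static int GetEuid() { return 0; } // hypothetical stand-in for geteuid

  static void Main()
  {
    // if (GetEuid != 0) { }  // does not compile: a method group cannot
    //                        // be compared to an int (error CS0019)
    if (GetEuid() != 0) { }   // the call parentheses are required
  }
}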

There are of course any number of other flaws in this fragment. First, it’s now considered bad form to check for root like this; rather, check to see if the user is granted an appropriate permission. Second, the code is hard to read if you do not know the convention that the root user gets magical id zero by default; the code could be much more self-documenting. And so on; several people made good observations in the comments.


Next time on FAIC: You can build a hollow house out of solid bricks, and you can build a deadlocking program out of threadsafe methods too.

Defect spotting at the HUB

The University of Washington chapter of the Society of Women Engineers is putting on their quarterly career fair today at the Husky Union Building, and I’ll be there with a couple of my fabulous Coverity colleagues. If you happen to be a UW student reading this and want to stop on by for a round of my favourite game, “Spot the Defect”, please do. We’ll be presenting a whole pile of buggy code; some of the defects will be quite straightforward and some of them will be subtle, but they all have a lesson.

If you want to play along at home, here’s one of the easy ones; this code was in a real used-by-customers product when Coverity’s static analysis engine discovered it:

{
  ...
  if (!strcmp(argv[i], "-configure"))
  {
    if (getuid() != 0 && geteuid != 0)
    {
      puts("Only root can use -configure.n");
      exit(1);
    }
  }
  xf86DoConfigure = TRUE;
  xf86AllowMouseOpenFail = TRUE;
  return 1;
}

Can you spot the defect and describe its consequence?

Anyways, like I said, if you’re a UW student then stop on by the booth and we’ll chat. The fair is open 12:30 to 5:30; if you’re a SWE member then you get early admission and can come in any time after 12:00. Hope to see you there!


Next time on FAIC: The solution to today’s puzzle.

Integer division that rounds up

Integer arithmetic is surprisingly difficult to get right. There are usually lots of weird corner cases,[1. Pun intended.] and the temptation to use too-clever bit-twiddling tricks is high. Today I thought I’d give an example of how I approach solving integer arithmetic problems on those rare occasions when I have to. So let’s consider the problem of writing a C# method that divides two 32 bit integers such that the division rounds up if it is inexact.

Before creating a new kind of division, it is instructive to consider what the existing integer division operator does. Obviously it divides a dividend by a divisor and produces a quotient, but there are some corner cases. The most obvious corner case is when the divisor is zero. Less obvious is the case where the dividend is Int32.MinValue and the divisor is -1. The correct answer mathematically is one greater than Int32.MaxValue. And as I discussed in 2011, it’s not immediately clear whether -5 divided by 2 should have a quotient of -3 and a remainder of 1 or a quotient of -2 and a remainder of -1. Opinions vary.

The C# specification is mostly clear on all of these points. Summing up:

  • Division by zero results in an exception.
  • Division of Int32.MinValue by -1 either results in Int32.MinValue or an exception, at the discretion of the implementor.
  • If the divisor and dividend have the same sign then the result is zero or positive.
  • If the divisor and dividend have opposite signs then the result is zero or negative.
  • If the division is inexact then the quotient is rounded towards zero. That is, up if it is negative, and down if it is positive.
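
A couple of spot checks make the rounding rule concrete (my examples):

System.Console.WriteLine(7 / 2);   // 3: positive and inexact, rounded down towards zero
System.Console.WriteLine(-7 / 2);  // -3: negative and inexact, rounded up towards zero
System.Console.WriteLine(-7 % 2);  // -1: the remainder takes the sign of the dividend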

Now let’s use this as a model for our new method. That “implementation defined” point in there is a bit vexing, so let’s eliminate it. Our proposed method should have the following behaviour:

  • Division by zero results in an exception.
  • Division of Int32.MinValue by -1 results in an exception.
  • If the divisor and dividend have the same sign then the result is zero or positive.
  • If the divisor and dividend have opposite signs then the result is zero or negative.
  • If the division is inexact then the quotient is rounded up.

And now we have a specification from which we can write completely straightforward, boring code. Our implementation strategy will take advantage of the fact that the built-in operator does the heavy lifting; it already gives the desired result in the majority of cases, and only needs a small fixup in the few cases where it does not round up.

So when does regular division need to be fixed up? When the division is both inexact and rounded down. We can tell if the division was inexact if the remainder operator produces a non-zero result. But how can we tell if the division was rounded down?

We round towards zero, so the division was rounded down if the exact mathematical quotient was a positive fraction, and it was a positive fraction if the divisor and dividend were of the same sign. This is the point where people mess up. There is an easy way to determine if two numbers are of the same sign, and that’s to compare their signs. The moment you depart from that, you get into trouble. In particular, multiplying two 32 bit integers together and checking to see if the result is positive does not tell you that the two integers were of the same sign! Integer arithmetic overflows; if it throws an exception in a checked context then obviously you do not know whether the two numbers were of the same sign. If it overflows in an unchecked context then two positive multiplicands can produce a negative product. Don’t get cute; if you want to check if two things are the same then compare them for equality.
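
Here is the pitfall in miniature (my example):

int a = int.MaxValue;
int b = 2;                                    // a and b are both positive
int product = unchecked(a * b);               // overflows and wraps around to -2
System.Console.WriteLine(product >= 0);       // False: the sign-of-product test lies
System.Console.WriteLine((a > 0) == (b > 0)); // True: comparing the signs is reliable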

Enough chit-chat; let’s write some code.

public static int DivRoundUp(int dividend, int divisor)
{
  if (divisor == 0) 
    throw new DivideByZeroException();
  if (divisor == -1 && dividend == Int32.MinValue) 
    throw new OverflowException();
  int roundedTowardsZeroQuotient = dividend / divisor;
  bool dividedEvenly = (dividend % divisor) == 0;
  if (dividedEvenly) 
    return roundedTowardsZeroQuotient;

At this point we know that the divisor was not zero (because we would have thrown) and we know that the dividend was not zero (because there would have been no remainder). Therefore both are non-zero. Either they are of the same sign, or opposite signs. If they’re of opposite sign then rounding towards zero rounded up, so we’re done. If they’re of the same sign then rounding towards zero rounded down, so we must add one to the quotient.

We compare signs by comparing signs:

  bool wasRoundedDown = ((divisor > 0) == (dividend > 0));
  if (wasRoundedDown) 
    return roundedTowardsZeroQuotient + 1;
  else
    return roundedTowardsZeroQuotient;
}
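
A few spot checks of the finished method (my examples, assuming the method above is in scope):

System.Console.WriteLine(DivRoundUp(7, 2));   // 4: 3.5 is rounded up
System.Console.WriteLine(DivRoundUp(-7, 2));  // -3: -3.5 is rounded up to -3
System.Console.WriteLine(DivRoundUp(6, 3));   // 2: exact division is unchanged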

Now, is this the shortest way to write this code? No. Is it the fastest? No. Is it the cleverest? Certainly not. But it is clear and logical, follows the structure of its specification closely, and is therefore more likely to be correct than shorter, faster, cleverer code. This variation has fewer tokens, and is considerably more clever:

  bool wasRoundedDown = (divisor ^ dividend) >= 0; 

It also trades two comparisons for one xor and therefore might be a few nanoseconds faster; I don’t know. If this method turned out to be a bottleneck then you could experiment and find out. But I do know that the less clever version is a lot easier to read.


Next time on FAIC: I play Spot the Defect at the HUB.

Five-dollar words for programmers: elision

Good writers borrow from other writers. Great writers steal from them outright. — Aaron Sorkin

Sorkin was of course stealing outright from T.S. Eliot, who in 1920 wrote:

Immature poets imitate; mature poets steal; bad poets deface what they take, and good poets make it into something better, or at least something different. The good poet welds his theft into a whole of feeling which is unique, utterly different from that from which it was torn; the bad poet throws it into something which has no cohesion. A good poet will usually borrow from authors remote in time, or alien in language, or diverse in interest.

Sorkin’s pithy summary, or a slight modification of it, is often attributed to Pablo Picasso and Oscar Wilde, but I’ve found no reliable evidence that either man ever said anything like it. See this blog post for details.

Computer programmers have a habit of taking jargon from one area of study and borrowing it for use in their own. Today’s five dollar word for programmers is elision; the verb form is to elide. This term from linguistics is used in two non-standard ways by computer programmers.

In natural language linguistics, elision is the removal of sounds from spoken words in a way that does not change their meaning. Saying

I’m gonna eat my vegtables.

instead of

I am going to eat my vegetables.

elides a great many sounds but does not change the meaning.

A less common usage of the term “elision” in natural linguistics is the omission of entire words. Many of my friends from Pennsylvania, for example, tend to elide the words “to be” from sentences that would require them in the Queen’s English, like “do you have any towels that need washed?”

It is in this sense that computer language designers have borrowed the term: an elision in a programming language is when the language makes optional what might be thought of as a necessary portion of the grammar.

There are a number of legal elisions in C#. For example, the accessibility modifier private may be elided almost everywhere; you can usually remove private from a declaration without changing the meaning of the program. All versions of C# allow you to elide the array bounds and array type in a local variable initializer. These are all the same:

int[] x = { 10, 20, 30 };
int[] x = new int[] { 10, 20, 30 };
int[] x = new int[3] { 10, 20, 30 };

And C# 3.0 and higher also allow

int[] x = new[] { 10, 20, 30 };
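
And to illustrate the private elision mentioned above, these two fields mean exactly the same thing (a trivial sketch):

class Example
{
  int count;          // accessibility elided; private by default
  private int total;  // accessibility stated explicitly
}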

I said there were two ways in which computer programmers co-opt the word; the second is in optimization theory. As we just saw in my long series on optimizing lifted arithmetic, the optimizer is allowed to elide conversions if it knows that they are unnecessary: in an expression where an int? is added to an int, by rights we ought to be implicitly converting the non-nullable integer to a nullable integer. But the compiler knows that doing so is a pointless waste of time because the conversion can be removed without changing the meaning of the program. Or, as I described back in 2010, the compiler will sometimes elide the creation of a temporary when initializing a local variable of value type. This form of elision optimization is called copy elision.
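
As a rough sketch of what that conversion elision buys (my illustration, echoing the lifted-arithmetic series):

int? n = 5;
int? sum = n + 3;
// By rights the 3 is implicitly converted to int? before the lifted
// addition; since 3 is known to be non-null, the compiler can instead
// generate the equivalent of:
int? sum2 = n.HasValue ? new int?(n.GetValueOrDefault() + 3) : new int?();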


Next time on FAIC: Integer arithmetic can be tricky. We’ll try dividing two numbers and see what happens.

Nullable micro-optimizations, part eight

Today, the answer to the puzzle I posed last week. The answer is no, that optimization is not legal in general, though it is legal in a few specific cases.

As I have discussed several times before, C# has very strict rules about what order operations happen in. A quick rule of thumb is: operands run left-to-right. In our example we had X() * Y() + Z(), so let’s see what conclusions we reach by applying this rule:

  • X() is an operand of the * to the left of Y(), so the call to X() must happen before the call to Y().
  • X()*Y() is an operand of the + to the left of Z(), so the multiplication must happen before the call to Z().

The compiler must generate code that computes X(), then Y(), then the multiplication, then Z(), and then the addition. But our proposed optimization was

int? r;
int? tempX = X();
int? tempY = Y();
int tempZ = Z();
r = tempX.HasValue & tempY.HasValue ?
  new int?(tempX.GetValueOrDefault() * tempY.GetValueOrDefault() + tempZ) :
  new int?();

which computes Z() before the multiplication.

So what’s the big deal? Multiplication of integers can throw an exception in a checked context, and if the multiplication in X() * Y() throws, then Z() must never be called at all. This optimization is only valid in an unchecked context.
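
To see the ordering problem concretely, here is a minimal sketch (my example; the method bodies are invented for illustration):

class Program
{
  static int? X() { System.Console.Write("X "); return int.MaxValue; }
  static int? Y() { System.Console.Write("Y "); return 2; }
  static int Z()  { System.Console.Write("Z "); return 1; }

  static void Main()
  {
    checked
    {
      // The multiplication of int.MaxValue by 2 overflows, so the
      // OverflowException must be thrown after "X Y " is printed and
      // before Z is called; a correct compiler never prints "Z ".
      int? r = X() * Y() + Z();
      System.Console.WriteLine(r);
    }
  }
}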

And of course it just gets worse if we start to consider lifted arithmetic on types other than int?. It’s a bad practice to write a user-defined operator with a side effect, but it is legal and the C# compiler must ensure that side effects of user-defined operators are observed to happen in the prescribed order.

Rather than try to figure out all the types for which this optimization is legal in various contexts, Roslyn makes no attempt to optimize this kind of binary operator. It generates a temporary int? for the multiplication, and then does the regular lifted arithmetic for the addition. Another lovely optimization spoiled by conflict with a lovely invariant.

But wait!

The problem is that the side effects of the multiplication must be observed to come before the side effects of the right addend. What if the right addend has no side effects? Then we can do this optimization!

The most common situation in which the right addend has no side effects is when it is a constant:

int? r = X() * Y() + 1;

This can legally be optimized to

int? r;
int? tempX = X();
int? tempY = Y();
r = tempX.HasValue & tempY.HasValue ?
  new int?(tempX.GetValueOrDefault() * tempY.GetValueOrDefault() + 1) :
  new int?();

And in fact I did add this optimization to Roslyn; just as unary operations and conversions can be “distributed”, so can binary operations where the right operand is a constant. Moreover, as a pleasant side effect, doing so allowed for an easy implementation of various optimizations that improve the quality of lifted x += 1 and x++ expressions.

Well, that took rather more episodes than I thought it would when I started! I could talk a bit more about how Roslyn optimizes other more exotic nullable arithmetic, like equality, inequality, and logical operations. I could also talk a bit about the stuff I didn’t get to implement before I left; Roslyn does a slightly worse job than the original recipe compiler when optimizing expressions where we know that the result is going to be null, but also must compute a side effect. But rather than go down those lengthy rabbit holes, I think I’ll call this series done at eight episodes.


Next time on FAIC: Some of these optimizations we’ve been talking about are elisions; I’ll talk a bit about how computer programmers have borrowed this term from linguistics, and how we use it in two different senses.

Hummingbird PSA

We interrupt our adventures in lifted arithmetic optimization for a brief public service announcement on behalf of my little friend to the left there. (Click on the image for a larger version.) I took this photo at my home in Seattle this past Sunday, before I flew down to San Francisco for training at Coverity’s head office. (And it took a number of attempts to freeze the wings, even at 1/640th of a second shutter speed with my Canon DSLR.) Amazingly enough, male Anna’s Hummingbirds like my friend there will spend winters in Seattle. At the time this photo was taken it was well below freezing. Apparently more and more male hummingbirds are staying in Seattle over the winter, as climate change makes the winters here slightly more survivable; even Rufous and Allen’s hummingbirds are being observed. They migrate out of the hills to the warmer lowlands, but don’t go farther.

Hummingbirds are always within mere moments of starving to death and in the Seattle winter there are far fewer sources of food than other times of the year. (Anna’s Hummingbirds are unusual in that they can eat bugs while in flight, which does help.) If you’ve got hummingbirds coming to a feeder, keep it stocked. Take it in at night; I found my feeder frozen solid and a very vexed hummingbird trying in vain to feed one morning.

A number of people have asked me where they can get my feeder; you can find it here.

You’ll notice that in my feeder the liquid is clear and colourless. It is one part sugar to three or four parts water, and that’s it. Stores will try to sell you hummingbird food at enormously inflated prices, but what they’re selling is sugar mixed with red dye. The red dye does nothing to attract the birds, and it is bad for them. Their little kidneys have to remove the dye from their bloodstream.

I’m in California again next week, but I’ve queued up a couple of articles to go out while I’m away. Have a great weekend, and tune in Monday for the final episode in my excessively long series on lifted arithmetic optimization.