Monitor madness, part two

In the previous exciting episode I ended on a cliffhanger: why did I put a loop around each wait? In the consumer, for example, I said:

    while (myQueue.IsEmpty)
      Monitor.Wait(myLock); 

It seems like I could replace that “while” with an “if”. Let’s consider some scenarios. I’ll consider just the scenario for the loop in the consumer, but of course similar scenarios apply mutatis mutandis for the producer.
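
To recap, here is the consumer in context. This is a simplified sketch of my own; Frob, myLock, myQueue and the Process helper are stand-ins for the details of the previous episode:

void Consume()
{
  Frob item;
  lock (myLock)
  {
    while (myQueue.IsEmpty)
      System.Threading.Monitor.Wait(myLock); // releases myLock; reacquires it on wakeup
    item = myQueue.Dequeue();
  }
  Process(item); // do the actual work outside the lock
}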

Real world async/await defects

Today I’m once again asking for your help.

The headliner feature for C# 5 was of course the await operator for asynchronous methods. The intention of the feature was to make it easier to write asynchronous programs where the program text is not a mass of inside-out code that emphasizes the mechanisms of asynchrony, but rather straightforward, easy-to-read code; the compiler can deal with the mess of turning the code into a form of continuation passing style.

However, asynchrony is still hard to understand, and making methods that allow other code to run before their postconditions are met makes for all manner of interesting possible bugs. There are a lot of great articles out there giving good advice on how to avoid the pitfalls.

All that advice is great, but it is not always clear which of these potential defects are merely theoretical and which you have seen and fixed in actual production code. That’s what I am very interested to learn from you all: what mistakes were made by real people, how were they discovered, and what was the fix?

Here’s an example of what I mean. This defect was found in real-world code; obviously the extraneous details have been removed:

Frob GetFrob()
{
  Frob result = null;
  var networkDevice = new NetworkDevice();
  networkDevice.OnDownload += 
    async (state) => { result = await Frob.DeserializeStateAsync(state); };
  networkDevice.GetSerializedState(); // Synchronous
  return result; 
}

The network device synchronously downloads the serialized state of a Frob. When it is done, the delegate stored in OnDownload runs synchronously, and is passed the state that was just downloaded. But since the handler is itself asynchronous, it starts deserializing the state asynchronously and returns immediately. We now have a race between GetFrob returning null and the mutation of the closed-over local result, a race almost certainly won by returning null.
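
One possible repair, sketched under the assumption that we may change the method signature (NetworkDevice and Frob are the hypothetical types from the example, and State is a placeholder name for whatever the event passes): make the method itself asynchronous and complete a task from the event handler instead of mutating a local.

async System.Threading.Tasks.Task<Frob> GetFrobAsync()
{
  var tcs = new System.Threading.Tasks.TaskCompletionSource<State>();
  var networkDevice = new NetworkDevice();
  networkDevice.OnDownload += state => tcs.SetResult(state);
  networkDevice.GetSerializedState(); // Still synchronous
  // The awaits ensure the caller gets the deserialized Frob, never null.
  return await Frob.DeserializeStateAsync(await tcs.Task);
}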

If you’d rather not leave comments here — and frankly, the comment system isn’t much good for code snippets anyways — feel free to email me at eric@lippert.com. If I get some good examples I’ll follow up with a post describing the defects.

Reordering optimizations

In a previous episode I discussed whether it is a good idea to remove a lock which protects an integer field. I said that without a lock, the read of a field can be moved arbitrarily far backwards in time on a thread, and that this can cause unexpected behaviour in a program. It is perhaps not clear why this can be a problem; it seems like a read shouldn’t have any side effects that can be noticed if it happens earlier. A number of readers asked whether making the read into a volatile read prevented the reordering problem.

The answer to the general question of whether a volatile read can be re-ordered is yes, yes it can. Though a volatile read in a loop cannot be cached once and elided, a volatile read can be moved backwards in time with respect to a volatile write.

The answer to the more specific question of whether making the field volatile makes the program I was given correct cannot really be answered because that was an oversimplified toy program. Deducing the correctness of a real program by analyzing a toy program is perhaps a bad idea.

Instead, I’ll give you an example that (1) can actually happen even on the more restrictive memory model imposed by x86 hardware, and (2) heck let’s make everything volatile while we’re at it; it won’t help! We’ll elide a bunch of locks and see what goes wrong. Check out this program fragment:

static volatile bool q = false;
static volatile bool r = false;
static volatile bool s = false;
static volatile bool t = false;
static object locker = new object();

static bool GetR() { return r; }  // No lock!
static void SetR() { lock(locker) { r = true; } }

static void MethodOne()
{
  q = true;
  if (!GetR())
    s = true;
}

static void MethodTwo()
{
  SetR();
  if (!q)
    t = true;
}

The rest of the program, which I have not shown, behaves as follows. First, the static initializers run normally, so the four Booleans and the monitor are all created and assigned their given values. Then the program creates two threads. Thread one runs MethodOne and thread two runs MethodTwo. Both threads run to completion and these are the only invocations of the two methods. The question is: can the original thread now observe s and t to both be true?
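
For concreteness, here is a sketch of that unshown driver; this is my reconstruction, not code from the original:

static void Main()
{
  var one = new System.Threading.Thread(MethodOne);
  var two = new System.Threading.Thread(MethodTwo);
  one.Start();
  two.Start();
  one.Join();
  two.Join();
  // Can this ever print "True True"?
  System.Console.WriteLine("{0} {1}", s, t);
}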

Give that some thought before you read on.

.
.
.
.
.
.

It would appear that the answer is no, by the following logic.

  • If one thread runs q=true; before the other runs if(!q) then t remains false.
  • Otherwise, one thread runs q=true; after the other runs if(!q). Clearly that implies that if(!GetR()) must have run after SetR();. That implies that !GetR() is false. Therefore s remains false.
  • There are only these two possible cases and in each at least one of s and t remains false, therefore it is impossible that both become true.

This logic is wrong; specifically the second statement is completely bogus and therefore the conclusion does not follow. As is often the case with my puzzles, the portion of the analysis marked “clearly” is the flag that indicates the error.

The lack of a lock on the read of r means that the CPU may re-order the read by moving it arbitrarily far backwards in time even with respect to volatile writes, and an x86 will occasionally do so if the stars align correctly! An x86 will not reorder two reads with respect to each other and will not reorder two writes with respect to each other, but it has no problem reordering a read of one variable with respect to a write of another. The CPU is permitted to move the read backwards in time by pretending that you actually wrote:

static void MethodOne()
{
  bool temp = r;
  q = true;
  if (!temp)
    s = true;
}

Since this optimization is invisible to the code on the current thread, it is legal. But now there is an obvious interleaving in which both s and t become true. First we assign false to temp on one thread, then all of MethodTwo runs on the other thread, so t is true, and then the remainder of MethodOne sets s to true.

Now imagine that your program depends on s and t never being both true, and this situation happens, let’s say, one time out of every billion executions of this code. If it runs a thousand times a day on a thousand machines, that’s one failure every three years. How on earth would you debug it? You’d be likely to suspect hardware failure rather than a bug in the program, but there assuredly is a bug in the program. These are the nightmare scenarios you run into when you think you’re clever enough to elide locks. The moment you depart even slightly from one of the “blessed” low-lock patterns you are off in the weeds.

The right thing to do here is first, if you can avoid the shared memory situation in the first place, do so. If you cannot, lock everything all the time. In this particular case, locking the read of r or the write of q would likely be sufficient to ensure that s and t are never both true, but I discourage the attitude that walking as close as possible to the edge of a cliff is a great way to be safe. Walk as far away from that cliff as you possibly can! If you’ve got shared memory then put a lock around all usages of it. Simply making a shared variable volatile is insufficient to ensure thread safety in all cases.


Thanks to threading expert Joe Duffy for bringing this scenario to my attention many years ago.

Can I skip the lock when reading an integer?

Today, a question from a Coverity customer:

Here is a greatly simplified version of our code:

public class TestLock
{
  private object threadLock = new object();
  private int value = 0;
  public void Start()
  {
    lock (threadLock) { value = 100; }
  }
  public void Finish()
  {
    lock (threadLock)
    {
      if (value != 0 )
        value = 0;
    }
  }
  public void Increment()
  {
    lock (threadLock)
    {
      if (value != 0) 
        value += 10;
    }
  }
  public int GetValue()
  {
    return value; // No lock!
  }
}

The start, finish and increment operations are all either mutating or non-atomic operations, or both, so they are locked. But the get-value operation is guaranteed by the C# specification to be atomic, so we figure that we can safely remove the lock. Is this correct?

The short answer is: Please do not elide the lock regardless of its correctness. It’s dangerous and the benefit is absurdly tiny compared to the risk.

The slightly longer answer is: The fact that you’re asking the question in the first place means that you don’t have a strong enough understanding of the memory model of C# in order to correctly write low-lock code, so it should be avoided. In fact, I’m not a big fan of multithreaded code in the first place. If you can avoid writing multithreaded code, do so. If you cannot, then use the highest level of abstraction available to you: use tools like the Task Parallel Library to efficiently manage worker threads on your behalf. If you cannot do that, then at least avoid sharing memory across threads. If you cannot do that, then at least lock all the memory that you do share across threads. Writing low-lock code should be the last thing you consider doing, not the first.
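
To illustrate the “highest level of abstraction” point with a sketch (items and Process are placeholder names of my own invention):

    // Let the TPL partition the work and manage the worker threads;
    // no explicit threads and no explicit locks in user code.
    System.Threading.Tasks.Parallel.For(0, items.Length, i => Process(items[i]));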

The attentive reader will note that neither of those two “answers” actually answered the question. It’s time for the verbose answer!

First off, let’s talk about atomicity.

You are correct that the C# specification promises that reads and writes of 32 bit integer fields are atomic. That is, they cannot be “torn”. A write to a 64 bit integer variable, by contrast, can be treated as two independent writes to two adjacent 32 bit locations, and therefore two threads that both try to assign values to the same 64 bit variable at the same time can race. It is possible to end up with a value whose top half came from one thread and whose bottom half came from the other, which is very confusing. The 32, 16, and 8 bit integer types and the bool and char types are all guaranteed to be atomic for individual reads and writes, provided that the developer has not done something silly like force the integer to be misaligned.
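
Here is a sketch of my own illustrating the tear. On a 32 bit runtime the reader below can observe a value that mixes the halves of the two written values; since neither 0 nor -1 has mixed halves, any other observed value is a torn read:

static long shared; // 64 bit writes are not guaranteed atomic
static void Writer()
{
  while (true) { shared = 0; shared = -1; }
}
static void Reader()
{
  while (true)
  {
    long observed = shared; // may interleave with Writer's two 32 bit writes
    if (observed != 0 && observed != -1)
      throw new System.Exception("Torn read: " + observed);
  }
}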

You are also correct that more complex operations, such as test-and-set or increment are not atomic, and therefore need to be protected against races.
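
To see why, note that value += 10 is really a read, an add and a write; two unsynchronized threads can each read the same old value, and one update is then lost. A lock fixes that, and for this specific pattern so does an interlocked operation (a sketch):

    // Atomic read-add-write; an alternative to the lock for this one pattern.
    System.Threading.Interlocked.Add(ref value, 10);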

So the unlocked read will at least be atomic in the sense that we never get a “torn” value out. Now, if we had something like:

    lock (threadLock)
    {
        value += 5;
        DoSomething();
        value += 5;
    }

then the unlocked access is immediately suspicious because it can read a value after the first increment but before the second increment, and that might be unexpected. Let’s assume that you are not in that situation.

Now let’s talk about locks. Locks provide two guarantees in C#.

First, there is the obvious purpose of a lock. The locked object is a monitor; only one thread may have a given monitor locked at a time. A thread which attempts to take a monitor that has been acquired by another thread will block until it can obtain access.
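
For reference, a lock statement is roughly equivalent to the following expansion (a sketch; the compiler-generated temporaries are elided):

bool lockTaken = false;
try
{
  System.Threading.Monitor.Enter(threadLock, ref lockTaken);
  // ... body of the lock statement ...
}
finally
{
  if (lockTaken)
    System.Threading.Monitor.Exit(threadLock);
}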

There is a second guarantee of a lock that is more subtle. The C# compiler, JIT compiler and CPU are all permitted to make optimizations provided that the optimization is undetectable by the current thread. In particular, reads and writes of variables may be re-ordered or eliminated entirely provided that doing so is not observable to the current thread. For example, consider the program fragment:

someInteger = someOtherInteger;
someIntegerJustChanged = true;

Given that there is no code between the two statements there is no way for the thread that runs this code to detect what order the mutations happen in. It would be perfectly legal for these two mutations to be re-ordered if the CPU decided that it was more efficient to do so. The current thread cannot tell the difference, but another thread could observe the mutations to happen in the “wrong” order. (I note that on x86-based hardware this particular reordering of writes is actually never observed; those CPUs do not perform this optimization. There is no requirement that all C# programs be run only on x86-based hardware!)

A lock introduces a restriction on these optimizations. In particular: no read may be moved before the beginning of a lock statement, and no write may be moved after the end of a lock statement. Let’s compare these two pieces of code:

  public int GetValue()
  {
    return value; // No lock!
  }

vs

  public int GetValue()
  {
    lock(threadLock) { return value; }
  }

What’s the difference?

In the latter version the CPU may not move the read backwards past the beginning of the lock, and may not move any write that happened before the method was entered to a point after the end of the lock.

In the former, the CPU may as an optimization choose to move the read arbitrarily far “backwards in time” with respect to other reads and writes on this thread provided that doing so is not detectable by the current thread. But that fact could be observed by another thread, and since we know that this program definitely reads memory locations on multiple threads, I am worried. Perhaps some other thread is expecting that the read of this value will be observed to occur in a particular order with respect to the write of some other value. You said that this is a greatly simplified version of the real program, and therefore we cannot reason from the correctness of this program to the correctness of the original program! There are a number of scenarios:

Scenario one: The authors of the code do not know that their program is correct without the lock, and do not have a solid customer-impacting performance reason to elide the lock, but are doing so anyways just for the heck of it. Fortunately for them, by sheer luck their code does not actually have a customer-impacting bug, and equally fortunately, by sheer luck no future edit to the code will introduce such a bug. Also, no future update to the C# compiler, JIT compiler or CPU optimization will ever introduce a bug either.

Scenario two: The original program has a subtle bug, or a bug will be introduced in the future via a code change, compiler update, and so on. Some other thread assumes that the unlocked read will not be moved backwards in time with respect to some other read or write, but in fact this optimization is not only permitted, but will be performed. This fact becomes observable on a thread that then behaves incorrectly as a result. The code therefore has a bug that is difficult to detect and practically impossible to reproduce.

Scenario three: The authors of the original code, who we recall are asking the question about thread safety to begin with, are actually asking a rhetorical question. They fully understand all the ramifications of every possible CPU reordering and have analyzed all those possible reorderings to determine that there is never a bug in their program under any of those conditions. Moreover, they wish to avoid the roughly 10 or 20 nanosecond penalty of the uncontested lock for performance reasons, because that one-hundred-millionth of a second penalty is both observable by and unacceptable to their users. And finally, the authors will carefully review every single change ever made to this program in the future with an eye to the possible bugs introduced by reordering optimizations. Avoiding that 10 ns penalty is worth the enormous cost in expert code reviews and the high risk of getting it wrong.

My guess is that the developers are 99% likely to be in scenario one, 0.99% likely to be in scenario two, and 0.01% likely to be in scenario three. I also hazard a guess that the developers have no solid customer-impacting performance reason to elide that lock. Is the expected benefit of a 10 ns savings that no customer will notice really worth the risk that some impossible-to-detect-and-reproduce bug lurks in this program? I doubt it. Therefore my recommendation would be to always lock every access to a field that can be shared across threads, even if you think that eliding the lock is 99.99% likely to be harmless. The nano-scale benefit is simply not worth the macro-scale costs of verifying that the elision is correct, and certainly not worth the risk that the code is incorrect now or might become incorrect in the future. Code which avoids locks is extremely difficult to reason about, so make your life easier: if you can, don’t write shared-memory code to begin with. If you have to, use locks consistently.


UPDATE: This article was originally posted on the Coverity Dev Testing Blog and has since been deleted; a number of commenters on the original article asked if marking the field volatile magically prevents reordering bugs.

The specific problem with that proposal is that volatile reads and locks do not have the same semantics. A volatile read can be moved backwards in time with respect to a volatile write, and the x86 processor will actually do so, but a read, volatile or otherwise, cannot be moved backwards in time past the beginning of a lock.

The more general problem is: we have a toy example that is greatly simplified from the real code, and therefore we don’t know what invariants the real code relies upon. Trying to deduce whether a real gun is safe by examining a toy gun is a dangerous proposition.

In an upcoming article I will give an example where every variable is volatile, but eliding locks on some of them can produce an unexpected violation of what appears to be a program invariant. 


ATBG: How do exceptions interact with the “using” statement?

Today on the Coverity Development Testing Blog‘s continuing series Ask The Bug Guys, I answer a question I get quite frequently: what guarantees do we have about object disposal when the body of a using block is interrupted by an exception? The situation is rather complicated, it turns out.

As always, if you have questions about a bug you’ve found in a C, C++, C# or Java program that you think would make a good episode of ATBG, please send your question along with a small reproducer of the problem to TheBugGuys@Coverity.com. We cannot promise to answer every question or solve every problem, but we’ll take a selection of the best questions that we can answer and address them on the dev testing blog every couple of weeks.

Construction destruction

Take a look at this little program skeleton:

class Foo 
{ 
    private int x;
    private int y;
    public Foo(int x, int y) 
    {
        this.x = x;
        this.y = y;
        SideEffects.Alpha(); // Notice: does not use "this"
    }
    ~Foo() 
    { 
        SideEffects.Charlie(); 
    }
}
static class SideEffects
{
    public static void Alpha() { ... }
    public static void Bravo() { ... }
    public static void Charlie() { ... }
    public static void M()
    {
        Foo foo = new Foo(1, 2); 
        Bravo();
    }
}

Let’s suppose we have three side effects: Alpha, Bravo and Charlie. What precisely they are is not important. The question is: what do we know about the order in which Alpha, Bravo and Charlie execute when a call to M() occurs?

The no-lock deadlock

People sometimes ask me if there is a cheap-and-easy way to guarantee thread safety. For example, “if my method only reads and writes local variables and parameters, can I guarantee that my method is threadsafe?” Questions like that are dangerous because they are predicated on an incorrect assumption: that if every method of a program is “threadsafe”, whatever that means, then the entire program is “threadsafe”. I might not be entirely clear on what “threadsafe” means, but I do know one thing about it: thread safety is a property of entire programs, not of individual methods.

To illustrate why these sorts of questions are non-starters, today I present to you the world’s simplest deadlocking C# program:

class C
{
  static C() 
  {
    // Let's run the initialization on another thread!
    var thread = new System.Threading.Thread(Initialize);
    thread.Start();
    thread.Join();
  }
  static void Initialize() { }
  static void Main() { }
}

(Thanks to my Roslyn compiler team colleague Neal Gafter for this example, which was adapted from his book Java Puzzlers.)

At first glance, clearly every method of this incredibly simple program is “threadsafe”. There is only a single variable anywhere in the program; it is local, is written once, is written before it is read, is read from the same thread it was written on, and is guaranteed to be atomic. There are apparently no locks anywhere in the program, and so there are no lock ordering inversions. Two of the three methods are empty. And yet this program deadlocks with 100% certainty; the program “globally” is clearly not threadsafe, despite all those nice “local” properties. You can build a hollow house out of solid bricks; so too you can build a deadlocking program out of threadsafe methods.

The reason why this deadlocks is a consequence of the rules for static constructors in C#; the important rule is that a static constructor runs at most once, and it runs before the first static method call on or instance creation of its type. Therefore the static constructor of C must run to completion before Main starts. The CLR notes that C‘s static constructor has not yet run, marks it as “in flight” on the main thread, and calls it. The static constructor then starts up a new thread. When that thread starts, the CLR sees that a static method is about to be called on a type whose static constructor is “in flight” on another thread. It immediately blocks the new thread so that the Initialize method will not start until the main thread finishes running the class constructor. The main thread blocks itself waiting for the new thread to complete, and now we have two threads each waiting for the other to complete.
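
The repair is straightforward; here is a sketch of my own: do the start-and-join from Main, or any other place where no type initializer is in flight, rather than from inside the static constructor.

class C
{
  static void Initialize() { }
  static void Main()
  {
    var thread = new System.Threading.Thread(Initialize);
    thread.Start();
    thread.Join(); // Safe: no static constructor is in flight here.
  }
}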


Next time on FAIC: We’re opening up the new Coverity office in Seattle! After which, we’ll take a closer look at the uses and abuses of the static constructor.

What is the defining characteristic of a local variable?

If you ask a dozen C# developers what a “local variable” is, you might get a dozen different answers. A common answer is of course that a local is “a storage location on the stack”. But that is describing a local in terms of its implementation details; there is nothing in the C# language that requires that locals be stored on a data structure called “the stack”, or that there be one stack per thread. (And of course, locals are often stored in registers, and registers are not the stack.)

A less implementation-detail-laden answer might be that a local variable is a variable whose storage location is “allocated from the temporary store”. That is, a local variable is a variable whose lifetime is known to be short; the local’s lifetime ends when control leaves the code associated with the local’s declaration space.

That too, however, is a lie. The C# specification is surprisingly vague about the lifetime of an “ordinary” local variable, noting that its lifetime is only kinda-sorta that length. The jitter’s optimizer is permitted broad latitude in its determination of local lifetime; a local can be cleaned up early or late. The specification also notes that the lifetimes of some local variables are necessarily extended beyond the point where control leaves the method body containing the local declaration. Locals declared in an iterator block, for instance, live on even after control has left the iterator block; they might die only when the iterator is itself collected. Locals that are closed-over outer variables of a lambda are the same way; they live at least as long as the delegate that closes over them. And in the upcoming version of C#, locals declared in async blocks will also have extended lifetimes; when the async method returns to its caller upon encountering an “await”, the locals live on and are still there when the method resumes. (And since it might not resume on the same thread, in some bizarre situations, the locals had better not be stored on the stack!)
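
A small example of my own makes the closed-over case concrete (MakeCounter is a name I invented): the local count survives long after MakeCounter returns, because the returned delegate closes over it.

static System.Func<int> MakeCounter()
{
  int count = 0;          // a "local", yet it outlives this method activation
  return () => ++count;   // the delegate captures count
}
// var next = MakeCounter(); next(); next(); // count persists across calls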

So if locals are not “variables on the stack” and locals are not “short lifetime variables” then what are locals?

The answer is of course staring you in the face. The defining characteristic of a local is that it can only be accessed by name in the block which declares it; it is local to a block. What makes a local truly unique is that it can only be a private implementation detail of a method body. The name of that local is never of any use to code lexically outside of the method body.