ATBG: Can I skip the lock when reading an integer?

Today on the Coverity Development Testing Blog’s continuing series Ask The Bug Guys, I answer a question that made its way to me from a Coverity customer: is it a good idea to remove a lock which only protects the read of an integer field? After all, that read is guaranteed to be atomic, so it seems safe. As is usually the case, the situation is unexpectedly complicated!  (Wait… is “usually unexpected” an oxymoron?)

UPDATE: A number of commenters have asked if marking the field volatile magically prevents reordering bugs. The specific problem with that proposal is that volatile reads and locks do not have the same semantics. A volatile read can be moved backwards in time with respect to a volatile write, and the x86 processor will actually do so, but a read, volatile or otherwise, cannot be moved backwards in time past the beginning of a lock. The more general problem is: we have a toy example that is greatly simplified from the real code, and therefore we don’t know what invariants the real code relies upon. Trying to deduce whether a real gun is safe by examining a toy gun is a dangerous proposition.

I have posted a follow-up article to address some of these concerns.
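To ground the discussion, here is a minimal sketch of the pattern in question (all names are hypothetical; this is not the customer’s code). The read of a 32-bit int is guaranteed atomic either way; what the lock adds is the ordering guarantees discussed above.

```csharp
using System;

class Counter
{
    private readonly object sync = new object();
    private int value; // 32-bit, so a torn read is impossible

    public void Increment()
    {
        lock (sync) { value++; }
    }

    // Tempting: the read of an int is atomic, so why pay for the lock?
    // But this read does not participate in the lock's ordering guarantees.
    public int ReadWithoutLock() => value;

    // The conservative version: the lock also constrains reordering.
    public int ReadWithLock()
    {
        lock (sync) { return value; }
    }
}
```

The temptation is to call ReadWithoutLock because it skips the Monitor round trip; the point of the article is that atomicity of the read alone does not justify that.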

As always, if you have questions about a bug you’ve found in a C, C++, C# or Java program that you think would make a good episode of ATBG, please send your question along with a small reproducer of the problem. We cannot promise to answer every question or solve every problem, but we’ll take a selection of the best questions that we can answer and address them on the dev testing blog every couple of weeks.

24 thoughts on “ATBG: Can I skip the lock when reading an integer?”

  1. I think the point you make that you should always take the safe route and use the lock is a good one. However, I think you go a bit overboard on the rhetoric implying that there is no possible performance reason for not using locks.

    Use of locks serializes access to a resource, so if you have some resource being used by multiple threads and there is a lot of lock contention, then you’ll effectively get multi-threaded code that provides no performance benefit over a single-threaded version. I think this is a much more likely reason that developers try to avoid using locks, not because they think the 10ns cost of taking an uncontested lock is significant.

    I think addressing that scenario in particular (contested locks) would be an interesting blog post.

    • I hear that response a lot, and my response to that is: if you have a problem with contention on a lock that reads a single field then you are highly likely to have some really bad architectural flaw in your application. Find that flaw and fix it!

      If the problem is “I’m spending too much time at stoplights” then the solution is not to start running red lights and hoping for the best; it’s to build a freeway that doesn’t have stoplights in the first place.

      • A reasonable reaction to “I’m spending too much time waiting at red stop signs” may be to look, before crossing the last road prior to an intersection, to see whether there seems to be significant cross traffic, and to try an alternate route if the cross traffic looks excessive. Observations of cross traffic while one is a block away will have both false positives and false negatives, but if one stops before actually proceeding into an intersection even if one didn’t notice cross traffic on the approach, false negatives won’t pose any danger. False positives and false negatives will degrade efficiency, but if both true positives and true negatives are sufficiently common, they may improve efficiency more than the false observations degrade it.

      • Using a monitor to protect a single field is truly the equivalent of using a cannon to kill a fly. Other, lighter solutions exist that are better suited, but most of the time this won’t actually matter, as the lock is held only briefly (in fact, in the uncontended case no condition variable is involved; a simple interlocked operation is used).

        So if one wants something lighter, memory barriers or a single interlocked operation can be used, but that is much more complicated and error-prone; and if that level of performance is needed, why is this being done on a stop-the-world GC platform? :)

  2. Actually there’s the not-particularly-rare example of a statistical counter – but there the advantage is removing the lock on the writer side, not the reader side (however, in the example you posted the writers are already serialised, so it won’t change anything).

    Also, how do you have 20ns locks when an atomic op takes 40?

    • I was not intending to give an exact cost, which depends on all kinds of factors and implementation details of the chip, operating system and runtime.

      My point is that the cost is low enough that in a world where our apps routinely download billion-byte files over transcontinental cables, a penalty on the order of magnitude of nanoseconds is not what stands between your application and success in the marketplace.

    • For statistical counters you can always use a volatile int, with Interlocked.Increment and/or Interlocked.Decrement to modify it. Reads of such a volatile int are okay “as-is”, as long as you don’t make any executive decisions based on it.
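That suggestion can be sketched as follows (a hypothetical class; Interlocked.Increment and Volatile.Read are real BCL methods):

```csharp
using System.Threading;

class StatCounter
{
    private int count; // mutated only via Interlocked operations

    public void Hit() => Interlocked.Increment(ref count);

    // Lock-free snapshot: adequate for statistics and display,
    // but not for decisions that must observe the very latest value.
    public int Approximate() => Volatile.Read(ref count);
}
```

The writers pay only for an interlocked add rather than a full monitor, which is where the savings in the statistical-counter scenario comes from.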

    • That’s what pipelining is for. It goes down to the circuit level. If your logic gates are arranged just right, you can begin feeding the next op in before the previous op finishes.

      I have an adder in my personal Minecraft server into which I can pump pairs of numbers at 2-tick intervals, even though it takes 6 ticks to complete any given one of them. The inputs are buffered and the outputs cached. The pipeline tying it into everything else is built with this in mind for every component it interfaces with.

    • I think “usually unexpected” isn’t an oxymoron, because there “usually” is describing the frequency at which it is unexpected, as in it is sometimes expected and sometimes unexpected. There is no contradiction (to me at least) between the description of frequency and the unexpectedness.

      In the context itself, “usually” is describing the frequency when it is *the case* of being unexpectedly complicated, so in that case (heh) the two are not even directly connected.

      But I think that’s enough grammar nazism for today.

  3. I have another question regarding guarantees: if a method contains a lock, may I assume no value read before the method call will be cached, and that no access will be reordered across that method call?

    In a practical example, does ReaderWriterLockSlim have the same guarantees as the lock statement?

    If “yes”, are those rules valid for every method call, or does the compiler/runtime “mark” methods that contain locks?

    • The only guarantee you get about the lock is the one I said: reads cannot move backwards in time from any point after the lock is entered to before the lock is entered. Writes cannot move forwards in time from any point before the lock is exited to any point after the lock is exited. That’s the guarantee you get.

      ReaderWriterLockSlim does have the same effect: a read is not moved backwards in time to before the lock being entered, and a write does not move forward in time to after the lock is exited. This is not guaranteed by the C# language of course, but it would be crazy for the implementers of the lock not to have the same semantics.

      These semantics are provided by the implementation of the method.
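Concretely, the C# lock statement expands to roughly the following Monitor-based pattern (a sketch of the C# 4-era codegen; class and field names are made up), which is where the two one-way guarantees above live:

```csharp
using System.Threading;

class Account
{
    private readonly object sync = new object();
    private int balance;

    public int ReadBalance()
    {
        int result;
        bool lockTaken = false;
        try
        {
            // Acquire semantics: reads after this point
            // cannot be moved above it.
            Monitor.Enter(sync, ref lockTaken);
            result = balance;
        }
        finally
        {
            // Release semantics: writes before this point
            // cannot be moved below it.
            if (lockTaken) Monitor.Exit(sync);
        }
        return result;
    }
}
```

Monitor.Enter provides the acquire fence and Monitor.Exit the release fence, matching the “reads can’t move backwards past the entry, writes can’t move forwards past the exit” description.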

      • Thanks, I made a small test program:

            using System;
            using System.Threading;

            class Program
            {
                static void Main(string[] args)
                {
                    int x = 0;
                    int? y = 0;
                    int z;
                    ThreadPool.QueueUserWorkItem(o => { Thread.Sleep(1000); x = 1; });
                    while (x == 0)
                        z = y.GetValueOrDefault();
                }
            }

        Compiling with “Release” and executing without debugging, it never ends: infinite loop. But if I replace GetValueOrDefault() with Value, then it exits after the one-second wait.

        About the current implementation, is it correct to assume it will:
        1) Inline the GetValueOrDefault() method call;
        2) Not inline the Value property;
        3) Not cache values across non-inlined method calls;
        4) None of the above is actually guaranteed by the specification.

        ps: I am not a native English speaker and not very communicative even in my native language.

        • You can’t make those assumptions. The relevant part of Eric’s answer is regarding reads/writes being moved around as long as they appear to behave the same from the perspective of a single thread.

          The reason you are potentially seeing an infinite loop is that your loop MIGHT be optimized into something akin to a “while(true)” because, as far as a single thread can tell, that value is never directly changed and so using the equivalent to “while(0 == 0)” would appear to behave the same regardless of whether it was cached or retrieved fresh every time.

          Even if that variable is changed by another thread, the original thread is free to continue referring to a cached value instead of going back and getting the ‘real’ value.

        • First of all: the behaviour of looping forever is permitted because as I said before, the optimizer can choose to re-order or eliminate reads and writes if doing so cannot make a difference in a single-threaded program, which is clearly the case here.

          To address your specific questions: it is in theory never valid to assume that the optimizer will or will not do any particular thing; that’s an implementation detail of the runtime subject to change at any time.

          In practice, the answers to your questions are:

          1) z = y.GetValueOrDefault() is equivalent to z = y.hasValue ? y.value : 0 which is simple enough to inline.

          2) z = y.Value is equivalent to if (!y.hasValue) throw ...; z = y.value. Because there is a throw in there, the jitter is far less likely to try to inline it.

          3) This is a good guess; I don’t know the answer one way or the other.

          4) None of this is guaranteed in any way. This is all implementation detail, as I said.

          PS: Your English is fine!
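For completeness, one way to make a loop like the test program’s exit reliably regardless of inlining decisions is to force a fresh read on each iteration with Volatile.Read (a sketch; the class name and timing constant are arbitrary):

```csharp
using System;
using System.Threading;

class VolatileLoopDemo
{
    static int x;

    public static void Main()
    {
        ThreadPool.QueueUserWorkItem(_ => { Thread.Sleep(100); Volatile.Write(ref x, 1); });

        // Volatile.Read defeats the hoist-to-while(true) optimization:
        // the field must be re-read from memory on every iteration.
        while (Volatile.Read(ref x) == 0) { }

        Console.WriteLine("loop exited");
    }
}
```

This is a fix for the caching symptom only; as the update at the top of the article notes, volatile reads are not a general substitute for the ordering guarantees of a lock.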

          • Thanks Chris Hannon and Eric Lippert for the answers. Don’t worry, I am not going to build nuclear power plant control software based on those assumptions; I just wanted to learn more about how the current implementation of the CLR actually works.

  4. There are some contexts in which reading a field without a lock may be somewhat analogous to the IsAlive property of a WeakReference, which can’t really say that a reference is alive, but might be able to say that it’s dead. If a field reports that data “probably” isn’t ready, and if code can do something more useful than wait for the data to become ready, checking the field without the lock may be helpful, especially if the lock might be held for a while by whatever is trying to get the data.

  5. Some additional info regarding the C# memory model and optimization can be found here: (does this guy still work for Microsoft? If yes, why no more recent blog posts? But that’s BTW).
    Also, is ‘usually unexpected’ an oxymoron? Well, I’m not much of a linguist, but I’d say that if it can be substituted by ‘normally not expected’ or somesuch, then it should be OK. 🙂

    • And in the article, it’s not ‘usually unexpected’, but rather, ‘usually complicated’. That’s definitely fine, even if the complications are unexpected! 🙂


