The psychology of C# analysis

The organizers of the recent Static Analysis Symposium¹ were kind enough to invite me to give the opening talk. Now, this is a conference where the presentations have titles like "Efficient Generation of Correctness Certificates for the Abstract Domain of Polyhedra"; I know what all those words mean individually, it's just them next to each other in that order that I don't understand. Fortunately for me, the SAS organizers invite people in industry to give talks about the less academic, more pragmatic aspects of program analysis, which I was happy to do.

They also let me pad my presentation with funny pictures of cats, which helped a lot.

Unfortunately I don't have a recording of the talk, but my slides are posted here if you want to check them out.

Special thanks to Scott Meyer of BasicInstructions.net who was kind enough to allow me to use his comic about informative presentations in my informative presentation.

  1. Conveniently held four blocks from my office.

17 thoughts on “The psychology of C# analysis”

    • It is not; our target market is software companies, not individual developers.

      However, Coverity's C/C++ and Java analysis is available for free for registered open source products. (C# analysis is unfortunately not offered at this time, though I hope that we can start offering that service at some point.) See http://scan.coverity.com/ for details.

  1. Hmm… when you say "I don't have a recording", does that mean that you personally don't have one, but one exists? Or that one exists and you don't know about it? Or that one does exist and maybe someone you know has it? Or, well, is there any chance we'll ever get to see the presentation? The slide deck alone was brilliant!

  2. "I know what all those words mean individually, it's just them next to each other in that order that I don't understand."

    This is without a doubt one of the best ways I've ever read to say that. I am going to quote you on this henceforth.

  3. Sounds like static analysis faces the same issues we as programmers face when trying to get people out of wasteful "traditional" techniques. Things are viewed as OK because people believe they work, but no one cares to notice the elephant in the room.

  4. As a recent purchaser of Coverity C# analysis, I say, "bring on the 'churn'!" I would be ecstatic if our Coverity defect count jumped from 150 (with 34% false positives) to 1500 (but hopefully with a lower FP rate).

    Raygun (http://raygun.io) is reporting the crashes so I know they're there. I'd love to find them before the customers do.

    Please don't dumb the tools down by assuming that we'll be upset if the new version works better (and reports more defects). At our company, developers buy development tools, and we don't perversely incentivise management to keep some arbitrary reported number low. I want to fix crashes and data corruption bugs before they're shipped to customers, and I'd love it if Coverity would help me do that.

  5. At a company I used to work for, we discussed FindBugs one day. I said it was great because for projects it had never been run on, almost 50% of the issues it reports are actual, we-should-fix-this bugs (not just discounting false positives, but also discounting reports of stuff that's technically wrong but not important enough to fix).
    A coworker dismissed it because he thought that rate was not good enough to bother.
    I didn't - and still don't - understand that attitude. It usually doesn't take long to check an item and decide it's a false positive or something you don't want to spend time fixing, and eliminating 10 actual bugs by looking through 20 or even 50 items seems like a good use of time.

    • OK, now suppose that instead of eliminating ten bugs out of twenty or fifty, you're eliminating a thousand bugs out of five thousand reports, four thousand of which are a waste of time. What's the cost, and how many new features could that money buy?

      • I see the point, but our projects were small enough that you could check (though perhaps not fix) all reports for one in an afternoon, IIRC.

        Only a fraction of those bugs will actually be noticed if they are not fixed, but that event could have big costs attached (say if it occurs during a presentation, or if user data gets corrupted), and even if they don't, tracking the bug down by its symptoms can be much more time-consuming than finding it through static analysis. I would assume that adding new features will also be easier once you eliminate those bugs.

        Additionally, this is something I like to do when my current task is to familiarize myself with a project I haven't worked on yet, since you get lots of little tasks spread all over the code and so get a bit of a tour :) .

        I don't have any actual data on when the effort exceeds the benefits, though. It depends a lot on the importance of the project, the consequences of failure, and the expected lifetime of the project. Do you have a rule of thumb?
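        In the absence of real data, the back-and-forth above can at least be framed as back-of-the-envelope arithmetic. The sketch below is purely illustrative; every number in it (minutes per triaged report, hours saved per real bug, true-positive rates) is an assumption made up for this example, not a figure from the discussion or from any tool:

```python
# Hypothetical triage payoff model. All parameter values are assumptions
# chosen for illustration; plug in your own team's numbers.

def triage_payoff_hours(reports, true_positive_rate,
                        minutes_per_review, hours_saved_per_real_bug):
    """Net hours saved (benefit minus cost) by triaging every report."""
    real_bugs = reports * true_positive_rate
    cost = reports * minutes_per_review / 60          # hours spent reviewing
    benefit = real_bugs * hours_saved_per_real_bug    # debugging time avoided
    return benefit - cost

# Small project: 50 reports, 20% real bugs, 5 minutes to check each item,
# each bug caught early saves an assumed 2 hours of symptom-chasing later.
print(triage_payoff_hours(50, 0.20, 5, 2))

# Large backlog: 5000 reports at the same rates. The sign doesn't change,
# but the absolute review cost (5000 * 5 / 60 ≈ 417 hours) is now a real
# budget line, which is the "what could that money buy" objection above.
print(triage_payoff_hours(5000, 0.20, 5, 2))
```

        The model is deliberately crude: it ignores that the first bugs found are often the cheapest, and that review speed improves with familiarity. But it makes the disagreement concrete: the answer hinges on the hours-saved-per-bug estimate, not on the false-positive rate alone.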
