Finishing off my series of answers to questions people asked during my recent webcast that I didn’t have time to get to, here’s some short Q&A:
Does the analyzer require that we share our code assets with you?
No; the analyzer and the defect database are local to your network. The vast majority of customers have strict policies about who gets to see code assets. There are some scenarios in which you might want to ship your code to us for analysis: open source projects, for example, where there is no concern about the privacy of the code. I’ll discuss these scenarios in more detail in future blog posts.
Can code analysis be used as a gate on checkins? That is, can we say that if analysis finds a new defect then the checkin is rejected?
Yes. The traditional model is to run the analyzer nightly and discover new defects checked in that day, but some customers want stricter gates on their checkins. We have seen that pretty much every such customer has their own custom-built system for gating checkins, so it is a bit tricky to do this “out of the box”. However we have a lot of experience helping customers integrate the analyzer into their systems.
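As a rough illustration of what such a gate might look like, here is a hypothetical C# sketch that compares the analyzer’s current defect list against a baseline and rejects the change if anything new appears. The file names and format are invented for illustration; any real integration depends entirely on your checkin system.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;

class CheckinGate
{
    static int Main()
    {
        // Each line of these files is assumed to be a stable fingerprint
        // identifying one defect; the file names and format are made up.
        var baseline = new HashSet<string>(File.ReadAllLines("baseline-defects.txt"));
        var current = new HashSet<string>(File.ReadAllLines("current-defects.txt"));

        // Any defect present now but absent from the baseline is "new".
        var newDefects = current.Except(baseline).ToList();
        foreach (var defect in newDefects)
            Console.Error.WriteLine("New defect introduced: " + defect);

        // A nonzero exit code tells the checkin system to reject the change.
        return newDefects.Count == 0 ? 0 : 1;
    }
}
```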
Does the analyzer work well with “inversion of control”, “dependency injection”, “event driven programming” and similar patterns?
It works reasonably well. Many of these design patterns are intended to facilitate decoupled systems, where two pieces of code communicate over an abstract but well-defined boundary that hides the implementation details. Living in a decoupled world means that you have some uncertainty about what code you’re actually calling; if you can’t figure out what code you’re calling, then the static analyzer is going to have difficulty as well. That said, the analyzer has excellent heuristics for guessing what code is actually going to run when you invoke a method through an interface, virtual method or delegate.
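To make the difficulty concrete, here is a minimal C# sketch of dependency injection through an interface. The names (ILogger, FileLogger, Service) are invented for illustration and are not from the product.

```csharp
public interface ILogger
{
    void Log(string message);
}

public class FileLogger : ILogger
{
    public void Log(string message) { /* write to a file */ }
}

public class Service
{
    private readonly ILogger logger;

    // The concrete logger is chosen at runtime, often by a DI container
    // configured elsewhere; this constructor only sees the interface.
    public Service(ILogger logger)
    {
        this.logger = logger;
    }

    public void DoWork()
    {
        // Which Log implementation runs here? A static analyzer has to
        // guess, typically by considering the implementations it can see.
        this.logger.Log("working");
    }
}
```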
Do statistical checkers consider statistics deduced from libraries?
I mentioned in my talk that some of the “checkers” — the algorithms which attempt to find defects in source code — use statistical rather than logical methods to deduce a potential defect. For example, if we see that 19 times out of 20, a method override void M() calls base.M(), then the 20th time is likely to be a mistake. If we see that 19 times out of 20, a field is accessed under a particular lock, then the 20th time is likely to be a mistake. And so on.
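Here is a small C# sketch of that first pattern; the class and method names are invented for illustration.

```csharp
public class Widget
{
    public virtual void OnClose() { /* release shared resources */ }
}

public class GoodWidget : Widget
{
    public override void OnClose()
    {
        // Matches the dominant pattern: the base implementation is called.
        base.OnClose();
    }
}

public class SuspiciousWidget : Widget
{
    public override void OnClose()
    {
        // No call to base.OnClose(). If nearly every other override calls
        // the base method, a statistical checker would flag this one as a
        // likely mistake.
    }
}
```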
Yes, statistics are deduced from library code where you only have the assembly, not the source code. However, defects are not reported in library code, because how would you fix them? You don’t have the source code.
Is there more than just finding defects?
In my webcast I talked almost entirely about the analyzer, because that’s what I work on. Finding great defects is definitely the heart of the product, but finding the defects is just the beginning. We’ve got sophisticated tools for managing new and legacy defects, deducing what code is not adequately tested, deducing what tests need to be run on a given change, reporting on progress against quality goals, detecting violations of quality metrics policies, and so on.