Casting: making a green sand mold

Today another episode in my seldom-updated series about building a home aluminum foundry.

The technique I use for casting aluminum is called “green sand” casting not because the sand is green (though the sand I use is in fact slightly olive coloured) but because the sand is bonded with clay and moistened with water, rather than bonded with oil. I made the sand myself; it’s a mixture of about ten parts olivine sand to one part finely powdered bentonite clay, “tempered” with water until it feels right. (Use a spray bottle set to a fine mist and stir the sand as you temper it.) It should feel like perfect sand-castle-building material: wet enough to hold its shape but not so wet that you can squeeze water out of it. If you can make a “snowball” of sand with a fist and break it cleanly in half, that’s probably good.


Short questions

Finishing off my series of questions people asked during my recent webcast that I didn’t have time to answer, some short Q&A:

Does the analyzer require that we share our code assets with you?

No; the analyzer and defect database are local to your network. The vast majority of customers have strict policies about who gets to see code assets. There are some scenarios in which you might want to ship your code to us for analysis: open source projects, for example, where there is no concern about privacy of the code. I’ll discuss these scenarios in more detail in future blog posts.

Can code analysis be used as a gate on checkins? That is, can we say that if analysis finds a new defect then the checkin is rejected?

Yes. The traditional model is to run the analyzer nightly and discover new defects checked in that day, but some customers want stricter gates on their checkins. We have seen that pretty much every such customer has their own custom-built system for gating checkins, so it is a bit tricky to do this “out of the box”. However we have a lot of experience helping customers integrate the analyzer into their systems.

Does the analyzer work well with “inversion of control”, “dependency injection”, “event driven programming” and similar patterns?

It works reasonably well. Many of these design patterns are intended to facilitate decoupled systems, where two pieces of code communicate over an abstract but well-defined boundary that hides the implementation details. Living in a decoupled world means that you have some uncertainty about what code you’re actually calling; if you can’t figure out what code you’re calling then the static analyzer is going to have difficulty as well. That said, the analyzer has excellent heuristics for guessing what code will actually run when you invoke a method on an interface, virtual method or delegate.
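As a minimal sketch of the problem, with invented names:

```csharp
interface ILogger { void Log(string message); }

class Processor
{
    private readonly ILogger logger;   // injected; concrete type unknown here

    public Processor(ILogger logger) { this.logger = logger; }

    public void Run()
    {
        // Which Log implementation executes? Only the composition root
        // that constructed this Processor knows; the analyzer has to
        // guess from the implementations it can see.
        logger.Log("running");
    }
}
```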

Do statistical checkers consider statistics deduced from libraries?

I mentioned in my talk that some of the “checkers” — the algorithms which attempt to find defects in source code — use statistical rather than logical methods to deduce a potential defect. For example, if we see that 19 times out of 20, a method override void M() calls base.M(), then the 20th time is likely to be a mistake. If we see that 19 times out of 20, a field is accessed under a particular lock, then the 20th time is likely to be a mistake. And so on.
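The override pattern might look like this; a hypothetical sketch with invented names:

```csharp
class Widget
{
    public virtual void M() { /* essential bookkeeping */ }
}

// Nineteen overrides in the codebase look like this one...
class TypicalWidget : Widget
{
    public override void M()
    {
        base.M();   // preserves the base class behaviour
        // ... additional work ...
    }
}

// ...and the twentieth looks like this. Intentional, or a defect?
class SuspiciousWidget : Widget
{
    public override void M()
    {
        // base.M() is never called.
    }
}
```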

Yes, statistics are deduced from library code where you only have the assembly, not the source code. However, defects are not reported in library code, because how would you fix them? You don’t have the source code.

Is there more than just finding defects?

In my webcast I talked almost entirely about the analyzer, because that’s what I work on. Finding great defects is definitely the heart of the product, but finding the defects is just the beginning. We’ve got sophisticated tools for managing new and legacy defects, deducing what code is not adequately tested, deducing what tests need to be run on a given change, reporting on progress against quality goals, detecting violations of quality metrics policies, and so on.

Analyzing test code

Continuing with my series of answers to questions that were asked during my webcast last week…

Do the “checkers” (algorithms that find specific defect patterns) find defects in unit testing code?

If you want them to, yes.

First off, as I described in my talk, we deduce which code you want analyzed by monitoring the process that makes a clean build: by watching every invocation of the C# compiler during a clean build we know precisely which source code is being compiled, which assemblies are being referenced, and so on. If you don’t want your test code analyzed, don’t build it while the build capture system is running, and it won’t be analyzed. If you do want your test code analyzed, do build it.

Second, we have a feature called “Components” which allows you to separate defect sources into different logical groupings. That way, in our defect presentation tool you can say “I only want to see defects found in the testing component”, and so on.

There are definitely some subtleties to consider when looking for defects in test code. For example, if you have a test which contains:

while(i++ < 10);

then that test has a genuine “stray semicolon” defect. (Of course the C# compiler will note this one as well, but that’s not germane to my point. The point is that this is clearly a case where the test is not behaving as desired and an analysis tool can determine that.)
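To spell out why, assuming i starts at zero; a minimal sketch:

```csharp
int i = 0;
while (i++ < 10);   // stray semicolon: the loop "body" is this empty statement
// i is now 11, and whatever statement follows the loop ran exactly
// once, not ten times as the author presumably intended.
```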

By contrast, I’ve seen test code like this:
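Something along these lines, with invented names, in NUnit-style syntax (not the actual code from the original post):

```csharp
[Test]
public void BarRejectsNull()
{
    var foo = new Foo();
    // Deliberately call Bar improperly; the test passes only if Bar
    // throws, which to a static analyzer looks exactly like a defect.
    Assert.Throws<ArgumentNullException>(() => foo.Bar(null));
}
```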


The static analyzer will likely report that Bar is being called improperly, but that’s the whole point. It can be tricky to detect that this is an “intentional defect” and suppress it. Testing code tends to have a high concentration of intentional defects.

Copy-paste defects

Continuing with my series of answers to questions that were asked during my webcast on Tuesday:

The copy-paste checker example you showed was interesting. I’ve heard that NASA disallows copy-pasting in code because it is so error prone; is this true?

For readers who did not attend the talk: my favourite Coverity checker looks for code where you copied some code from one place, pasted it in another, and then made a series of almost, but not quite, consistent edits. An example taken from real-world code is in the full post.
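As a hypothetical illustration of the pattern, with invented names (not the real-world example from the talk):

```csharp
// The original, validating the X coordinate:
if (point.X < minX || point.X > maxX)
    throw new ArgumentOutOfRangeException("point");

// Pasted and incompletely edited: one X did not become a Y.
if (point.Y < minY || point.X > maxY)
    throw new ArgumentOutOfRangeException("point");
```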

Analysis vs code review

Thanks to everyone who came out to my “webinar” talk today; we had an excellent turnout. Apologies for the problems with the slides; there is a performance issue in the hosting system: it works fine when not under load, but when lots of people are using it, the slides do not advance as quickly as they should. Hopefully the hosting service will get it sorted out.

As I mentioned last time, the recording will be edited and posted on the Coverity blog; I’ll post a link when I have one.

We got far, far more questions from users than we could possibly answer in the few minutes we had left at the end, and far too many to fit into one reasonably-sized blog post, so I’m going to split them up over the next few episodes. Today:

What percentage of defects does the Coverity analyzer find that should have been caught by code review?

Avoiding C# defects talk Tuesday

Hello all, I have been crazy busy these last few weeks either traveling for work or actually programming with Roslyn — woo hoo! — and have not had time to blog. I’ve been meaning to do a short tour of the Roslyn codebase, now that it is open-sourced, but that will have to wait for later this summer.

Today I just want to mention that tomorrow, July 15th, at 8:30 AM Pacific Daylight Time, I’ll be doing a live talk broadcast on the internet where I’ll describe how the Coverity static analyzer works and what some of the most common defect patterns we find are. In particular I’m very excited by a new concurrency issue checker that looks for incorrect implementations of double-checked locking, and other “I avoided a lock when I should not have” defects. My colleague Kristen will also be talking about the new “desktop” mode of the analyzer.
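The defects that checker targets typically have the shape of the classic double-checked locking pattern with a non-volatile field. A hypothetical sketch of the defect, with invented names (not Coverity’s own example):

```csharp
public sealed class Cache
{
    private static Cache instance;                     // defect: not volatile
    private static readonly object padlock = new object();

    public static Cache Instance
    {
        get
        {
            if (instance == null)                      // unsynchronized read
            {
                lock (padlock)
                {
                    if (instance == null)
                        instance = new Cache();
                }
            }
            return instance;
        }
    }
}
```

On a weak memory model the unsynchronized read can observe a reference to a not-yet-fully-constructed object; declaring the field volatile, or simply using Lazy&lt;Cache&gt;, avoids the problem.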

If you’re interested, please register beforehand at this link. Thanks to Visual Studio Magazine for sponsoring this event.

If you missed it: the webcast will be recorded and the recording will be posted on the Coverity blog in a couple of days. The recording will also be posted on the Visual Studio Magazine site link above for 90 days.