Earlier on FAIC I asked for code that parses as an expression that produces different results for
s = s + expr;
s += expr;
This is a pretty easy puzzle; the answers posted in the comments could largely be grouped into two buckets. The first bucket, which is a bit silly, contains expressions that always produce a different value, so of course they produce different results in those two cases.
s = s + Guid.NewGuid();
produces a different result than
s += Guid.NewGuid();
but then again, it also produces different results every time you call
s += Guid.NewGuid();
so that's not a particularly interesting answer.
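A more interesting family of answers exploits operator precedence: the + operator binds more tightly than ??, which in turn binds more tightly than assignment, so the two statements can parse the appended expression against different subexpressions. Here is a sketch of one such candidate of my own devising (not necessarily the answer the original post settled on):

```csharp
using System;

class Program
{
    static void Main()
    {
        string s = "";
        // Parses as s = ((s + null) ?? "different"). Since s + null is "",
        // which is not null, the ?? operator discards "different".
        s = s + null ?? "different";
        Console.WriteLine(s == "");          // True

        s = "";
        // Parses as s = s + (null ?? "different"), which appends "different".
        s += null ?? "different";
        Console.WriteLine(s == "different"); // True
    }
}
```

Both statements are legal, yet one leaves s empty and the other assigns "different" to it.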
Happy Canada Day everyone! I'm back from a fabulous week in California and looking forward to taking a break from giving PowerPoint presentations.
Today, to follow up on my recent series on string concatenation, here's a fairly easy little puzzle. I have a perfectly ordinary local variable:
string s = "";
Can you come up with some code that parses as a legal expression such that the statements
s = s + your_expression;
s += your_expression;
are both legal but produce completely different results in s? Post your proposals in the comments and I'll give my answer later this week.
Last time I challenged you to find a value which does not round correctly using the algorithm
Math.Floor(value + 0.5)
The value which does not round correctly is the double 0.49999999999999994, which is the largest double that is smaller than 0.5. With the given algorithm this rounds up to 1.0, even though clearly 0.49999999999999994 is less than one half, and therefore should round down.
What the heck is going on here?
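A short sketch shows the mechanism: the exact sum 0.49999999999999994 + 0.5 is 1 − 2⁻⁵⁴, which is not representable as a double. It falls exactly halfway between the two nearest doubles, and round-to-even rounds it up to exactly 1.0, which Math.Floor then leaves alone:

```csharp
using System;

class Program
{
    static void Main()
    {
        double v = 0.49999999999999994;         // largest double less than 0.5
        Console.WriteLine(v < 0.5);             // True
        Console.WriteLine(v + 0.5 == 1.0);      // True: the addition itself rounds up
        Console.WriteLine(Math.Floor(v + 0.5)); // 1, though v should round to 0
    }
}
```

The rounding error happens in the addition, before Math.Floor ever runs.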
The intention of this method is to round a double to the nearest integer. If the double is exactly half way between two integers then it rounds to the larger of the two possibilities:
static double MyRound(double d)
{
    return Math.Floor(d + 0.5);
}
Is it correct? Can you find a value for which it does not give the mathematically correct value?
UPDATE: The answer is in the comments, so if you don't want spoilers, don't read the comments.
Next time on FAIC: The answer, of course.
I had a great time hanging out with my colleagues Bob and Amie yesterday at the HUB, talking with students, spotting defects and handing out yo-yos. Thanks to all who came out, and to the SWE for putting on a great event.
To follow up on the puzzle I posted yesterday: the terrible flaw, which most people spotted right away, was that the expression geteuid != 0 was of course intended to be geteuid() != 0. The code as written compares the function pointer to null, which it never is, and therefore the right side is always true, and therefore the conditional falls into the "fail with an error" branch more often than it ought to. The program succeeds if the user really is root, or if they are "sudo" root. It is intended to succeed also if the user is "effectively" root, but it does not. Thank goodness in this case the program fails to a secure mode! It is not at all difficult to imagine a situation where such an accidental function pointer usage causes the program to fail into the insecure mode. In any event, Coverity's checker catches this one. (And of course more modern languages like C# do not allow you to use methods in a context other than a call or delegate conversion.)
There are of course any number of other flaws in this fragment. First, it's now considered bad form to check for root like this; rather, check to see if the user is granted an appropriate permission. Second, the code is hard to read if you do not know the convention that the root user gets magical id zero by default; the code could be much more self-documenting. And so on; several people made good observations in the comments.
Next time on FAIC: You can build a hollow house out of solid bricks, and you can build a deadlocking program out of threadsafe methods too.
Today, a puzzle for you.
We've been talking about how the Roslyn C# compiler aggressively optimizes nested lifted unary operators and conversions by using a clever technique. The compiler realizes the inner operation as a conditional expression with a non-null nullable value on the consequence branch and a null nullable value on the alternative branch, distributes the outer operation to each branch, and then optimizes the branches independently. That then gives a conditional expression that can itself be the target of further optimizations if the nesting is deeper.
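As a concrete illustration of the technique as described in this series (a sketch, not actual compiler output): for a doubly nested lifted negation, the naive expansion builds one conditional per lifted operation, but distributing the outer negation onto each branch of the inner conditional lets the two null checks collapse into one.

```csharp
using System;

class Program
{
    static void Main()
    {
        int? x = 123;

        // Naive expansion of -(-x): one conditional per lifted operation.
        int? inner = x.HasValue ? new int?(-x.GetValueOrDefault()) : new int?();
        int? naive = inner.HasValue ? new int?(-inner.GetValueOrDefault()) : new int?();

        // Distributing the outer negation onto each branch of the inner
        // conditional and simplifying leaves a single null check.
        int? optimized = x.HasValue
            ? new int?(-(-x.GetValueOrDefault()))
            : new int?();

        Console.WriteLine(naive == optimized); // True
    }
}
```

The optimized form tests x.HasValue once rather than materializing and re-testing an intermediate nullable.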
This works great for lifted conversions and unary operators. Does it also work for binary operators? It seems like it would be a lot harder to make this optimization work for a lifted binary operator where both operands are themselves lifted operations. But what if just one of the operands was a lifted operation, and the other operand was guaranteed to be non-null? There might be an opportunity to optimize such an expression. Let's try it. Suppose
X() and Y() are expressions of type
int? and that
Z() is an expression of type
int, and consider:
int? r = X() * Y() + Z();
We know from our previous episodes that operator overload resolution is going to choose lifted multiplication for the inner subexpression, and lifted addition for the outer subexpression. We know that the right operand of the lifted addition will be treated as though it was new int?(Z()), but we can optimize away the unnecessary conversion to int?. So the question is: can the C# compiler legally generate code as though the user had written:
int? tempX = X();
int? tempY = Y();
int tempZ = Z();
r = tempX.HasValue & tempY.HasValue ?
new int?(tempX.GetValueOrDefault() * tempY.GetValueOrDefault() + tempZ) :
new int?();
If you think the answer is "yes" then the follow-up question is: can the C# compiler legally make such an optimization for all nullable value types that have lifted addition and multiplication operators?
If you think the answer is "no" then the follow-up questions are: why not? and is there any scenario where this sort of optimization is valid?
Next time on FAIC we'll be kind to our fine feathered friends; after that, we'll find out the answer to today's question.
Eric is crazy busy at Coverity's head office; this posting was pre-recorded.
As I said last time, that was a pretty easy puzzle. The solution is that either FooBar or the type of local variable x can be a type parameter.
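A sketch of the type-parameter version, assuming FooBar is just some class with no relationship to the type parameter:

```csharp
using System;

class FooBar { }

class Program
{
    static bool M<T>(T x)
    {
        // Legal: "is" may test a value of unconstrained type T against any type.
        return x is FooBar;
        // By contrast, the cast FooBar foobar = (FooBar)x; would not compile
        // here, because there is no conversion from T to FooBar.
    }

    static void Main()
    {
        Console.WriteLine(M(new FooBar())); // True
        Console.WriteLine(M("hello"));      // False
    }
}
```

When T is inferred to be FooBar, the is test returns true at runtime even though the compiler would reject the corresponding cast inside M.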
It is possible for a program with some local variable x to make the statement
bool b = x is FooBar;
assign true to
b at runtime, even though there is no conversion, implicit or explicit, from the type of x to
FooBar allowed by the compiler! That is to say that
FooBar foobar = (FooBar)x;
would not be allowed by the compiler in that same program.
Can you create a program to demonstrate this fact?
This is not a particularly hard puzzle but it does illustrate some of the subtleties of the
is operator that we'll discuss in the next episode.
Last time I asked if you could find the bug in the original version of my histogram code. Here's how I found it:
The first time I ran my histogram visualizer I asked for a Cauchy distribution with a minimum of -10 and a maximum of 10, and of course I got a graph that looks much like the one from my article of last week:
Looks perfectly reasonable; I guess my program is correct right out of the gate, because I am that awesome!
Then I went to make a graph of a uniform distribution with a minimum of zero and a maximum of one, but I forgot to update the actual query; it still gave me a Cauchy distribution. Here's that same Cauchy distribution this time graphed only from 0 to 1. Oh, the pain:
Which is obviously neither uniform nor Cauchy. Equally obvious: I am not sufficiently awesome to write a twenty-line program without a trivial floating point bug the first time.
The bug, which is very subtle in the first graph, was now obvious: the calculation to determine what the count is for the leftmost bucket is wrong. Why? Because converting a double to an integer simply discards the fractional part, effectively truncating towards zero, and "towards zero" is not downwards if any datum is negative. That means that the leftmost bucket got everything that was supposed to be in it, and everything that was supposed to be in the bucket to its left as well! The solution is either to take the floor of the number before turning it into an int, or to check to see if the double is in the right range before truncating it, not after.
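The two fixes described above look roughly like this sketch, using the min and multiplier names from the original method and a hypothetical out-of-range datum:

```csharp
using System;

class Program
{
    static void Main()
    {
        double min = 0.0;
        double multiplier = 10.0; // buckets / (max - min) for 10 buckets on [0, 1)

        double datum = -0.05;     // should land outside every bucket
        double scaled = (datum - min) * multiplier; // -0.5

        // Buggy: the cast truncates -0.5 toward zero, yielding index 0.
        int buggy = (int)scaled;
        // Fix 1: floor first, so negatives become -1 and fail the range check.
        int floored = (int)Math.Floor(scaled);
        // Fix 2: range-check the double before truncating at all.
        bool inRange = 0.0 <= scaled && scaled < 10.0;

        Console.WriteLine(buggy);   // 0: wrongly counted in the leftmost bucket
        Console.WriteLine(floored); // -1: correctly rejected
        Console.WriteLine(inRange); // False: correctly rejected
    }
}
```

Either fix keeps out-of-range negative data from being folded into the leftmost bucket.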
The original version of the histogram-generating code that I whipped up for the previous episode of FAIC contained a subtle bug. Can you spot it without going back and reading the corrected code?
private static int[] CreateHistogram(
    IEnumerable<double> data, int buckets, double min, double max)
{
    int[] results = new int[buckets];
    double multiplier = buckets / (max - min);
    foreach (double datum in data)
    {
        int index = (int) ((datum - min) * multiplier);
        if (0 <= index && index < buckets)
            results[index] += 1;
    }
    return results;
}
Note that of course if this were production code, instead of demo code I whipped up in five minutes, it would be a lot more robust in its error detection; the bug that I am looking for is a bona fide error in the logic of the method, rather than things like "the method does not verify that min is smaller than max", and so on.
A hint: the first time I ran this code and displayed the results, the generated histogram looked fine. Then I made a small change to the arguments and the resulting histogram image was obviously wrong. Can you spot the defect?
Next time on FAIC: The solution.