A question I occasionally get is, suppose I have code like this:

const double x = 200.0;
const double y = 0.5;
...
void M(double z)
{
    double r = z * x * y;
    ...

Does the compiler generate code as though you’d written `z * 100.0`?



One of the C# oddities I noted in my recent article was that creating a numeric type with less-than, greater-than, and similar operators requires implementing a lot of redundant methods, methods whose values could be deduced by simply implementing a comparator. For an example of such, see my previous series of articles on implementing math from scratch.

A number of people commented that my scheme does not work in a world with nullable arithmetic (or NaN semantics, which are similar enough that I’m not going to call out the subtle differences here). That reminded me that I’d been intending for some time to point out that when it comes to comparison operators, nullable arithmetic is deeply weird. Check out this little program:
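The program itself is behind the link, but here is a minimal sketch of the sort of weirdness involved (my own example, not necessarily the one from the post): the lifted comparison operators return false whenever either operand is null, so `x < y` and `x >= y` can both be false at the same time.

```csharp
using System;

class Program
{
    static void Main()
    {
        int? x = null;
        int? y = 1;

        // Lifted comparisons return false if either operand is null...
        Console.WriteLine(x < y);   // False
        Console.WriteLine(x >= y);  // False -- both false at once!

        // ...but lifted equality treats two nulls as equal.
        Console.WriteLine(x == null); // True
    }
}
```

In ordinary arithmetic `!(x < y)` and `x >= y` are interchangeable; with nullable operands they are not, which is exactly the kind of subtlety a comparator-based scheme has to contend with.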

This is a sequel to my 2009 post about division of long integers.

I am occasionally asked why this code produces a bizarre error message:

Console.WriteLine(Math.Round(i / 6000000000, 5));

where `i` is an integer.

The error is:

The call is ambiguous between the following methods: 'System.Math.Round(double, int)' and 'System.Math.Round(decimal, int)'

Um, what the heck?
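Here is a sketch of what I believe is going on (the post works through the full details): the literal `6000000000` does not fit in an `int`, so it has type `long`; `int / long` yields a `long`; `Math.Round` has no `long` overload, but `long` converts implicitly to both `double` and `decimal`, and neither conversion is better than the other, so overload resolution fails.

```csharp
using System;

class Program
{
    static void Main()
    {
        int i = 123;

        // 6000000000 is too big for an int, so the literal has type long.
        // int / long is long division, and Math.Round has no overload that
        // takes a long -- but long converts implicitly to BOTH double and
        // decimal, so this call is ambiguous and does not compile:
        //
        // Console.WriteLine(Math.Round(i / 6000000000, 5));  // error CS0121

        // One way out (assuming floating-point division was what was
        // intended): make the divisor a double, so both the division and
        // the overload choice are unambiguously in double.
        Console.WriteLine(Math.Round(i / 6000000000.0, 5));
    }
}
```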

Last time I explained why the designers of C# wanted to have both checked and unchecked arithmetic in C#: unchecked arithmetic is fast and dangerous, checked arithmetic is slightly slower but turns subtle, easy-to-miss mistakes into program-crashing exceptions. It seems clear why there is a “checked” keyword in C#, but since unchecked arithmetic is the default, why is there an “unchecked” keyword?
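As a quick refresher, here is a minimal sketch of the difference (my own example, not the post’s):

```csharp
using System;

class Program
{
    static void Main()
    {
        int max = int.MaxValue;

        // Unchecked arithmetic (the default for non-constant operands):
        // overflow silently wraps around.
        Console.WriteLine(unchecked(max + 1)); // -2147483648

        // Checked arithmetic: the same overflow throws at runtime.
        try
        {
            int boom = checked(max + 1);
            Console.WriteLine(boom);
        }
        catch (OverflowException)
        {
            Console.WriteLine("overflow caught");
        }
    }
}
```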

One of the primary design goals of C# in the early days was to be familiar to C and C++ programmers, while eliminating many of the “gotchas” of C and C++. It is interesting to see what different choices were possible when trying to reduce the dangers of certain idioms while still retaining both familiarity and performance. I thought I’d talk a bit about one of those today, namely, how integer arithmetic works in C#.


The nice people at DevProConnections asked me to write a beginner-level article for them about the perils of arithmetic in C#; of course these same perils apply to C, C++, Java and many, many other languages. Check it out!

(This is a follow-up article to my post on generating random data that conforms to a given distribution; you might want to read it first.)

Here’s an interesting question I was pondering last week. The .NET base class library has a method `Random.Next(int)` which gives you a pseudo-random integer greater than or equal to zero, and less than the argument. By contrast, the `rand()` function in the standard C library returns a random integer between 0 and `RAND_MAX`, which is usually 32767. A common technique for generating random numbers in a particular range is to use the remainder operator:

int value = rand() % range;

However, this almost always introduces some bias that causes the distribution to stop being uniform. Do you see why?
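To make the bias concrete, here is a small Python sketch (Python standing in for C here, with a typical `RAND_MAX` of 32767 assumed): if `rand()` produced each of the 32768 possible values exactly once, the low remainders would be hit one extra time.

```python
from collections import Counter

RAND_MAX = 32767   # typical value in C implementations
RANGE = 10000      # the desired range

# Suppose a perfectly uniform rand(): each value 0..RAND_MAX exactly once.
counts = Counter(v % RANGE for v in range(RAND_MAX + 1))

# 32768 == 3 * 10000 + 2768, so remainders 0..2767 occur 4 times each,
# while remainders 2768..9999 occur only 3 times each: not uniform.
print(counts[0], counts[2767], counts[2768], counts[9999])  # 4 4 3 3
```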


Last time we showed how you can take any two coprime positive integers `x` and `m` and compute a third positive integer `y` with the property that `(x * y) % m == 1`, and therefore that `(x * z * y) % m == z % m` for any positive integer `z`. That is, there always exists a “multiplicative inverse” that “undoes” the results of multiplying by `x` modulo `m`. That’s pretty neat, but of what use is it?


Here’s an interesting question: what is the behavior of this infinite sequence? WOLOG let’s make the simplifying assumption that `x` and `m` are greater than zero and that `x` is smaller than `m`.

Now that we have an integer library, let’s see if we can do some 2300-year-old mathematics. The Euclidean Algorithm, as you probably recall, dates from at least 300 BC and determines the greatest common divisor (GCD) of two natural numbers. The Extended Euclidean Algorithm solves Bézout’s Identity: given non-negative integers `a` and `b`, it finds integers `x` and `y` such that:

a * x + b * y == d

where `d` is the greatest common divisor of `a` and `b`. Obviously if we can write an algorithm that finds `x` and `y` then we can get `d`, and `x` and `y` have many useful properties themselves that we’ll explore in future episodes.
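As a sketch of where this is headed (in Python rather than this series’ integer library), here is the classic recursive formulation of the Extended Euclidean Algorithm:

```python
def extended_gcd(a, b):
    """Return (d, x, y) such that a*x + b*y == d == gcd(a, b)."""
    if b == 0:
        # gcd(a, 0) == a, and a*1 + 0*0 == a.
        return a, 1, 0
    # gcd(a, b) == gcd(b, a % b); rearrange the recursive Bezout
    # coefficients into coefficients for a and b.
    d, x, y = extended_gcd(b, a % b)
    return d, y, x - (a // b) * y

d, x, y = extended_gcd(240, 46)
print(d, x, y)                    # 2 -9 47
assert 240 * x + 46 * y == d == 2
```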