(This is a follow-up article to my post on generating random data that conforms to a given distribution; you might want to read it first.)

Here’s an interesting question I was pondering last week. The .NET base class library has a method `Random.Next(int)` which gives you a pseudo-random integer greater than or equal to zero and less than the argument. By contrast, the `rand()` function in the standard C library returns a random integer between 0 and `RAND_MAX` inclusive, which is typically 32767. A common technique for generating random numbers in a particular range is to use the remainder operator:

int value = rand() % range;

However, this almost always introduces some bias that causes the distribution to stop being uniform. Do you see why?
