# Fixing Random, part 9

Last time on FAIC I sketched out the “alias method”, which enables us to implement sampling from a weighted discrete distribution in constant time. The implementation is slightly tricky, but we’ll go through it carefully.

The idea here is:

• If we have n weights then we’re going to make n distributions.
• Every distribution is either a singleton that returns a particular number, or we use a projection on a Bernoulli to choose between two numbers.
• To sample, we uniformly choose from our n distributions, and then we sample from that distribution, so each call to Sample on the weighted integer distribution does exactly two samples of other, simpler distributions.

How are we going to implement it? The key insight is: weights which are lower than the average weight “borrow” from weights which are higher than average.

Call the sum of all weights s. We start by multiplying each weight by n, which gives us s * n “weight points” to allocate; each of the n individual distributions will have a total weight of s points.

private WeightedInteger(IEnumerable<int> weights)
{
this.weights = weights.ToList();
int s = this.weights.Sum();
int n = this.weights.Count;
this.distributions = new IDistribution<int>[n];

All right, I am going to keep track of two sets of “weight points”: elements where the unallocated “weight points” are lower than s, and elements where they are higher than s. If we’ve got a weight that is exactly equal to the average weight then we can just make a singleton.

var lows = new Dictionary<int, int>();
var highs = new Dictionary<int, int>();
for (int i = 0; i < n; i += 1)
{
int w = this.weights[i] * n;
if (w == s)
this.distributions[i] = Singleton<int>.Distribution(i);
else if (w < s)
lows[i] = w;
else
highs[i] = w;
}

If every element’s weight was average, then we’re done; the dictionaries are empty. (And we should not have created a weighted integer distribution in the first place!)

If not, then some of them must have been above average and some of them must have been below average. This isn’t Lake Wobegon; not everyone is above average. Therefore if we have more work to do then there must be at least one entry in both dictionaries.

What we’re going to do is choose (based on no criteria whatsoever) one “low” and one “high”, and we’re going to transfer points from the high to the low. We’ll start by grabbing the relevant key-value pairs and removing them from their respective dictionaries.

while(lows.Any())
{
var low = lows.First();
lows.Remove(low.Key);
var high = highs.First();
highs.Remove(high.Key);

The high has more than s points, and the low has less than s points; even if the low has zero points, the most we will transfer out of the high is s, so the high will still have some points when we’re done. The low will be full up, and we can make a distribution for it:

int lowNeeds = s - low.Value;
this.distributions[low.Key] =
Bernoulli.Distribution(low.Value, lowNeeds)
.Select(x => x == 0 ? low.Key : high.Key);

Even if the low column value is zero, we’re fine; Bernoulli.Distribution will return a Singleton. That’s exactly what we want; there is zero chance of getting the low column number.

If the low is not zero-weighted, then we want the low-to-high odds to be in the ratio of the low points to the stolen high points, such that their sum is s; remember, we have n*s points to distribute, and each row must get s of them.

We just filled in the distribution for the “low” element. However, the “high” element still has points, remember. There are three possible situations:

1. the high element has exactly s points remaining, in which case it can become a singleton distribution
2. the high element has more than s points, so we can put it back in the high dictionary
3. the high element has one or more points, but fewer than s points, so it goes in the low dictionary

int newHigh = high.Value - lowNeeds;
if (newHigh == s)
this.distributions[high.Key] =
Singleton<int>.Distribution(high.Key);
else if (newHigh < s)
lows[high.Key] = newHigh;
else
highs[high.Key] = newHigh;
}
}

And we’re done.

• Every time through the loop we remove one element from both dictionaries, and then we add back either zero or one elements, so our net progress is either one or two distributions per iteration.
• Every time through the loop we account for either s or 2s points and they are removed from their dictionaries, and we maintain the invariant that the low dictionary is all the elements with fewer than s points, and the high dictionary is all those with more.
• Therefore we will never be in a situation where there are lows left but no highs, or vice versa; they’ll run out at the same time.
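The bookkeeping above translates almost directly into code. Here is a compact sketch in Python rather than the series’ C# (the tuple layout, function names, and `rng` parameter are my own, not from the article):

```python
import random

def make_alias_table(weights):
    # Scale each weight by n so there are s * n "points" to hand out;
    # each of the n buckets must end up holding exactly s points.
    n = len(weights)
    s = sum(weights)
    scaled = [w * n for w in weights]
    # A bucket is (primary, alias, points): primary is returned with
    # probability points / s, otherwise the alias is returned.
    buckets = [None] * n
    lows, highs = {}, {}
    for i, w in enumerate(scaled):
        if w == s:
            buckets[i] = (i, i, s)
        elif w < s:
            lows[i] = w
        else:
            highs[i] = w
    while lows:
        li, lw = lows.popitem()
        hi, hw = highs.popitem()
        buckets[li] = (li, hi, lw)   # the low bucket is now full
        new_high = hw - (s - lw)     # the high donated s - lw points
        if new_high == s:
            buckets[hi] = (hi, hi, s)
        elif new_high < s:
            lows[hi] = new_high
        else:
            highs[hi] = new_high
    return buckets, s

def sample(buckets, s, rng=random):
    # Exactly two cheap samples: uniformly pick a bucket, then flip a
    # Bernoulli with odds points : (s - points) inside that bucket.
    primary, alias, points = rng.choice(buckets)
    return primary if rng.randrange(s) < points else alias
```

For weights 10, 11, 5 the three buckets each hold 26 points, and summing each outcome’s points across all buckets recovers the scaled weights 30, 33, 15.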

We’ve spent O(n) time (and space) computing the distributions; we can now spend O(1) time sampling. First we uniformly choose a distribution, and then we sample from it, so we have a grand total of two samples, and both are cheap.

public int Sample()
{
int i = SDU.Distribution(0, weights.Count - 1).Sample();
return distributions[i].Sample();
}

Aside: I’ve just committed the sin I mentioned a few episodes ago, of sampling from an array by emphasizing the mechanism in the code rather than building a distribution. Of course we could have made a “meta distribution” in the constructor with distributions.ToUniform() and then this code would be the delightful

public int Sample() => this.meta.Sample().Sample();

What do you think? Is it better in this case to emphasize the mechanism, since we’re in mechanism code, or to do the double sample?

We’ll see in a later episode another elegant way to represent this double-sampling operation, so stay tuned.

Let’s try an example with zeros:

WeightedInteger.Distribution(10, 0, 0, 11, 5).Histogram()

Sure enough:

```
0|************************************
3|****************************************
4|******************
```

The alias method is easy to implement and cheap to execute; I quite like it.

Remember, the whole point of this thing is to stop messing around with System.Random when you need randomness. We can now take any set of integer weights and produce a random distribution that conforms to those weights; this is really useful in games, simulations, and so on.

Exercise:  I mentioned a long time ago that I wanted to do all my weights in integers rather than doubles. It is certainly possible to make a weighted integer distribution where the weights are doubles; doing so is left as an exercise.

Be warned: this is a slightly tricky exercise. You have to be very careful when implementing the alias method in doubles because the algorithm I gave depends for its correctness on two properties: (1) every time through the loop, there are exactly s or 2s points removed from the total in the dictionaries, and (2) we are never in a situation where the “high” dictionary still has elements but the “low” dictionary does not, or vice versa.

Those conditions are easily seen to be met when all the quantities involved are small integers. Both those conditions are easily violated because of rounding errors when using doubles, so be extremely careful if you try to implement this using doubles.
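A one-line illustration of the hazard (in Python, but the same is true of C# doubles): the exact bookkeeping the loop relies on simply does not hold for floating-point sums.

```python
# The alias construction's bookkeeping relies on exact arithmetic.
# With binary floating point even simple sums are inexact, so the
# "s or 2s points removed per iteration" invariant can silently drift.
total = 0.1 + 0.2
print(total)  # 0.30000000000000004
assert total != 0.3
```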

Coming up on FAIC: there are (at least) two ways to apply “conditional” probabilistic reasoning; we’ll explore both of them.

# Fixing Random, part 8

Last time on FAIC we sketched out an O(log n) algorithm for sampling from a weighted distribution of integers by implementing a discrete variation on the “inverse transform” method where the “inverse” step involves a binary search.

Can we do better than O(log n)? Yes — but we’ll need an entirely different method. I’ll sketch out two methods, and implement one this time, and the other in the next exciting episode.

Again, let’s suppose we have some weights, say 10, 11, 5. We have three possible results, and the highest weight is 11. Let’s construct a 3 by 11 rectangle that looks like our ideal histogram; I’ll put dashes in to more clearly indicate the “spaces”:

```0|**********-
1|***********
2|*****------
```

Here’s an algorithm for sampling from this distribution:

• Uniformly choose a random row and column in the rectangle; it’s easy to choose a number from 0 to 2 for the row and a number from 0 to 10 for the column.
• If that point is a *, then the sample is the generated row number.
• If the point is a dash, try again.

Basically, we’re throwing darts at the rectangle, and the likelihood of hitting a valid point on a particular row is proportional to the probability of that row.

In case that’s not clear, let’s work out the probabilities. To throw our dart, first we’ll pick a row, uniformly, and then we’ll pick a point in that row, uniformly. We have a 1/3 * 10/11 = 10/33 chance of hitting a star from row 0, a 1/3 * 11/11 = 11/33 chance of hitting a star from row 1, a 1/3 * 5/11 = 5/33 chance of hitting a star from row 2, and that leaves a 7/33 chance of going again. We will eventually pick a *, and the values generated will conform to the desired distribution.
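The dart-throwing procedure is only a few lines; here it is sketched in Python rather than the series’ C# (the function name and `rng` parameter are mine):

```python
import random

def rejection_sample(weights, rng=random):
    # Throw a dart at an n-row by max(weights)-column rectangle: a hit
    # in row i's first weights[i] columns ("stars") yields sample i;
    # a miss ("dash") means we throw again.
    n = len(weights)
    mx = max(weights)
    while True:
        row = rng.randrange(n)
        col = rng.randrange(mx)
        if col < weights[row]:
            return row
```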

Let’s implement it. We’ll throw away our previous attempt and start over. (No pun intended.)

private WeightedInteger(List<int> weights)
{
this.weights = weights;
this.distributions =
new List<IDistribution<int>>(weights.Count);
int max = weights.Max();
foreach(int w in weights)
distributions.Add(Bernoulli.Distribution(w, max - w));
}

All right, we have three distributions; in each, a zero is success and a one is failure. In our example of weights 10, 11, 5, the first distribution is “10 to 1 odds of success”, the second is “always success”, and the third is “5 to 6 odds of success”. And now we sample in a loop until we succeed: each time through the loop we uniformly choose one of the distributions and then sample from it.

public int Sample()
{
var rows = SDU.Distribution(0, weights.Count - 1);
while (true)
{
int row = rows.Sample();
if (distributions[row].Sample() == 0)
return row;
}
}

We do two samples per loop iteration; how many iterations are we likely to do? In our example we’ll get a result on the first iteration 26/33 of the time, because we have 26 hits out of 33 possibilities. That’s 79% of the time. We get a result after one or two iterations 95% of the time. The vast majority of the time we are going to get a result in just a handful of iterations.

But now let’s think again about pathological cases. Remember our distribution from last time that had 1000 weights: 1, 1, 1, 1, …, 1, 1001. Consider what that histogram looks like.

```
  0|*------...---  (1 star, 1000 dashes)
  1|*------...---  (1 star, 1000 dashes)
[...]
998|*------...---  (1 star, 1000 dashes)
999|*******...***  (1001 stars, 0 dashes)
```

Our first example has 26 stars and 7 dashes. In our pathological example there will be 2000 stars and 999000 dashes, so the probability of exiting the loop on any particular iteration is about one in 500. We are typically going to loop hundreds of times in this scenario! This is far worse than our O(log n) option in pathological cases.
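A quick sanity check on that “one in 500”, in Python arithmetic; the numbers come straight from the weights above:

```python
# 999 rows contribute one star each; the last row contributes 1001.
stars = 999 * 1 + 1001        # 2000 stars
cells = 1000 * 1001           # 1001000 cells in the rectangle
print(cells - stars)          # 999000 dashes
print(cells / stars)          # 500.5 expected darts per sample
```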

This algorithm is called “rejection sampling” (because we “reject” the samples that do not hit a *) and it works very well if all the weights are close to the maximum weight, but it works extremely poorly if there is a small number of high-weight outputs and a large number of low-weight outputs.

Fortunately there is a way to fix that problem. I’m going to reject rejection sampling, and move on to our third and final technique.

Let’s re-draw our original histogram, but I’m going to make two changes: first, instead of stars I’m going to fill in the number sampled; second, I’m going to make it a 3 by 33 rectangle by tripling the size of every row.

```0|000000000000000000000000000000---
1|111111111111111111111111111111111
2|222222222222222------------------
```

Plainly this is logically no different; we could “throw a dart”, and the number that we hit is the sample; if we hit a dash, we go again. But we still have the problem that 21 out of 99 times we’re going to hit a dash.

My goal is to get rid of all the dashes, but I’m going to start by trying to get 7 dashes in each row. There are 21 dashes available, three rows, so that’s seven in each row.

To achieve that, I’m going to first pick an “excessive” row (too many numbers, too few dashes) and a “deficient” row (too few numbers, too many dashes) and move some of the numbers from the excessive row to the deficient row, such that the deficient row now has exactly seven dashes. For example, I’ll move eleven of the 0s into the 2 row, and swap eleven of the 2 row’s dashes into the 0 row.

```0|0000000000000000000--------------
1|111111111111111111111111111111111
2|22222222222222200000000000-------
```

We’ve achieved our goal for the 2 row. Now we do the same thing again. I’m going to move seven of the 1s into the 0 row:

```0|00000000000000000001111111-------
1|11111111111111111111111111-------
2|22222222222222200000000000-------
```

We’ve achieved our goal. And now we can get rid of the dashes without changing the distribution:

```0|00000000000000000001111111
1|11111111111111111111111111
2|22222222222222200000000000
```

Now we can throw darts and never get any rejections!

The result is a grid where we can, again, pick a row, and then from within that row sample which result we want; there are only ever one or two distinct numbers in a row, so each row can be represented by a Bernoulli or Singleton distribution. Therefore we can sample from this distribution by doing two samples: a uniform integer to pick the row, and a second sample from the distribution associated with that row.

Aside: You might wonder why I chose to move eleven 0s and then seven 1s. I didn’t have to! I also could have moved eleven 1s into the 2 row, and then four 0s into the 1 row. Doesn’t matter; either would work. As we’ll see in the next episode, as long as you make progress towards a solution, you’ll find a solution.

This algorithm is called the “alias method” because some portion of each row is “aliased” to another row. It’s maybe not entirely clear how to implement this algorithm but it’s quite straightforward if you are careful.

Next time on FAIC: The alias method implementation.

# Fixing Random, part 7

A few episodes back I made a version of the Bernoulli distribution that took integer “odds”. That is, the distribution randomly produces either zero or one, and we can say “produce on average 23 zeros for every 17 ones” or whatever.

An obvious extension of this would be to produce 0, 1, or 2 randomly, similarly with some ratios: say, average 10 zeros, 11 ones, 5 twos out of each 26 samples in the long run. Of course, there’s no need to stop at three outputs; basically, we want the distribution of any number from 0 to n, with any integer ratios between them. This is a non-uniform discrete distribution; if a Bernoulli distribution is flipping an unfair coin, this is rolling an unfair n-sided die.

We’ve seen that we can do that by making an array with 10 zeros, 11 ones and 5 twos, and then calling ToUniform on that, but what if we wanted weights that were 437:563:501:499 or something? It seems wasteful to make an array with two thousand elements just to solve this problem.

Let’s try to write the code to solve this problem without thinking too hard about it, and see where we get stuck:

public sealed class WeightedInteger : IDiscreteDistribution<int>
{
public static IDiscreteDistribution<int> Distribution(
params int[] weights) =>
Distribution((IEnumerable<int>)weights);
public static IDiscreteDistribution<int> Distribution(
IEnumerable<int> weights)
{
List<int> w = weights.ToList();
if (w.Any(x => x < 0) || !w.Any(x => x > 0))
throw new ArgumentException();
if (w.Count == 1)
return Singleton<int>.Distribution(0);
if (w.Count == 2)
return Bernoulli.Distribution(w[0], w[1]);
return new WeightedInteger(w);
}
private WeightedInteger(List<int> weights)
{
this.weights = weights;
}
public IEnumerable<int> Support() =>
Enumerable.Range(0, weights.Count).
Where(x => weights[x] != 0);
public int Weight(int i) =>
0 <= i && i < weights.Count ? weights[i] : 0;

No surprises there! This is all straightforward boilerplate code like we’ve seen before.

Exercise: There are a few places where we could be a little smarter; for instance, if all the weights are zero except one, then we could make a singleton distribution for that case as well. If the last weight is zero, we can discard that weight. Give it a shot.

The interesting question is: how do we implement Sample? It is by no means obvious, at least, not to me. Over the next few episodes, we’ll look at a number of techniques for solving this problem. Today: the “inverse” technique.

Long-time readers of my blog might at this point be remembering my 2012 article on the “inverse transform sampling” technique: “how do you generate a non-uniform random number when given a standard uniform distribution?” In brief:

• Get the probability distribution function.
• Integrate it to get the cumulative distribution — that is the function where c(x) is the probability that a sample is less than x.
• The cumulative distribution is monotone nondecreasing and always between 0 and 1, so it has an inverse which we call the quantile function. Find it.
• If you sample from a standard continuous uniform distribution and then apply the quantile function to the result, the sample will conform to the desired distribution.

The examples I gave in that article were continuous distributions, but this technique works equally well with discrete distributions. (For discrete quantities we call the “probability distribution” function the “probability mass” function, for no particularly good reason I’m aware of.) In our case today we can simply treat the mass function as a non-normalized distribution. For the weights 10, 11, 5:

• The non-normalized mass function assigns the integer weights 10, 11 and 5 to the values 0, 1 and 2.
• Since the mass function is just a bunch of integer weights for integer values, the cumulative distribution function is a monotone step function, also from integers to integers: 10, 21, 26. (That is, 10, 10+11, 10+11+5.)
• The quantile function is the inverse of that function, which is another step function from integers to integers.
• We just evaluate that function at a uniformly random point, and we’re done. That is, we pick a uniform integer from 1 to 26 and look at the quantile function: if it is 1 through 10, we choose zero; if it is 11 through 21, we choose one; and if it is 22 through 26, we choose two.

We get a number out from 0, 1 and 2 at the ratios of 10:11:5, as desired. And we did everything in integers; no pesky double arithmetic to get in the way. Are we done?

Sadly, no; the question is thoroughly begged! We already knew how to do that; we just create an array with ten zeros, eleven ones and five twos and sample from it. We’re trying to avoid that solution.

But maybe we can use these ideas to come up with a more efficient way to do the same thing.

The first thing we’ll do is compute the cumulative distribution in the constructor:

private WeightedInteger(List<int> weights)
{
this.weights = weights;
this.cumulative = new List<int>(weights.Count);
cumulative.Add(weights[0]);
for (int i = 1; i < weights.Count; i += 1)
cumulative.Add(cumulative[i - 1] + weights[i]);
}

So, if we start with weights 10, 11, 5, we have cumulative weights 10, 21, 26.

Again, remember what this means. Based on the weights, the probability of getting 0 is 10/26, the probability of getting 1 is 11/26, and the probability of getting 2 is 5/26.  Therefore the probability of getting 0 or less is 10/26, the probability of getting 1 or less is 21/26, and the probability of getting 2 or less is 26/26. That’s how we compute the weights of the cumulative distribution.

Now we need to invert that cumulative distribution function. That is the function q(x) where q(1 to 10) = 0, q(11 to 21) = 1, q(22 to 26) = 2. We need to generate a random integer, and then compute this function, but how do we compute this function?

We generate a random number from 1 to 26, and then search the cumulative list for it. We’ll do a linear search for the leftmost bucket where the cumulative weight is not less than the sample:

public int Sample()
{
int total = cumulative[cumulative.Count - 1];
int uniform = SDU.Distribution(1, total).Sample();
int result = 0;
while (cumulative[result] < uniform)
result += 1;
return result;
}

You can verify for yourself that if we sample 1-10 we get 0, if we sample 11-21 we get 1, and if we sample 22-26 we get 2.

Is that it? Can it really be that easy? I think it can, but let’s try it out:

WeightedInteger.Distribution(10, 11, 5).Histogram()

```
0|************************************
1|****************************************
2|******************
```

Looking good!

Obviously it is O(n) to compute the cumulative weights, but we only have to do that once, in the constructor. The more worrisome problem is that searching for the result is O(n) in the number of weights.

If the number of weights is three or five or twenty, it’s probably not a problem. But if it’s a hundred or a thousand, a linear search is going to be problematic. (Then again, if it is more than a thousand, our assumption that discrete distributions have a small support is probably going out the window too.)

But of course we know how to improve the search time when we’re looking for an item in a sorted list; we can binary search it! As the documentation notes, the BinarySearch method of List returns “the zero-based index of `item` in the sorted list, if `item` is found; otherwise, a negative number that is the bitwise complement of the index of the next element that is larger than `item`” which is exactly what we want.

• If we generate 10, 21 or 26 then we get the result we want immediately: 0, 1 or 2.
• If we generate 1 through 9, we get ~0, because the next larger cumulative weight, 10, is at index 0
• similarly 11 through 20 gives us ~1
• similarly 22 through 25 gives us ~2

But ~ is its own inverse, so we can just ~ the result if negative and get the answer we want. Let’s implement it:

public int Sample()
{
int total = cumulative[cumulative.Count - 1];
int uniform = SDU.Distribution(1, total).Sample();
int result = cumulative.BinarySearch(uniform);
return result >= 0 ? result : ~result;

}

Seems legit.

Right?

Hmm.

Exercise: There’s a bug in the code above. Do you see it?

Give it some thought and then scroll down.

.

.

.

.

.

.

.

.

.

Since we are no longer looking “from the left”, we can accidentally choose a bucket that has a zero probability. For example, if our weights are 10, 0, 0, 11, 5, then the cumulative weights are 10, 10, 10, 21, 26. If 10 is our uniform sample then we must ensure that the binary search finds the leftmost 10 and says “0”, and not “1” or “2”.

There are a variety of techniques you could employ to solve this problem efficiently, and I’m not going to dig into them here. Give it a shot!
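One such technique, sketched here in Python rather than C#: a “leftmost” binary search. Python’s bisect_left lands on the first of any run of equal cumulative weights, so a zero-weight bucket (which produces a repeated cumulative value) can never be chosen. The function names and `rng` parameter are mine:

```python
import bisect
import random
from itertools import accumulate

def make_cumulative(weights):
    # 10, 11, 5 becomes 10, 21, 26.
    return list(accumulate(weights))

def sample_weighted(cumulative, rng=random):
    # Uniform integer in 1..total, then find the LEFTMOST bucket whose
    # cumulative weight is >= the sample; bisect_left stops before any
    # run of equal values, skipping zero-weight buckets entirely.
    total = cumulative[-1]
    u = rng.randrange(1, total + 1)
    return bisect.bisect_left(cumulative, u)
```

With weights 10, 0, 0, 11, 5 the cumulative list is 10, 10, 10, 21, 26, and a sample of 10 finds index 0, never 1 or 2.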

Let’s assume that we managed to solve our binary searching bug. This reduces our cost from O(n) to O(log n) which is very good, and for practical purposes, probably good enough. After all, with a thousand weights, that’s at most 10 comparisons, which should be pretty quick.

However, let’s think about some pathological cases.

Suppose we have a probability distribution with a thousand elements, distributed like this: 1, 1, 1, 1, 1, … 1, 1001. The cumulative distribution will be 1, 2, 3, … 999, 2000. But 50% of the time on average we will uniformly generate a number greater than 999, so 50% of the time our binary search will have to do all ten comparisons to discover that the rightmost bucket is the one we want. Can we do better?

Yes we can. Instead of building a list and binary searching that, we could build an optimal binary search tree and search that. We know ahead of time the probability of choosing each element in the binary search tree, so we can move the most common buckets towards the root of the tree, and then build a balanced binary search tree for the uncommon items in the leaves; there is an O(n²) algorithm for doing so, but I’m not going to show it here; see the Wikipedia article for details.

Alternatively, we could use a self-tuning binary tree such as a splay tree, and have it reorganize itself so that the most commonly searched for buckets appear near the root.

If we pursued this option of building a binary search tree, we could also prune zero-probability buckets from the tree entirely, eliminating the problem I mentioned above of accidentally finding a zero-probability bucket.

By using these techniques, we can get the average time spent sampling very low indeed, even for pathological cases, but it will still take longer to sample as the number of weights increases. (And we have a potentially difficult quadratic algorithm to build the optimized tree, though that only happens once.) Can we find a simple algorithm where the sample time is constant no matter how many weights there are?

Next time on FAIC: We’ll make two more attempts to find a more efficient sampling technique for this distribution.

# Fixing Random, part 6

Last time on FAIC I showed how we could implement more of the standard, straightforward probability distributions, like the Bernoulli distribution; the episode before I showed the standard discrete uniform distribution, also known as “roll an n-sided fair die”. But typically when we are working on a line-of-business program, we’re working on objects that are meaningful in that line of business, and not necessarily on integers. Just to pick a silly example, suppose I want to uniformly choose between a cat, a dog and a goldfish?

using SDU = StandardDiscreteUniform;

var animals = new List<Animal>()
{ new Cat(), new Dog(), new Goldfish() };

Traditionally, as I noted a while back, we’d write some mechanism-heavy code like:

var choice = animals[random.Next(0, animals.Count)];

Now, I sincerely hope you’re not satisfied with:

var choice = animals[
SDU.Distribution(0, animals.Count - 1).Sample()];

If I tried to push that nonsense on you, you’d be quite justified in concluding that this whole project of mine to motivate a better “Random” is a cure that is worse than the disease. As is often the case, the problem here is that we’ve failed to push the “mechanism” code into a helper class, so let’s do that:

// This is wrong!
public sealed class DiscreteUniform<T> :
IDiscreteDistribution<T>
{
public static DiscreteUniform<T> Distribution(
IEnumerable<T> items) =>
new DiscreteUniform<T>(items);
private DiscreteUniform(IEnumerable<T> items)
{
this.support = items.ToList();
this.sdu = SDU.Distribution(0, support.Count - 1);
}
public IEnumerable<T> Support() => support;
public T Sample() => support[sdu.Sample()];
public int Weight(T t) => this.support.Contains(t) ? 1 : 0;

public override string ToString() =>
$"DiscreteUniform[{string.Join(",", support)}]";
}

Exercise: That code is wrong in a number of ways; what are they? Give it some thought and then scroll down.

.

.

.

.

.

.

1. We do not handle the case where the sequence is empty, which should be an error, or a single item, which would be better represented as a Singleton<T>.
2. A buggy or hostile caller could call Support(), cast it to List<T>, and then mutate the list. Defense in depth!
3. Suppose the array contains two references to the same dog, and a cat and a fish. If we ask what the weight is for the cat, we get 1; if we ask what the weight of the dog is, we get 1, but the dog is twice as likely to be chosen.
4. Also in that scenario: the support contains the same dog twice. We expect the support to contain exactly one of each possible value.

We’ll fix all these problems in our final implementation at the bottom of this post.

Suppose we’ve managed to fix the problems identified above. We now create a helper method:

public static IDiscreteDistribution<T> ToUniform<T>(
this IEnumerable<T> items) =>
DiscreteUniform<T>.Distribution(items);

And now our proposed code is much nicer:

var animals = new List<Animal>()
{ new Cat(), new Dog(), new Goldfish() };
var choice = animals.ToUniform().Sample();

That looks much nicer, but… I still don’t like it.

Oh, sure, I like the call site, with the beautiful, readable fluent extension methods and whatnot. What I don’t like is the fact that I just built yet another single-purpose wrapper class. This wrapper class is only good for making a uniform distribution of an arbitrary sequence. Suppose I have a non-uniform distribution of integers from 0 to n — say, a binomial distribution — and I want to map a list of animals to that distribution; am I going to have to write a “binomial distribution on sequence” class that duplicates almost all the logic of the wrapper class above?

Exercise: Implement the binomial distribution; this is the extension of the Bernoulli distribution where, again, we have weights for a possibly-unfair coin, but we also have a parameter n which is the number of coin flips to make. The result is the total number of heads. Sampling is straightforward, but can you see how to compute the weights?

This seems like an opportunity for a more sophisticated wrapper class, parameterized in the underlying distribution on integers. Let’s tweak the class we just wrote:

public sealed class IntegerDistributionWrapper<T> :
IDiscreteDistribution<T>
{
public static IntegerDistributionWrapper<T> Distribution(
IDiscreteDistribution<int> d,
IEnumerable<T> items)
{

// blah blah blah
}
private IntegerDistributionWrapper(
IDiscreteDistribution<int> d,
IEnumerable<T> items)
{
// you get the idea
}
public T Sample() => items[d.Sample()];
// and so on
}

I waved my hands a lot there because, though this is an improvement, I still don’t like it. We’re getting better, but I want more generality for less cost. What are we doing here really?

• Sampling an integer from a distribution.
• Projecting that integer onto a value taken from a list.

Why does the projection have to be “value taken from a list”? And for that matter, why does the value sampled from the underlying distribution have to be an integer to begin with? We can write almost the same code, but make it far more general:

public sealed class Projected<A, R> :
IDiscreteDistribution<R>
{
public static IDiscreteDistribution<R> Distribution(
IDiscreteDistribution<A> underlying,
Func<A, R> projection)
{
var result = new Projected<A, R>(underlying, projection);
if (result.Support().Count() == 1)
return Singleton<R>.Distribution(
result.Support().First());
return result;
}
private Projected(
IDiscreteDistribution<A> underlying,
Func<A, R> projection)
{
this.underlying = underlying;
this.projection = projection;
this.weights = underlying.Support().
GroupBy(
projection,
a => underlying.Weight(a)).
ToDictionary(g => g.Key, g => g.Sum());
}
public R Sample() => projection(underlying.Sample());
public IEnumerable<R> Support() => this.weights.Keys;
public int Weight(R r) =>
this.weights.GetValueOrDefault(r, 0);
}

Study that implementation and make sure it makes sense to you; the crux of the whole thing is in the constructor, where we compute the weights of the resulting distribution.
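The same construction is easy to see in miniature. Here is a Python sketch of the projection idea; the class and method names are mine, and the toy Uniform class stands in for the series’ StandardDiscreteUniform:

```python
import random
from collections import defaultdict

class Uniform:
    # Toy stand-in for a standard discrete uniform over lo..hi inclusive.
    def __init__(self, lo, hi, rng=random):
        self.lo, self.hi, self.rng = lo, hi, rng
    def support(self): return range(self.lo, self.hi + 1)
    def weight(self, a): return 1 if self.lo <= a <= self.hi else 0
    def sample(self): return self.rng.randint(self.lo, self.hi)

class Projected:
    # Wrap an underlying distribution and a projection; precompute the
    # weight of each projected value by summing the weights of all the
    # underlying values that collide onto it.
    def __init__(self, underlying, projection):
        self.underlying = underlying
        self.projection = projection
        self.weights = defaultdict(int)
        for a in underlying.support():
            self.weights[projection(a)] += underlying.weight(a)
    def support(self): return self.weights.keys()
    def weight(self, r): return self.weights.get(r, 0)
    def sample(self): return self.projection(self.underlying.sample())
```

With this in hand, projecting a uniform distribution on indices through a list that contains the same dog twice gives that dog weight 2, and the support is automatically deduplicated.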

You’ve probably noticed that I’ve fixed all the problems identified above:

• We return a singleton when possible
• We don’t need to worry about the distribution being empty because presumably the underlying distribution is not empty
• The support set is an immutable deduplicated keyset
• We compute the weights up front.

Unfortunately this means O(support) extra storage, but I can live with that.

Exercise: there are a number of optimizations that could be made here for the common case where the number of “colliding” elements in the support is small or zero, to reduce the amount of extra storage. Try implementing some of them, see if you get a win.

Now that we have a projection wrapper, we can delete our discrete uniform wrapper and integer wrapper classes entirely. That means we have to rewrite our helper method:

public static IDiscreteDistribution<T> ToUniform<T>(
this IEnumerable<T> items)
{
var list = items.ToList();
return Projected<int, T>.Distribution(
SDU.Distribution(0, list.Count - 1),
i => list[i]);
}

But again, this seems like I’m missing some generality.

Hmm.

Maybe I should also write an extension method on distributions that makes a projected distribution. It’s an easy helper method after all:

public static IDiscreteDistribution<R> MakeProjection<A, R>(
this IDiscreteDistribution<A> d,
Func<A, R> projection) =>
Projected<A, R>.Distribution(d, projection);

Wait a minute… that looks very familiar.

Scroll down when you see it.

.

.

.

.

.

.

.

Good heavens, we’ve re-invented Select. Again.

public static IDiscreteDistribution<R> Select<A, R>(
this IDiscreteDistribution<A> d,
Func<A, R> projection) =>
Projected<A, R>.Distribution(d, projection);

And now this helper can be rewritten into a call to Select:

public static IDiscreteDistribution<T> ToUniform<T>(
this IEnumerable<T> items)
{
var list = items.ToList();
return SDU.Distribution(0, list.Count - 1)
.Select(i => list[i]);
}

But you know what this means: I have a Select method with the right signature, which means I can use my beloved query comprehension syntax! That last line could be

return
from i in SDU.Distribution(0, list.Count - 1)
select list[i];

And now you know why, a few episodes back, I said that I was not going to make IDistribution<T> an extension of IEnumerable<T>; doing so would just cause confusion between the Select operation on distributions and the Select operation on sequences.

Aside: If you’ve been following along you’ve noticed that this implementation takes a list, then turns it into a list, and then turns it into a list again, and then turns it into a dictionary. This seems “not cheap”. And indeed it is not, but remember, this is pedagogical code, not industrial code. Were I writing this library for industry, I’d be optimizing it much more to avoid collection pressure.

The idea here is to sketch out what could be done, not exactly how we’d do it. When I think about performance in this series, I’m going to be concentrating on things like eliminating arbitrarily expensive loops, and not on pragmatic but mundane practicalities like eliminating sources of collection pressure.

Finally, let’s just make sure that everything is working as expected:

var cat = new Cat();
var dog = new Dog();
var fish = new Goldfish();
var animals = new List<Animal>() { cat, dog, dog, fish };
Console.WriteLine(animals.ToUniform().Histogram());
Console.WriteLine(animals.ToUniform().ShowWeights());

Sure enough, we get the correct distribution:

```
     Cat|*******************
     Dog|****************************************
Goldfish|*******************
     Cat:1
     Dog:2
Goldfish:1
```

Interesting! We started by trying to make a uniform distribution and ended up with a correct non-uniformly-weighted distribution. Can we do a better job of building such a distribution without having to make an array that has duplicates?

Next time on FAIC: we’ve already seen how to represent a “fair die roll” distribution and a Bernoulli distribution where we give the “odds” that either zero or one happens as a ratio of integers; we’ll extend those concepts to any number of weighted outcomes, and discuss what data structure we can use to efficiently sample that distribution.

# Fixing Random, part 5

Last time on FAIC I implemented our first discrete distribution, the “choose an integer between min and max” distribution. I thought I might knock out a couple more of the easy discrete distributions. The easiest discrete distribution of all is of course the trivial distribution:

public sealed class Singleton<T> : IDiscreteDistribution<T>
{
public static Singleton<T> Distribution(T t) =>
new Singleton<T>(t);
private Singleton(T t) => this.t = t;
public T Sample() => t;
public IEnumerable<T> Support()
{
yield return t;
}
public int Weight(T t) =>
EqualityComparer<T>.Default
.Equals(this.t, t) ? 1 : 0;

public override string ToString() =>
$"Singleton[{t}]";
}

That is, the probability distribution where 100% of the time it returns the same value. You’ll also sometimes see distributions like this called the “Dirac delta” or the “Kronecker delta”, but we’re not doing quantum physics here, so I’m going to just call this the singleton distribution and be done with it.

Aside: I wish the Enumerable class had the corresponding method: take an item, return the sequence containing only that item. As we just saw, it’s easy to implement; because it is not in the framework, I’ve implemented it literally dozens of times! It is weird to me that the “extract the value of a singleton sequence” operation is a member of Enumerable, but the inverse operation is not.
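For what it's worth, the missing inverse really is a one-liner; here's the version I tend to write (EnumerableOfOne is a hypothetical name of my choosing, not a framework member):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class EnumerableExtensions
{
    // The inverse of Single(): wrap one item in a one-element sequence.
    public static IEnumerable<T> EnumerableOfOne<T>(T t)
    {
        yield return t;
    }

    static void Main()
    {
        // Round-trips: Single() extracts the value we wrapped.
        Console.WriteLine(EnumerableOfOne(42).Single()); // 42
    }
}
```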

Aside: This is the first distribution we’ve seen that does not necessarily have some sort of number as its output. You might be thinking that maybe our distribution type should never have been generic in the first place, if the only distributions I’m going to come up with are either numeric-valued or trivial. But worry not! In the next episodes we’ll see how we can naturally use more complex types in probability distributions.

Let’s look at another integer-valued distribution. The Bernoulli distribution considers the question “what if we flipped a possibly-unfair coin, and scored 1 for a head and 0 for a tail?” The parameter to the distribution is traditionally the probability of heads in our coin, between 0.0 and 1.0. However, as I noted last time, at least for now I’m going to try to avoid double weights. Instead, we’ll think of this distribution as “odds”. Instead of saying “0.7 chance of getting a 1”, we’ll say “we get three zeros for every seven ones”:

public sealed class Bernoulli :
IDiscreteDistribution<int>
{
public static IDiscreteDistribution<int> Distribution(
int zero, int one)
{
if (zero < 0 || one < 0 || zero == 0 && one == 0)
throw new ArgumentException();
if (zero == 0) return Singleton<int>.Distribution(1);
if (one == 0) return Singleton<int>.Distribution(0);
return new Bernoulli(zero, one);
}
public int Zero { get; }
public int One { get; }
private Bernoulli(int zero, int one)
{
this.Zero = zero;
this.One = one;
}
public int Sample() =>
(SCU.Distribution.Sample() <=
((double)Zero) / (Zero + One)) ? 0 : 1;
public IEnumerable<int> Support() =>
Enumerable.Range(0, 2);
public int Weight(int x) => x == 0 ? Zero : x == 1 ? One : 0;

public override string ToString() =>
$"Bernoulli[{this.Zero}, {this.One}]";
}

And sure enough, if we graph it out:

Bernoulli.Distribution(1, 3).Histogram()

We get what you’d expect: three times as many heads as tails:

```
0|*************
1|****************************************
```
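The odds interpretation is also easy to sanity-check without any of the library types. This standalone sketch (a plain simulation, not the Bernoulli class above) flips the same possibly-unfair coin directly and counts the outcomes:

```csharp
using System;

static class BernoulliCheck
{
    static void Main()
    {
        // Odds of 1 zero to 3 ones, as in Bernoulli.Distribution(1, 3).
        int zero = 1, one = 3;
        var random = new Random(12345); // fixed seed for repeatability
        const int trials = 100000;
        int ones = 0;
        for (int i = 0; i < trials; i += 1)
        {
            // Same comparison the Sample method makes.
            int sample =
                random.NextDouble() <= ((double)zero) / (zero + one) ? 0 : 1;
            ones += sample;
        }
        // Expect roughly 75% ones; the exact count depends on the seed.
        Console.WriteLine((double)ones / trials);
    }
}
```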

Next time on FAIC: We’ll stop making new implementations of classic probability distributions and look at how to extend the ones we’ve got.

# Exhausting vs exhaustive

We briefly interrupt my ongoing exploration of stochastic programming to point you at a new blog about pragmatic programming language design. My friend and erstwhile Roslyn colleague Anthony is once more living in Chicago and has just started blogging anew, beginning with the most exhaustingly exhaustive list of subtle differences between VB and C# I’ve ever seen. I’ve just started skimming it, and there is a lot to dig into here.

Getting resources like this on the internet — in-depth pieces written by experts describing the stuff not easily found by reading the documentation — is exactly why I started blogging in the first place. I’m looking forward to seeing what he has to say!

# Fixing Random, part 4

Last time on FAIC I showed how we could use a simple interface to cleanly implement concepts like “normal distribution” or “standard uniform distribution” in a natural style using fluent programming. For the next many episodes we’ll leave continuous distributions aside and consider only discrete distributions. That is, we have some probability distribution where there are only a small, fixed number of outcomes. Flip a coin, roll three dice and add them together, that sort of thing.

Since we assume the number of possible outcomes of a discrete distribution is small, we can enumerate them; the set of possible outcomes is, oddly enough, called the support of the distribution.

Some elements might be more likely than others; the ones that happen more often we say have more weight than the ones that happen less often. I want to be able to describe the weight of each member of a distribution easily, and I do not want to get off in the weeds of representing weights as doubles because double arithmetic has many pitfalls as we well know. For the purposes of this series I’m going to assume that we can weight the members of a distribution by an integer. (If I get around to it I might discuss double-weighted distributions later on.)
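The pitfalls being avoided here are the standard floating-point ones; a two-line illustration (a textbook example, not code from this series) shows why integer weights are the safer currency:

```csharp
using System;

static class DoublePitfall
{
    static void Main()
    {
        // Three weights of 0.1 do not sum to exactly 0.3 in doubles...
        Console.WriteLine(0.1 + 0.1 + 0.1 == 0.3); // False
        // ...but integer weights 1, 1, 1 sum to exactly 3, always.
        Console.WriteLine(1 + 1 + 1 == 3); // True
    }
}
```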

public interface IDiscreteDistribution<T> : IDistribution<T>
{
IEnumerable<T> Support();
int Weight(T t);
}

I’m going to require that the weight function accept any value of type T. If the value is not in the support, the weight is zero, and if the value is in the support, the weight is a positive integer. I’m going to further assume that the support and weights are small enough that the total weight fits into an integer.

Aside: If you divide the weight function by the total weight (as a double) then you have the probability mass function of the discrete distribution.
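As a concrete instance of that aside, here's a small sketch (my own throwaway helper, not part of the series' code) that turns integer weights into a probability mass function:

```csharp
using System;
using System.Linq;

static class PmfDemo
{
    static void Main()
    {
        // Weights for outcomes 0 and 1, as in a 1:3 Bernoulli.
        int[] weights = { 1, 3 };
        int total = weights.Sum();

        // Dividing each weight by the total weight (as a double)
        // yields the probability mass function.
        double[] pmf = weights.Select(w => (double)w / total).ToArray();

        Console.WriteLine(pmf[0]); // 0.25
        Console.WriteLine(pmf[1]); // 0.75
    }
}
```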

I said way back in part one of this series that System.Random has a “candy-machine interface” problem. Let’s solve that problem right now:

using SCU = StandardContinuousUniform;
public sealed class StandardDiscreteUniform :
IDiscreteDistribution<int>
{
public static StandardDiscreteUniform Distribution(
int min, int max)
{
if (min > max)
throw new ArgumentException();
return new StandardDiscreteUniform(min, max);
}
public int Min { get; }
public int Max { get; }
private StandardDiscreteUniform(int min, int max)
{
this.Min = min;
this.Max = max;
}
public IEnumerable<int> Support() =>
Enumerable.Range(Min, 1 + Max - Min);
public int Sample() =>
(int)((SCU.Distribution.Sample() *
(1.0 + Max - Min)) + Min);
public int Weight(int i) =>
(Min <= i &&
i <= Max) ? 1 : 0;
public override string ToString() =>
$"StandardDiscreteUniform[{Min}, {Max}]";
}

Finally we can now do things like 10d6 (D&D notation for “sum ten six-sided die rolls”) in a natural, fluent style. I would far prefer:

SDU.Distribution(1, 6).Samples().Take(10).Sum()

to

random.Next(1, 7) + random.Next(1, 7) + random.Next(1, 7) + …

or the equivalent loop nonsense that you typically see.
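If you want to try the fluent shape without the series' library, here's a minimal standalone equivalent; Rolls is a hypothetical helper of my own, standing in for Samples():

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class TenD6
{
    // An infinite stream of fair d6 rolls, analogous to Samples().
    static IEnumerable<int> Rolls(Random random)
    {
        while (true)
            yield return random.Next(1, 7); // upper bound exclusive
    }

    static void Main()
    {
        var random = new Random();
        // Same shape as SDU.Distribution(1, 6).Samples().Take(10).Sum():
        int total = Rolls(random).Take(10).Sum();
        Console.WriteLine(total); // somewhere between 10 and 60
    }
}
```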

A common defense of “lower bound inclusive, upper bound exclusive” is that it is natural for programmers in C-family languages to think of an array or list as indexed from 0 (inclusive) to the array length (exclusive). Apparently we want to be able to say things like

myArray[Random.Next(0, myArray.Length)]

and not get confused and have to say myArray.Length-1.  Well, my answer to that (germane) critique is: once again, we’re reasoning at the wrong level. If we want a probability distribution drawn from elements of an array, then don’t mess around with array indexing and choosing the right random number and so on. That is writing code at the level of the mechanisms, not the business domain. Instead, make an object that represents a distribution of elements chosen from a given array and sample from it!

But we are getting ahead of ourselves; we’ll get to that in an upcoming episode.

Previously I made a little “histogram string” helper extension method that is useful for quickly displaying results in a debugger or console; let’s do the same for discrete distributions. Moreover, to make sure that we really are getting the results we expect, I’m going to implement it by sampling rather than just basing it on the support and the corresponding weights:

public static string Histogram<T>(
this IDiscreteDistribution<T> d) =>
d.Samples().DiscreteHistogram();

public static string DiscreteHistogram<T>(
this IEnumerable<T> d)
{
const int sampleCount = 100000;
const int width = 40;
var dict = d.Take(sampleCount)
.GroupBy(x => x)
.ToDictionary(g => g.Key, g => g.Count());
int labelMax = dict.Keys
.Select(x => x.ToString().Length)
.Max();
var sup = dict.Keys.OrderBy(x => x).ToList();
int max = dict.Values.Max();
double scale = max < width ? 1.0 : ((double)width) / max;
return string.Join("\n",
sup.Select(s => $"{ToLabel(s)}|{Bar(s)}"));
string ToLabel(T t) =>
t.ToString().PadLeft(labelMax);
string Bar(T t) =>
new string('*', (int)(dict[t] * scale));
}

I gotta say, I am loving local functions in C# 7.

Let’s take it for a test drive and roll 1d10:

StandardDiscreteUniform.Distribution(1, 10).Histogram()

The histogram appears as we’d expect:

```
 1|***************************************
 2|***************************************
 3|***************************************
 4|**************************************
 5|***************************************
 6|****************************************
 7|**************************************
 8|***************************************
 9|***************************************
10|**************************************
```

Of course, sometimes we will want to see the weights directly:

public static string ShowWeights<T>(
this IDiscreteDistribution<T> d)
{
int labelMax = d.Support()
.Select(x => x.ToString().Length)
.Max();
return string.Join("\n", d.Support().Select(s =>
$"{ToLabel(s)}:{d.Weight(s)}"));
string ToLabel(T t) =>
t.ToString().PadLeft(labelMax);
}

We seem to be doing pretty well here; our implementations of each probability distribution are short and sweet, and each one provides a stone in the foundation for building more complex distributions.

Next time on FAIC: we’ll build more simple discrete distributions.

# Fixing Random, part 3

Last time on FAIC I described a first attempt of how I’d like to fix System.Random:

• Make every method static and threadsafe
• Draw a clear distinction between crypto-strength and pseudo-random methods

There were lots of good comments on ways to improve the API and implementation details further. However, even with those improvements, we’re still really just implementing slightly better versions of `rand()` from the 1970s. We can do better than that.

Let’s start by taking a step back and asking what we really want when we’re calling `NextInt` or `NextDouble` or whatever on a source of randomness. What we’re saying is: we have a source of random `T`s for some type `T`, and the values produced will correspond to some probability distribution. We can express that very clearly in the C# type system:

public interface IDistribution<T>
{
T Sample();
}

For this episode I’m just going to look at continuous probability distributions, which I’ll represent as IDistribution<double>. We’ll look at integer distributions and other possible values for type parameter T in the upcoming episodes.

Now that we have an interface, what’s the simplest possible implementation? We have a library that produces a uniform distribution of doubles over the interval [0.0, 1.0), which is called the standard continuous uniform distribution. Let’s make a little singleton class for that:

using SCU = StandardContinuousUniform;
public sealed class StandardContinuousUniform :
IDistribution<double>
{
public static readonly StandardContinuousUniform Distribution =
new StandardContinuousUniform();
private StandardContinuousUniform() { }
public double Sample() => Pseudorandom.NextDouble();
}

Aside: For the purposes of this series I’m going to use pseudo-random numbers; feel free to sub in crypto-strength random numbers if you like it better.

Exercise: make this a non-singleton that takes a source of randomness as a parameter, and feel all fancy because now you’re doing “dependency injection”.

This implementation might seem trivial — and it is — but upon this simple foundation we can build a powerful abstraction.

Now that we have a type that represents distributions, I’d like to build a bridge between distributions and LINQ. The connection between probability distributions and sequences should be pretty obvious; a distribution is a thing that can produce an infinite sequence of values drawn from that distribution.

I’ll make a static class for my helper methods:

public static class Distribution
{
public static IEnumerable<T> Samples<T>(
this IDistribution<T> d)
{
while (true)
yield return d.Sample();
}
}

What does this give us? We instantly have access to all the representational power of the sequence operation library. Want a sum of twelve random doubles drawn from a uniform distribution? Then say that:

SCU.Distribution.Samples().Take(12).Sum()

Aside: You might be thinking “if there is such a strong connection between distributions and infinite sequences then why not cut out the middleman by making IDistribution extend the IEnumerable interface directly, rather than requiring a Samples() helper method?” That’s an excellent question; the answer will become clear in a few episodes.

For testing and pedagogic purposes I’d like to be able to very quickly visualize a distribution from the console or in the debugger, so here’s another couple extension methods:

public static string Histogram(
this IDistribution<double> d, double low, double high) =>
d.Samples().Histogram(low, high);

public static string Histogram(
this IEnumerable<double> d, double low, double high)
{
const int width = 40;
const int height = 20;
const int sampleCount = 100000;
int[] buckets = new int[width];
foreach(double c in d.Take(sampleCount))
{
int bucket = (int)
(
buckets.Length * (c - low) /
(high - low));
if (0 <= bucket && bucket < buckets.Length)
buckets[bucket] += 1;
}
int max = buckets.Max();
double scale =
max < height ? 1.0 : ((double)height) / max;
return string.Join("",
Enumerable.Range(0, height).Select(
r => string.Join("", buckets.Select(
b => b * scale > (height - r) ? '*' : ' ')) + "\n"))
+ new string('-', width) + "\n";
}

Exercise: Rescue the above princess in a single LINQ expression without getting Jon to do it for you. (I haven’t actually tried this; I’d be interested to know if it is possible. 🙂 )

We can now do this:

SCU.Distribution.Histogram(0, 1)

and get the string:

```
**** ********** *    *   * * *******   *
****************************************
****************************************
****************************************
****************************************
****************************************
****************************************
****************************************
****************************************
****************************************
****************************************
****************************************
****************************************
****************************************
****************************************
****************************************
****************************************
****************************************
****************************************
----------------------------------------
```

Which obviously is the desired probability distribution histogram, so I’m pretty pleased.

Extension methods and fluent programming are nice and all, but what else does this get us? Well, we can very concisely produce derived probability distributions. For example, suppose we want the normal distribution:

using static System.Math;
using SCU = StandardContinuousUniform;

public sealed class Normal : IDistribution<double>
{
public double Mean { get; }
public double Sigma { get; }
public double μ => Mean;
public double σ => Sigma;
public readonly static Normal Standard = Distribution(0, 1);
public static Normal Distribution(
double mean, double sigma) =>
new Normal(mean, sigma);
private Normal(double mean, double sigma)
{
this.Mean = mean;
this.Sigma = sigma;
}
// Box-Muller method
private double StandardSample() =>
Sqrt(-2.0 * Log(SCU.Distribution.Sample())) *
Cos(2.0 * PI * SCU.Distribution.Sample());
public double Sample() => μ + σ * StandardSample();
}

And of course we can graph it:

Normal.Distribution(1.0, 1.5).Histogram(-4, 4)

```
                        ***
                       ******
                      ********
                     **********
                    ***********
                    ************
                   **************
                   **************
                  ****************
                 ******************
                 ******************
                ********************
               **********************
              ************************
              ************************
            ****************************
           ******************************
          *******************************
         *********************************
----------------------------------------
```

If you want a list with a hundred thousand normally distributed values with a particular mean and standard deviation, just ask for it:

Normal.Distribution(1.0, 1.5).Samples().Take(100000).ToList()
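If you want to convince yourself that Box-Muller really delivers the requested mean and standard deviation, a standalone sanity check does the trick; this sketch re-derives the sampler using System.Random directly (with a seed of my choosing) rather than the SCU type, and guards against Log(0), since NextDouble can return zero:

```csharp
using System;
using System.Linq;
using static System.Math;

static class BoxMullerCheck
{
    static void Main()
    {
        double mean = 1.0, sigma = 1.5;
        var random = new Random(12345); // fixed seed for repeatability

        // Box-Muller: two independent uniforms become one standard normal.
        // 1.0 - NextDouble() keeps the argument to Log strictly positive.
        double StandardSample() =>
            Sqrt(-2.0 * Log(1.0 - random.NextDouble())) *
            Cos(2.0 * PI * random.NextDouble());

        var samples = Enumerable.Range(0, 100000)
            .Select(_ => mean + sigma * StandardSample())
            .ToList();

        double sampleMean = samples.Average();
        double sampleSigma = Sqrt(
            samples.Average(x => (x - sampleMean) * (x - sampleMean)));

        // Both should land close to the requested parameters.
        Console.WriteLine(sampleMean);
        Console.WriteLine(sampleSigma);
    }
}
```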

Hopefully you can see how much more useful, readable and powerful this approach is, compared to messing around with System.Random directly. We want to move the level of abstraction of the code away from the mechanistic details of loops and lists and whatnot; the code should be at the level of the business domain: distributions and population samples.

Exercise: Try implementing the Irwin-Hall distribution (hint: it’s already written in this episode!)

Exercise: Try implementing the Gamma distribution using the Ahrens-Dieter algorithm.

Exercise: Try implementing the Poisson distribution as an IDistribution<int>.

In the next few episodes of FAIC: we’ll stop looking at continuous distributions and take a deeper look at the use cases for our new interface when the type argument represents a small set of discrete values, like Booleans, small ints or enums.

# Fixing Random, part 2

Last time on FAIC I complained bitterly about the shortcomings of System.Random. In this episode, I’ll take a stab at the sort of thing I’d like to see in a redesigned random library. This is the “fix the trivial problems” episode, and then we’ll go deeper from there.

The first thing I want is two static methods, one that gives me an actually-random integer from 0 to Int32.MaxValue, and one that gives me a random double from 0.0 (inclusive) to 1.0 (exclusive).  Not pseudo-random, but actually indistinguishable from a true random uniform distribution. Here’s an attempt:

using CRNG = System.Security.Cryptography.RandomNumberGenerator;
public static class BetterRandom
{
private static readonly ThreadLocal<CRNG> crng =
new ThreadLocal<CRNG>(CRNG.Create);
private static readonly ThreadLocal<byte[]> bytes =
new ThreadLocal<byte[]>(() => new byte[sizeof(int)]);
public static int NextInt()
{
crng.Value.GetBytes(bytes.Value);
return BitConverter.ToInt32(bytes.Value, 0) & int.MaxValue;
}
public static double NextDouble()
{
while (true)
{
long x = NextInt() & 0x001FFFFF;
x <<= 31;
x |= (long)NextInt();
double n = x;
const double d = 1L << 52;
double q = n / d;
if (q != 1.0)
return q;
}
}
}

It’s pretty straightforward, but let’s take a quick look. First, all the state is thread-local, so we trade a small amount of per-thread overhead and per-call indirection for thread safety, which is probably a good tradeoff. We’re going to be using the same four-byte buffer over and over again, so I figured it is probably worthwhile to cache it and avoid the hit to collection pressure; however this is just a guess and I would want to verify that with empirical tests.

There are other possible performance problems here; for example, is it worthwhile to generate a few thousand random bytes at once, cache them, and then gradually use them up? Or is the per-call cost of GetBytes sufficiently low that this is not an issue? I’d also want to empirically check that.

I want only positive integers; clearing the top bit by and-ing with 0x7FFFFFFF does that nicely without changing the distribution.

For a random double, we know that doubles have 52 bits of precision, so we generate a random 52 bit integer and make that the numerator of a fraction. We want to guarantee that 1.0 is never a possible output, so we reject that possibility; one in every few billion calls we’ll do some additional calls to `NextInt`. I can live with that.
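To see the bit-fiddling concretely, here's a deterministic walkthrough of the same construction, with the two “random” ints replaced by fixed values of my choosing:

```csharp
using System;

static class DoubleBitsDemo
{
    static void Main()
    {
        // Pretend these came from two calls to NextInt():
        int first = 0x7FFFFFFF;  // first call; only its low 21 bits are kept
        int second = 0x7FFFFFFF; // second call; contributes 31 bits

        // 21 bits + 31 bits = a 52-bit numerator, matching the
        // 52 bits of precision in a double.
        long x = first & 0x001FFFFF;
        x <<= 31;
        x |= (long)second;

        double n = x;
        const double d = 1L << 52;
        double q = n / d;

        // Even the largest possible numerator, 2^52 - 1, gives a value
        // just below 1.0, so the rejection loop almost never fires.
        Console.WriteLine(q < 1.0); // True
    }
}
```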

Exercise: An alternative, and possibly better solution, would be to build the double from bits directly, rather than doing an expensive division. I have not actually written the code or done the math here; as an exercise, does doing so produce a better distribution in any way?

Now that we have a crypto-strength threadsafe random number library, we can build a cheap pseudo-random library out of it:

public static class Pseudorandom
{
private static readonly ThreadLocal<Random> prng =
new ThreadLocal<Random>(() =>
new Random(BetterRandom.NextInt()));
public static int NextInt() => prng.Value.Next();
public static double NextDouble() => prng.Value.NextDouble();
}

We’ve solved two of the main problems with Random: historically it gave highly correlated values when called repeatedly from one thread, and it is not thread safe. We now have a single prng per thread, and it is seeded with a crypto-strength random seed, not the current time.

The problem of the candy-machine interface has gone away because I’ve eliminated that method entirely! We’ll come back to this point in a later episode.

I think this is already a major improvement, in that 99% of the questions about mis-uses of Random on StackOverflow would not exist if this had been the received randomness library in the first place, and I am very happy to learn that at long last, these problems have been fixed in the core implementation.

What about performance? The crypto RNG is a lot slower than the pseudo-RNG, and there is definitely overhead for the thread-local infrastructure. But: who cares? Most programs would not be observably slowed down by these improvements, and for the ones that are slowed down, you go right ahead and make a cheap, thread-unsafe version if you need it for performance.

We should always make good tradeoffs; if the cost of eliminating a whole class of bugs is a tiny performance hit, we should make that tradeoff. The entire premise of managed languages is that we trade a small amount of performance for improved developer productivity. That’s why we have things like a GC, immutable strings, checked array access, and so on; these all trade a small amount of performance for more safety, robustness and productivity. And that’s why we have unsafe pointers; if you need to abandon some safety to get performance, C# should let you do that. But we should default to safety!

A commenter noted last time that recent implementations of System.Random solve this problem using some similar techniques; it’s instructive to compare my four-line solution here to a solution actually engineered to be performant at scale. As always, remember that the code on my blog is for pedagogy and not intended to be an example of industrial-quality code. You’ll notice that for instance they make only the rarely-used parts of the algorithm thread-safe. That seems like a reasonable tradeoff between performance and safety.

Already I think we’re off to a good start, but we can do a lot better still. Like I said last time, these are the trivial problems. There are a lot more problems we could be solving. Two that come to mind:

• The received PRNG is simply not very good in terms of its randomness; there are far better algorithms that produce harder-to-predict outputs with similar performance and similar amount of state.
• A common use for PRNGs is to generate a random-seeming but actually-deterministic sequence for the purposes of randomizing a game. But the received implementation has no way to say “save the current state of the PRNG to disk and restore it later”, or any such thing.

I’m not going to address either of these problems in this series, though both are fascinating problems that deserve posts of their own. I’m going to take a different approach to improving how we deal with randomness; I want to look at the problem at a higher level of abstraction than the low-level details of how the random numbers are generated and what the state of the generator is. To really improve the state of the art of dealing with probability in C#, we’ll need some more powerful types.

Next time on FAIC: I’ll make some useful classes for common continuous probability distributions.