# Fixing Random, part 18

Before that silly diversion I mentioned that we will be needing the empty distribution; today, we’ll implement it. It’s quite straightforward, as you’d expect. [Code for this episode is here.]

```csharp
public sealed class Empty<T> : IDiscreteDistribution<T>
{
  public static readonly Empty<T> Distribution = new Empty<T>();
  private Empty() { }
  public T Sample() =>
    throw new Exception("Cannot sample from empty distribution");
  public IEnumerable<T> Support() =>
    Enumerable.Empty<T>();
  public int Weight(T t) => 0;
}
```
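To see the contract in action, here is a quick sketch (assuming the `IDiscreteDistribution<T>` interface from earlier in this series):

```csharp
// A sketch of how the empty distribution behaves; nothing here is
// new API, just the members defined above.
var empty = Empty<int>.Distribution;
Console.WriteLine(empty.Support().Any());  // False: the support is empty
Console.WriteLine(empty.Weight(5));        // 0: every value has zero weight
// empty.Sample() would throw, since there is nothing to sample.
```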

Easy peasy. Now that we have this, we can fix up our other distributions to use it. The `WeightedInteger` factory becomes:

```csharp
public static IDiscreteDistribution<int> Distribution(
  IEnumerable<int> weights)
{
  List<int> w = weights.ToList();
  if (w.Any(x => x < 0))
    throw new ArgumentException();
  if (!w.Any(x => x > 0))
    return Empty<int>.Distribution;
  […]
```

And the `Bernoulli` factory becomes:

```csharp
public static IDiscreteDistribution<int> Distribution(
  int zero, int one)
{
  if (zero < 0 || one < 0)
    throw new ArgumentException();
  if (zero == 0 && one == 0)
    return Empty<int>.Distribution;
  […]
```

And the `StandardDiscreteUniform` factory becomes:

```csharp
public static IDiscreteDistribution<int> Distribution(
  int min, int max)
{
  if (min > max)
    return Empty<int>.Distribution;
  […]
```

And the `Projected` factory becomes:

```csharp
public static IDiscreteDistribution<R> Distribution(
  IDiscreteDistribution<A> underlying, Func<A, R> projection)
{
  var result = new Projected<A, R>(underlying, projection);
  if (result.weights.Count == 0)
    return Empty<R>.Distribution;
  […]
```

And one more thing needs to change. Our computation in `SelectMany` assumed that none of the total weights are zero. Easily fixed:

```csharp
int lcm = prior.Support()
  .Select(a => likelihood(a).TotalWeight())
  .Where(x => x != 0)
  .LCM();
```
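For instance (a hypothetical worked example, using the `LCM` extension method from earlier in the series): if the total weights were 2, 0 and 3, we discard the zero and take the least common multiple of 2 and 3:

```csharp
// Hypothetical weights: one likelihood is the empty distribution,
// so its total weight is zero and must be skipped.
int lcm = new[] { 2, 0, 3 }
  .Where(x => x != 0)
  .LCM();  // LCM of { 2, 3 } is 6
```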

We also have a division by total weight; don’t we have to worry about dividing by zero? Nope. Remember, the empty distribution’s support is the empty sequence, so when we then say:

```csharp
var w = from a in prior.Support()
        let pb = likelihood(a)
        from b in pb.Support()
        group prior.Weight(a) * pb.Weight(b) *
          lcm / pb.TotalWeight()
        by projection(a, b);
```

If `prior.Support()` is empty then the whole query is empty and so the division is never executed. If `prior.Support()` is not empty but one of the `pb.Support()` sequences is empty then there is no `b` from which to compute a group key. Either way, we never actually divide by a zero total weight, and so there is no division-by-zero error to avoid.

That was relatively painless, but it is probably still very unclear why we'd ever need an empty distribution. It seems like a little bomb hiding in the program, waiting for someone to sample it. Have we just committed another "null reference" design fault? In a few episodes we'll see what benefits justify the costs.

Next time on FAIC: We’ve been having a lot of fun treating distributions as monads that we can use query comprehensions on, but is that really the best possible syntax?

# Fixing Random, bonus episode 1

I just thought of a really cute application of the stochastic workflow technology we’ve been working on; most of the series has already been written but it fits in here, so I’m going to insert this extra bonus episode. We’ll implement the zero value next time.

Code for this bonus episode is here.

You are probably familiar with the famous “Monty Hall” problem, but if not, here it is:

• You’re on a game show hosted by Monty Hall, the handsome Canadian fellow pictured above.
• Before you there are three closed doors; behind a uniformly randomly selected door there is a new car; behind the other two there is nothing.
• You get to choose a door, and you get what is behind it as a prize: either a car, or nothing.
• You randomly choose a door, again by some uniform process.
• Monty — who knows where the car is — now always opens a door that meets two criteria: it does not have the car behind it, and it is not the door you chose.
• To clarify: if you chose the door with the car, Monty chooses one of the remaining two doors by a uniform random choice. If you chose a door without the car, Monty only has one door he can open, and he opens that one.
• Monty gives you the opportunity to switch your choice to the other still-closed door.
• Assuming you wish to maximize your probability of winning the car, should you switch doors or stay with your original choice?

Aside: I’ve tried to be very precise in my description of the game for the purposes of our analysis. In the real game as played on television there were irrelevant details such as: the “prizes” behind the other two doors were goats or other bizarre, undesirable items, and so on. But there were also germane differences between the real game and our model above; for example, in the real game Monty would sometimes offer choices like “do you want to switch your door, or forget about the doors entirely and take the prize that is in this box?” and it is not clear by what process Monty decided to offer those choices. In this simplified version of the game I’ve removed all human agency from Monty; for our purposes, Monty is just a machine that is following an algorithm that involves generating random outcomes.

Exercise 1: If you don’t already know the solution, work it out. The answer is below.

.

.

.

.

.

You are two-to-one more likely to win the car if you switch than if you stay. But don’t take my word for it. Let’s solve the problem with computers, not by thinking!

Plainly the key to the problem is: what is the distribution of Monty's choice? Monty chooses a random door, but he is observed never to pick the door with the car or the door you picked. We can represent that as a two-parameter likelihood function:

```csharp
IDiscreteDistribution<int> doors = SDU.Distribution(1, 3);
IDiscreteDistribution<int> Monty(int car, int you) =>
  from m in doors
  where m != car
  where m != you
  select m;
```

There’s no logical difficulty in adding more parameters to a likelihood function; think of the parameters as a tuple if that makes you feel better.
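To sanity-check the likelihood function, a quick sketch (`ShowWeights` is the helper from earlier episodes) covering the two interesting cases: if you picked the car, Monty has two equally weighted choices; if you did not, he has exactly one:

```csharp
// You picked door 1 and the car is behind door 1:
// Monty can open door 2 or door 3, each with equal weight.
Console.WriteLine(Monty(1, 1).ShowWeights());
// You picked door 2 but the car is behind door 1:
// Monty has no choice; he must open door 3.
Console.WriteLine(Monty(1, 2).ShowWeights());
```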

Now we can answer the question. Here’s the probability distribution of winning if you do not switch:

```csharp
var noSwitch1 =
  from car in doors
  from you in doors
  from monty in Monty(car, you)
  select car == you ? "Win" : "Lose";
Console.WriteLine(noSwitch1.ShowWeights());
```

And the output is:

```
Win:1
Lose:2
```

As predicted by thinking, you are twice as likely to lose if you do not switch. Computers for the win!
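We can check the switching strategy the same way. Switching wins exactly when your original choice was wrong, so a sketch using the same machinery just flips the labels:

```csharp
// Identical to noSwitch1 except that the labels are exchanged:
// if your first pick was the car, switching loses; otherwise it wins.
var switch1 =
  from car in doors
  from you in doors
  from monty in Monty(car, you)
  select car == you ? "Lose" : "Win";
Console.WriteLine(switch1.ShowWeights());
```

Since this is the same distribution as before with the labels exchanged, it gives weight 2 to "Win" and weight 1 to "Lose": switching doubles your chance of winning.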

Exercise 2: Wait a minute, we never even used the value of range variable `monty` in the query. How is it possible that adding a `from` clause to the query changes its outcome when the sampled value is not even used?!?

Exercise 3: OK smart person, if you thought that one was easy, take a look at this one.

We have our likelihood function `Monty()` which is just a query comprehension, and our `noSwitch1` which is also just a query comprehension. We can make the program a little bit shorter by combining them together in the obvious way:

```csharp
var noSwitch2 =
  from car in doors
  from you in doors
  from monty in doors
  where monty != car
  where monty != you
  select car == you ? "Win" : "Lose";
```

And if we print out the weights of that one… uh oh.

```
Win:1
Lose:1
```

I would have thought this program fragment to be logically the same as before, but this gives weights of 1:1 when we know the correct answer is 1:2.

Where did I go wrong?