Fixing Random, part 14

[Code is here.]


Last time on FAIC we achieved two major results in our effort to build better probability tools. First, we demonstrated that the SelectMany implementation which applies a likelihood function to a prior probability is the bind operation of the probability monad. Second, we gave an implementation of a wrapper object that performs that operation. Its action can be summed up as:

  • sample from the prior distribution
  • use the likelihood function to get the conditional distribution
  • sample from the conditional distribution
  • run the projection on the pair of samples to get the result

You probably recall though that I did not implement the Weight function. It’s a little tricky to do so, for two reasons. First, I made the (now somewhat questionable!) decision to make weights integers. If the weights are fractions between 0.0 and 1.0, you can just multiply the weight of the prior sample by the weight of the conditional sample. (And if the weights are logarithms, you can just add them.) It’s trickier with integers. And second, the projection at the end once again introduces the possibility of “collisions”; the projection could produce the same value for different combinations of the samples, and those combinations then have to be weighted as the sum.

That’s all a little abstract, so let’s work an example.

Suppose we have a population of people who have been diagnosed with Frob Syndrome, which seems to be linked with height. We’ll divide the population of Frob Syndrome patients into three categories:

enum Height { Tall, Medium, Short }

and let’s suppose in our study population there are five tall people, two medium-height people, and one short person in every eight:

var prior = new List<Height>() { Tall, Medium, Short }
  .ToWeighted(5, 2, 1);

(To keep the code short on the page, suppose I have using static directives for each enum.)

Now let’s suppose we’ve done a survey of each of the tall, medium and short people to learn the severity of their symptoms:

enum Severity { Severe, Moderate, Mild }

At this point I’m going to make the numbers a bit odd to illustrate the mathematics more clearly. How likely is a member of each group to report symptoms of each severity? Let’s say that 10 out of every 21 tall people report severe symptoms, and the remaining 11 report moderate symptoms. For the medium-height people, 12 out of 17 report moderate symptoms and 5 report mild symptoms. And all the short people report mild symptoms:

var severity = new List<Severity>() { Severe, Moderate, Mild };

IDiscreteDistribution<Severity> likelihood(Height h)
{
  switch(h)
  {
    case Tall: return severity.ToWeighted(10, 11, 0);
    case Medium: return severity.ToWeighted(0, 12, 5);
    default: return severity.ToWeighted(0, 0, 1);
  }
}

And now let’s suppose we have a recommended prescription level:

enum Prescription { DoubleDose, NormalDose, HalfDose }

Taller people or people with more severe symptoms get a higher dose; shorter people or people with mild symptoms get a smaller dose:

Prescription projection(Height h, Severity s)
{
  switch (h)
  {
    case Tall: return s == Severe ? DoubleDose : NormalDose;
    case Medium: return s == Mild ? HalfDose : NormalDose;
    default: return HalfDose;
  }
}

The question now is: what is the probability distribution on prescriptions for this study population?  That is, if we picked a random member of this population, how likely is it that they’d have a double, normal or half dose prescription?

IDiscreteDistribution<Prescription> doses =
  prior.SelectMany(likelihood, projection);

The problem is to work out the weightings of the three possible outcomes.

As I mentioned before, it’s easiest to do this when the weights are fractions because we can then just multiply them and then add them up:

Height        Severity           Prescription
Tall   (5/8)  Severe   (10/21)   DoubleDose (25/84)
Tall   (5/8)  Moderate (11/21)   NormalDose (55/168)
Medium (2/8)  Moderate (12/17)   NormalDose  (3/17)
Medium (2/8)  Mild      (5/17)   HalfDose    (5/68)
Short  (1/8)  Mild      (1/1)    HalfDose    (1/8)

(To save space I’ve elided the zero rows.)
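
Each fraction in the rightmost column is just the product of the two fractions to its left; for example, 5/8 × 10/21 = 50/168 = 25/84.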

So the probability of a member of this population getting a double dose is 25/84, getting a normal dose is 55/168 + 3/17 = 1439/2856, and getting a half dose is 5/68 + 1/8 = 27/136. Verifying that those add up to 1.0 is left as an exercise.

But we’re going to grit our teeth here and do it all in integers! How might we do that?

Well, we know how to eliminate fractions: multiply all the weights in the first column by 8, and all the weights in the second column by 21 * 17, and none of the proportions will change:

Height      Severity         Prescription
Tall   (5)  Severe   (170)   DoubleDose (850)
Tall   (5)  Moderate (187)   NormalDose (935)
Medium (2)  Moderate (252)   NormalDose (504)
Medium (2)  Mild     (105)   HalfDose   (210)
Short  (1)  Mild     (357)   HalfDose   (357)

So the integer weights are: double dose is 850, normal dose is 935 + 504 = 1439, and half dose is 210 + 357 = 567.
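
As a quick sanity check: 850 + 1439 + 567 = 2856, which is exactly 8 × 21 × 17, and 850/2856 reduces right back to 25/84, the double-dose fraction we computed above. The proportions are unchanged.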

Let’s implement it!

First off, oddly enough there is a Sum() extension method but no Product() extension method, so let’s implement that:

public static int Product(this IEnumerable<int> items) =>
  items.Aggregate(1, (a, b) => a * b);

And I also need to know the total weight of a distribution:

public static int TotalWeight<T>(
    this IDiscreteDistribution<T> d) =>
  d.Support().Select(t => d.Weight(t)).Sum();
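
For our example, the prior’s total weight is 8, and the total weights of the three conditional distributions are 21, 17 and 1; their product is 357, which is exactly the factor we used to scale the second column of the table.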

And now we can implement the algorithm I just sketched out:

int product = prior.Support()
  .Select(a => likelihood(a).TotalWeight())
  .Product();
var w = from h in prior.Support()
        let ps = likelihood(h)
        from s in ps.Support()
        group prior.Weight(h) * ps.Weight(s) *
              product / ps.TotalWeight()
        by projection(h, s);
var dict = w.ToDictionary(g => g.Key, g => g.Sum());
var doses = dict.Keys.ToList();
var weights = dict.Values.ToList();
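
That is, each conditional weight ps.Weight(s) gets scaled by product / ps.TotalWeight() so that every row is expressed in the same units; for the Tall rows that factor is 357 / 21 = 17, which takes the weight 10 to 170, just as in the table above.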

And sure enough, if we print those last two out:

DoubleDose, NormalDose, HalfDose
850, 1439, 567

Super, we can now work out the weights in our implementation of SelectMany.

But… wait a minute. Why do we have to?

That is, why do we need a Combined wrapper class for SelectMany at all?

We just worked out the weights of every member of the support, and we did so making no assumptions whatsoever about the prior or the likelihood function. We can delete our Combined wrapper class, and replace our implementation of SelectMany with:

public static IDiscreteDistribution<C> SelectMany<A, B, C>(
  this IDiscreteDistribution<A> prior,
  Func<A, IDiscreteDistribution<B>> likelihood,
  Func<A, B, C> projection)
{
  int product = prior.Support()
    .Select(a => likelihood(a).TotalWeight())
    .Product();
  var w = from a in prior.Support()
          let pb = likelihood(a)
          from b in pb.Support()
          group prior.Weight(a) * pb.Weight(b) *
            product / pb.TotalWeight()
          by projection(a, b);
  var dict = w.ToDictionary(g => g.Key, g => g.Sum());
  return dict.Keys.ToWeighted(dict.Values);
}
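
As a quick test (a sketch, assuming the ShowWeights helper from the previous episode), this should reproduce exactly the weights we computed by hand:

// Should print something like DoubleDose:850, NormalDose:1439, HalfDose:567
Console.WriteLine(
  prior.SelectMany(likelihood, projection).ShowWeights());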


Exercise: Do you see any potential pitfalls in this implementation of computing the new weights? Give it some thought; I’ll give the answer in the next episode.


We do a small amount of math up front, and in exchange, we have computed the exact resulting probability distribution, which we can sample from efficiently. Just as we did with Where and Select in previous episodes.


Aside: Once again, if you trace through all the logic I’ve written so far you will quickly see that it is hugely inefficient in terms of the amount of re-computation it does and garbage it produces. If we were writing this for production code, we’d be a lot more aggressive about finding code paths that do re-computation and eliminating them. The point of this exercise is that our code produces correct, efficient distribution objects out the other end, even if it is a bit wasteful to do so in this particular pedagogic implementation.


Think about the power of this: you can write programs that treat discrete probability distributions over arbitrary types as values, the same way you’d treat integers, strings, sequences, or whatever, as values that obey a particular set of algebraic rules. We can project, condition and combine them together with the same ease that we do today with sequences, and sample from them to boot!

The idea that we can describe a probabilistic workflow, and have as the output of that workflow a new distribution semantically equivalent to the effect of the workflow, but without any long-running sample-and-reject loops due to the conditions, is called inference by the probabilistic programming community.

We’ve seen that we can do inference on arbitrary discrete distributions provided that the supports are small and the weights are small integers; as we’ll see throughout the rest of this series, the problem gets considerably harder as we abandon some of those simplifying assumptions.


Next time on FAIC: I’m going to implement a minor optimization in our weighted integer distribution. After that, we’ll put it all together to show how what we’ve developed so far can be used for Bayesian inference.

 


Fixing Random, part 13

[Code is here.]


Last time on FAIC we discovered the interesting fact that conditional probabilities can be represented as likelihood functions, and that applying a conditional probability to a prior probability looks suspiciously like SelectMany, which is usually the bind operation on the sequence monad. We created a new implementation of SelectMany that creates an object which samples from the prior, calls the likelihood, and then samples from the resulting distribution. Is that the bind operation on the probability distribution monad?


Aside: If you’re completely confused by the preceding paragraph, you might want to read my gentle introduction to monads for OO programmers. Go ahead and read that over if it is not fresh in your mind.


We need the following things to have a monad in C#:

  • We need an “embarrassingly generic” type: some Foo<T> where it can sensibly take on any T whatsoever. IDiscreteDistribution<T> meets that condition.
  • The type represents an “amplification of power” of the underlying type. Indeed it does; it allows us to represent a probability distribution of particular values of that type, which is certainly a new power that we did not have before.
  • We need a way of taking any specific value of any T, and creating an instance of the monadic type that represents that specific value. Singleton.Distribution(t) meets that condition.
  • There is frequently (but not necessarily) an operation that extracts a value of the underlying type from an instance of the monad. Sample() is that operation. Note that sampling a singleton always gives you back the original value.
  • There is a way to “bind” a new function onto an existing instance of the monad. That operation has the signature M<R> SelectMany<A, R>(M<A> m, Func<A, M<R>> f).  We traditionally call it SelectMany in C# because that’s the bind operation on IEnumerable<T>, and it produces a projection on all the elements from a sequence of sequences. As we saw last time, we have this function for probability distributions.
  • Binding the “create a new instance” function to an existing monad must produce an equivalent monad. I think it is pretty clear that if we have an IDiscreteDistribution in hand, call it d, that SelectMany(d, t => Singleton.Distribution(t)) produces an object that has the same distribution that d does. If that’s not clear, play around with the code until it becomes clear to you.
  • Going “the other direction” must also work. That is, if we have a Func<A, IDiscreteDistribution<B>> called f, and a value of A, then SelectMany(Singleton<A>.Distribution(a), f) and f(a) must produce logically the same IDiscreteDistribution<B>. Again, if that’s not clearly true in your mind, step through the code mentally or write some sample code and convince yourself that it is true. (I’ll sketch one way to write such a check right after this list.)
  • Two bind operations “on top of each other” must produce the same logical result as a single bind that is the composition of the two bound functions. That’s maybe a little vague; see Part 7 of my series on monads for details. Suffice to say, we meet this condition as well.
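
Here is a sketch of what checking those identity laws could look like; note that this helper is mine, not part of the series’ code. Two weighted distributions are “logically the same” when they have the same support and the same weight ratios, so we cross-multiply by the total weights rather than comparing weights directly:

public static bool SameDistribution<T>(
    this IDiscreteDistribution<T> d1,
    IDiscreteDistribution<T> d2)
{
  var support1 = d1.Support().ToList();
  var support2 = d2.Support().ToList();
  if (!new HashSet<T>(support1).SetEquals(support2))
    return false;
  int total1 = support1.Sum(t => d1.Weight(t));
  int total2 = support2.Sum(t => d2.Weight(t));
  // Same ratios means d1.Weight(t) / total1 == d2.Weight(t) / total2;
  // we cross-multiply so as to stay in integers.
  return support1.All(t =>
    d1.Weight(t) * total2 == d2.Weight(t) * total1);
}

// For example, the left identity law, with last episode’s objects.
// (This has to wait until Weight works on combined distributions,
// which, as we are about to see, it does not quite yet!)
bool leftIdentity = Singleton<Cold>.Distribution(Cold.Yes)
  .SelectMany(SneezedGivenCold)
  .SameDistribution(SneezedGivenCold(Cold.Yes));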

All our conditions are met; IDiscreteDistribution<T> is a monad. So we should be able to use it in a query comprehension, right?

from c in cold
from s in SneezedGivenCold(c)
select s

Unfortunately this gives an error saying that SelectMany cannot be found; what’s up with that?

The query comprehension syntax actually requires a slight variation on the traditional “bind” operation; it requires that we also allow a projection on the end, and that moreover, the projection take both the original value and the transformed value. That is, C# requires us to implement it like this:

public sealed class Combined<A, B, C> :
  IDiscreteDistribution<C>
{
  private readonly List<C> support;
  private readonly IDiscreteDistribution<A> prior;
  private readonly Func<A, IDiscreteDistribution<B>> likelihood;
  private readonly Func<A, B, C> projection;
  public static IDiscreteDistribution<C> Distribution(
      IDiscreteDistribution<A> prior, 
      Func<A, IDiscreteDistribution<B>> likelihood, 
      Func<A, B, C> projection) =>
    new Combined<A, B, C>(prior, likelihood, projection);
  private Combined(
    IDiscreteDistribution<A> prior, 
    Func<A, IDiscreteDistribution<B>> likelihood, 
    Func<A, B, C> projection)
  {
    this.prior = prior;
    this.likelihood = likelihood;
    this.projection = projection;
    var s = from a in prior.Support()
            from b in this.likelihood(a).Support()
            select projection(a, b);
    this.support = s.Distinct().ToList();
  }
  public IEnumerable<C> Support() =>
     this.support.Select(x => x);
  public int Weight(C c) => NOT YET!
  public C Sample()
  {
    A a = this.prior.Sample();
    B b = this.likelihood(a).Sample();
    return this.projection(a, b);
  }
}

And now we can implement SelectMany as

public static IDiscreteDistribution<C> SelectMany<A, B, C>(
    this IDiscreteDistribution<A> prior,
    Func<A, IDiscreteDistribution<B>> likelihood,
    Func<A, B, C> projection) =>
  Combined<A, B, C>.Distribution(prior, likelihood, projection);

and of course if we want a SelectMany with the traditional monad bind signature, that’s just

public static IDiscreteDistribution<B> SelectMany<A, B>(
    this IDiscreteDistribution<A> prior,
    Func<A, IDiscreteDistribution<B>> likelihood) =>
  SelectMany(prior, likelihood, (a, b) => b);

Now that we have a SelectMany, we can write conditional probabilities in comprehension syntax, as before:

var sneezed = from c in cold
              from s in SneezedGivenCold(c)
              select s;

or, if we like, we can extract a tuple giving us both values:

public static IDiscreteDistribution<(A, B)> Joint<A, B>(
    this IDiscreteDistribution<A> prior,
    Func<A, IDiscreteDistribution<B>> likelihood) =>
  SelectMany(prior, likelihood, (a, b) => (a, b));

var joint = cold.Joint(SneezedGivenCold);

and if we graph that, we see that we get the distribution we worked out by hand from last episode:


Console.WriteLine(sneezed.Histogram());
Console.WriteLine(sneezed.ShowWeights());
Console.WriteLine(joint.Histogram());
Console.WriteLine(joint.ShowWeights());

 No|****************************************
Yes|*****
 No:111
Yes:14

  (No, No)|****************************************
 (No, Yes)|*
 (Yes, No)|
(Yes, Yes)|***
  (No, No):873
 (No, Yes):27
 (Yes, No):15
(Yes, Yes):85

Aside: Of course I am cheating slightly here because I have not yet implemented the weight function on the combined distribution; we’ll get to that next time!


It might seem slightly vexing that C# requires us to implement a variation on the standard bind operation, but in this case it is actually exactly what we want. Why’s that?

Let’s remind ourselves of how we are notating probability distributions. If we have a collection of possible outcomes of type Cold, we notate that distribution as P(Cold); since Cold has two possibilities, this distribution is made up from two probabilities, P(Cold.Yes) and P(Cold.No), which add up to 100%. We represent this in our type system as IDiscreteDistribution<Cold>.

A conditional probability distribution P(Sneezed|Cold) is “given a value from type Cold, what is the associated distribution P(Sneezed)”? In other words, it is Func<Cold, IDiscreteDistribution<Sneezed>>.

What then is P(Cold&Sneezed)?  That is our notation for the joint distribution over all possible pairs. This is made up of four possibilities: P(Cold.No & Sneezed.No), P(Cold.No&Sneezed.Yes), P(Cold.Yes&Sneezed.No), and P(Cold.Yes&Sneezed.Yes), which also add up to 100%.

In our type system, this is IDiscreteDistribution<(Cold, Sneezed)>.

Now, remember the fundamental law of conditional probability is:

P(A) P(B|A) = P(A&B)

That is, the probability of A and B both occurring is the probability of A occurring, multiplied by the probability of B occurring given that A has.

That is, we can pick any values from those types, say:

P(Cold.Yes) P(Sneezed.Yes|Cold.Yes) = P(Cold.Yes&Sneezed.Yes)

That is, the probability of some value of A and some value of B both occurring is the probability of the value of A occurring multiplied by the probability of the value of B given that the value of A has occurred.
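
For instance, with last episode’s numbers: P(Cold.Yes) is 10% and P(Sneezed.Yes|Cold.Yes) is 85%, so P(Cold.Yes&Sneezed.Yes) should be 8.5%; and sure enough, in the joint distribution we just graphed, (Yes, Yes) has weight 85 out of a total weight of 1000.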


Aside: “multiplication” here is assuming that the probabilities are between 0.0 and 1.0, but again, squint a little and you’ll see that it’s all just weights. In the next episode we’ll see how to compute the weights as integers by thinking about how to do the multiplication in fractions.


We’ve implemented P(A) as IDiscreteDistribution<A>, we’ve implemented P(B|A) as Func<A, IDiscreteDistribution<B>>, and P(A&B) as IDiscreteDistribution<(A, B)>.

We have a function Joint<A, B>​ that takes the first two and gives you the third, and if you work out the math, you’ll see that the probabilities of each member of the joint distribution that results are the products of the probabilities given from the prior and the likelihood. Multiplication of a prior probability by a likelihood across all members of a type is implemented by SelectMany. 


Coming up on FAIC: We’ll work out the weights correctly, and that will enable us to build an optimized  SelectMany implementation.

Fixing Random, part 12

[Code is here.]


Last time on FAIC we implemented an efficient “conditioned” probability using the Where operator on distributions; that is, we have some “underlying” distribution, and we ask the question “if a particular condition has to be met, what is the derived distribution that meets that condition?” For discrete distributions we can compute that distribution directly and just return it.

There is another kind of conditional probability though, which is much more rich, complex and counter-intuitive, and that is exploring the relationship between “what is the probability of X?” and “what is the probability of Y given that we know X?”

For example: pick a random person in the world who has a cold. What is the probability that they sneezed in the last 24 hours? Probably something like 85%.

Now pick a random person who does not have a cold. For them, the probability is maybe more like 3%. In months when I do not have a cold, I sneeze maybe one or two days.

So what we’ve got here is a rather more complex probability distribution; in fact we have two entirely different distributions, and which one we use depends on a condition.

Notice how this is clearly related to our recent discussion of conditioned probabilities, but different. With a “Where” clause we are saying make the support of this distribution smaller because some outcomes are impossible based on a condition. What we’re talking about here is choosing between two (or more) distributions depending on a condition.

The standard notation for this kind of probability in mathematics is a bit unfortunate. We would say something like P(sneezed|no cold) = 0.03 to represent “3% chance that I sneezed if I didn’t have a cold” and P(sneezed|cold) = 0.85 to represent “85% chance that I sneezed if I had a cold”. That is, the syntax P(A|B) means “what is the probability of A given that B happened?”

How might we represent this in our system? It seems like IDiscreteDistribution<T> is not rich enough. Let’s just start making some types and see what we can come up with.

“Has sneezed recently” and “has a cold” are Booleans, but I want the types of everything to be very clear in the analysis which follows, so I’m going to make my own custom types:

enum Cold { No, Yes }
enum Sneezed { No, Yes }

I want to be slightly abusive of notation here and say that P(Cold.Yes) and P(Cold.No) are the weights of a probability distribution that I’m going to call by the shorthand P(Cold). Similarly for P(Sneezed); that’s the probability distribution that gives weights to P(Sneezed.Yes) and P(Sneezed.No). Normally we think of P(something) as being a value between 0.0 and 1.0, but if you squint at it, really those values are just weights. It doesn’t matter what convention we use for weights; a bunch of integers that give ratios of probabilities and a bunch of doubles that give fractions have pretty much the same information content.

Plainly what I would very much like is to have IDiscreteDistribution<Cold> be the C# type that represents P(Cold).

But how can we represent our concept of “There’s a 3% chance I sneezed if I do not have a cold, but an 85% chance if I do have a cold?”

That sure sounds like precisely this:

IDiscreteDistribution<Sneezed> SneezedGivenCold(Cold c)
{
  var list = new List<Sneezed>() { Sneezed.No, Sneezed.Yes };
  return c == Cold.No ?
    list.ToWeighted(97, 3) :
    list.ToWeighted(15, 85);
}

That is, if we do not have a cold then the odds are 97 to 3 that we did not sneeze, and if we do have a cold, then the odds are 15 to 85 that we did not sneeze.

I’ve said that I want to represent P(Cold.Yes) and P(Cold.No) by the shorthand P(Cold), and that plainly this in our type system is IDiscreteDistribution<Cold>. Now I want to represent the notion of P(Sneezed) given a value of Cold as P(Sneezed|Cold), which is implemented by the function above. So, what type in our type system is that? Well, suppose we wanted to assign SneezedGivenCold to a variable; what would its type be? Clearly the type of P(Sneezed|Cold) is Func<Cold, IDiscreteDistribution<Sneezed>>!
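
In fact that assignment compiles just fine; the method group converts directly to the delegate type:

Func<Cold, IDiscreteDistribution<Sneezed>> likelihood =
  SneezedGivenCold;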

How interesting! Conditional probabilities are actually functions.

This sort of function has a name; it is called a likelihood function. That is, given some condition, how likely is some outcome?

So that’s interesting, but how is this useful?

Let’s randomly choose a person in the world, where we do not know whether they have a cold or not. What is the probability that they sneezed recently? It depends entirely on the prevalence of colds! If 100% of the world has a cold, then there’s an 85% chance that a randomly chosen person sneezed recently, but if 0% of the world has a cold, then there’s only a 3% chance. And if it is somewhere in between, the probability will be different from either 85% or 3%.

To solve this problem we need to know the probability that the person we’ve chosen has a cold. The probability that some randomly chosen person has a cold is called the prior probability.

What if 10% of the world has a cold? Let’s work it out by multiplying the probabilities:

 Cold      Sneezed       Result
 (prior)   (likelihood)  (joint)
 10% Yes   85% Yes       8.5% have a cold, and sneezed
           15% No        1.5% have a cold, did not sneeze
 90% No     3% Yes       2.7% do not have a cold, and sneezed
           97% No       87.3% do not have a cold, did not sneeze

Sure enough those probabilities in the right column add up to 100%. The probability that a randomly chosen person in the world sneezed recently (given that these numbers that I made up are accurate) is 8.5% + 2.7% = 11.2%.

The rightmost column of the table that I’ve sketched out here is called the joint probability, which we’ll notate as P(Cold&Sneezed).

We can write this table more compactly like this:

             Cold Yes    Cold No   Total
Sneezed Yes     8.5%        2.7%   11.2%
Sneezed No      1.5%       87.3%   88.8%
Total            10%         90%    100%

The rightmost column of this table is called the marginal probability, so-called because of the way the sums end up at the margins of the table.

What if we expressed the marginal probability as integers? The odds that a random person has sneezed are 11.2% to 88.8%; that’s 112 to 888, and dividing out the common factor of 8 gives exactly odds of 14 to 111.


Aside: when I was debugging the code to compute the weights that we will see in a future episode, I got “111” printed out when I was primed to see “112”, having just computed “11.2%” by hand. I almost went on a lengthy bug hunt looking for the non-existing off-by-one error. Fortunately I stopped and double-checked my work, and realized that the 111 represents the 88.8%, not the 11.2%.


How can we do this math given the set of types we’ve created so far? Let’s start with the prior:

var colds = new List<Cold>() { Cold.No, Cold.Yes };
IDiscreteDistribution<Cold> cold = colds.ToWeighted(90, 10);

We’ve got the prior, and we’ve got the likelihood function SneezedGivenCold. We would like to get the marginal probability IDiscreteDistribution<Sneezed>​.

We could implement such a distribution by first sampling from the prior, then calling SneezedGivenCold, and then sampling from the returned distribution. Let’s implement it.


Aside: We are of course assuming that the likelihood function is pure.


public sealed class Combined<A, R> : IDiscreteDistribution<R>
{
  private readonly List<R> support;
  private readonly IDiscreteDistribution<A> prior;
  private readonly Func<A, IDiscreteDistribution<R>> likelihood;
  public static IDiscreteDistribution<R> Distribution(
      IDiscreteDistribution<A> prior,
      Func<A, IDiscreteDistribution<R>> likelihood) =>
    new Combined<A, R>(prior, likelihood);
  private Combined(
    IDiscreteDistribution<A> prior,
    Func<A, IDiscreteDistribution<R>> likelihood)
  {
    this.prior = prior;
    this.likelihood = likelihood;
    var q = from a in prior.Support()
            from b in this.likelihood(a).Support()
            select b;
    this.support = q.Distinct().ToList();
  }
  public IEnumerable<R> Support() =>
    this.support.Select(x => x);
  public R Sample() =>
    this.likelihood(this.prior.Sample()).Sample();
  public int Weight(R r) => WE’LL COME BACK TO THIS ONE
}

We haven’t implemented Weight, but we don’t need it to run a histogram. Let’s try it out:

Combined<Cold, Sneezed>.Distribution(cold, SneezedGivenCold)
  .Histogram()

 No|****************************************
Yes|****

Sure enough, it looks like there is about an 11% chance that a randomly chosen person sneezed, given these distributions.

Now, of course as I have done throughout this series, let’s make a little helper function to make the call sites look a little nicer:

public static IDiscreteDistribution<R> MakeCombined<A, R>(
    this IDiscreteDistribution<A> prior,
    Func<A, IDiscreteDistribution<R>> likelihood) => 
  Combined<A, R>.Distribution(prior, likelihood);

Once again, that should look very familiar! I should change the name of this helper.

If you are still surprised at this point, you have not been paying attention. I’ve already made Select and Where, so the obvious next step is…

public static IDiscreteDistribution<R> SelectMany<A, R>(
    this IDiscreteDistribution<A> prior,
    Func<A, IDiscreteDistribution<R>> likelihood) => 
  Combined<A, R>.Distribution(prior, likelihood);

the bind operation on the probability monad.

And the inelegant call site above is now the much more clear:

cold.SelectMany(SneezedGivenCold).Histogram()


Coming up on FAIC: We’ll verify that the distribution type really is a monad, and make a few tweaks to get it working with query comprehension syntax. Then we’ll figure out how to implement the Weight​ function.

Fixing Random, part 11

[Code is here.]


Last time on FAIC we drew a line in the sand and required that the predicates and projections used by Where and Select on discrete distributions be “pure” functions: they must complete normally, produce no side effects, consume no external state, and never change their behaviour. They must produce the same result when called with the same argument, every time. If we make these restrictions then we can get some big performance wins out of Where and Select. Let’s see how.

The biggest problem we face is that possibly-long-running loop in the Where; basically we are rejection-sampling the distribution, and we know that can take a long time. Is there a way to directly produce a new distribution that can be efficiently sampled?

Of course there is, and it was a little silly of us to miss it. Let’s make a helper method:

public static IDiscreteDistribution<T> ToWeighted<T>(
    this IEnumerable<T> items,
    IEnumerable<int> weights)
{
  var list = items.ToList();
  return WeightedInteger.Distribution(weights)
    .Select(i => list[i]);
}

There’s an additional helper method I’m going to need in a couple of episodes, so let’s just make it now:

public static IDiscreteDistribution<T> ToWeighted<T>(
    this IEnumerable<T> items,
    params int[] weights) =>
  items.ToWeighted((IEnumerable<int>)weights);

And now we can delete our Conditioned class altogether, and replace it with:

public static IDiscreteDistribution<T> Where<T>(
  this IDiscreteDistribution<T> d,
  Func<T, bool> predicate)
{
  var s = d.Support().Where(predicate).ToList();
  return s.ToWeighted(s.Select(t => d.Weight(t)));
}

Recall that the WeightedInteger factory will throw if the support is empty, and return a Singleton or Bernoulli if its size is one or two; we don’t have to worry about that. No more maybe-long-running loop! We do at most two underlying samples per call to Sample now.
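
As a quick sketch of what this buys us, the no-threes die from last episode no longer needs any rejection loop:

var noThrees = SDU.Distribution(1, 6).Where(roll => roll != 3);
// The support {1, 2, 4, 5, 6} and its weights are computed once,
// up front; sampling never rejects anything.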


Exercise: We’re doing probabilistic workflows here; it seems strange that we are either 100% rejecting or 100% accepting in Where. Can you write an optimized implementation of this method?

public static IDiscreteDistribution<T> Where<T>(
  this IDiscreteDistribution<T> d,
  Func<T, IDiscreteDistribution<bool>> predicate)

That is, we accept each T with a probability distribution given by a probabilistic predicate.

This one will be easier to implement once we have the gear we’re going to develop a few episodes from now, but I mention it now to get you thinking about the problem.


That takes care of optimizing Where. What about optimizing Select?

Of course we can do the same trick. We just have to take into account the fact that the projection might cause some members of the underlying support to “merge” their weights:

public static IDiscreteDistribution<R> Select<A, R>(
  this IDiscreteDistribution<A> d,
  Func<A, R> projection)
{
  var dict = d.Support()
              .GroupBy(projection, a => d.Weight(a))
              .ToDictionary(g => g.Key, g => g.Sum());
  var rs = dict.Keys.ToList();
  return Projected<int, R>.Distribution(
    WeightedInteger.Distribution(rs.Select(r => dict[r])),
    i => rs[i]);
}

That is: we compute the new support, and the weights for each element of it. Now we can construct a weighted integer distribution that chooses an offset into the support.
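
To see the merging in action, here is a sketch that projects a fair die onto its value modulo two:

var parity = SDU.Distribution(1, 6).Select(roll => roll % 2);
// {1, 3, 5} all project to 1, and {2, 4, 6} all project to 0, so
// the support is {1, 0} and each outcome gets the merged weight 3.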


Exercise: Why did I not write the far more concise:

  return rs.ToWeighted(rs.Select(r => dict[r]));

?


Let’s think for a minute about whether this really does what we want. Suppose for example we do a projection on a Singleton:

  • We’ll end up with a single-item dictionary with some non-zero weight.
  • The call to the weighted integer factory will return a Singleton<int> that always returns zero.
  • We’ll build a projection on top of that, and the projection factory will detect that the support is of size one, and return a Singleton<T> with the projected value.

Though we’ve gone through a bunch of unnecessary object constructions, we end up with what we want.

Furthermore, suppose we have a projection, and we do a Select on that: we avoid the problem we have in LINQ-to-objects, where we build a projection on top of a projection and end up with multiple objects representing the workflow.


Aside: That last sentence is an oversimplification; in LINQ-to-objects there is an optimization for this scenario; we detect Select-on-Select (and Select-on-Where and so on) and build up a single object that represents the combined projection, but that single object still calls all the delegates. So in that sense there are still many objects in the workflow; there’s just a single object coordinating all the delegates.


In this scenario we spend time up front calling the projection on every member of the support once, so we don’t need to ever do it later.

Again, I’m not trying make the series of factories here the most efficient it could be; we’re creating a lot of garbage. Were I building this sort of system for industrial use, I’d be more aggressive about taking early outs that prevent this sort of extra allocation. What I’m trying to illustrate here is that we can use the rules of probability (and the fact that we have pure predicates and projections) to produce distribution objects that give the correct results.


Aside: Of course, none of this fixes the original problems with weighted integer: that in the original constructor, we do not optimize away “all weights zero except one” or “trailing zeros”. Those improvements are still left as exercises.


Next time on FAIC: we’ve seen that we can use Where​ to filter a distribution to make a conditioned distribution; we’ll look at a more rich and complex way to represent conditional probabilities, and discover a not-so-surprising fact about our distribution interface.

Fixing Random, part 10

[Code is here.]


We’re going to spend the next while in this series on expressing probabilities that have some sort of extra condition associated with them. We’ll start with a simple example: I want to roll a fair six-sided die, but discard the threes. Why? Maybe I really dislike threes. Doesn’t matter; we will want to be able to create new distributions by putting conditions on other distributions, so let’s figure out how to do so.

Now, of course, we could create the desired distribution easily enough; as we know from last time on FAIC, it is

WeightedInteger.Distribution(0, 1, 1, 0, 1, 1, 1)
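
(Recall that the weighted integer distribution produces values from 0 through n−1 given n weights; the zero weights in the first and fourth positions eliminate the impossible roll of zero and the unwanted threes.)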

But suppose we have a distribution already in hand which we wish to apply a condition to, post hoc. I’m not going to labour the point; we can see how to do that easily enough. It’s a variation on our “projected” distribution from a while back:

public sealed class Conditioned<T> :
  IDiscreteDistribution<T>
{
  private readonly List<T> support;
  private readonly IDiscreteDistribution<T> underlying;
  private readonly Func<T, bool> predicate;
  public static IDiscreteDistribution<T> Distribution(
    IDiscreteDistribution<T> underlying,
    Func<T, bool> predicate)
  {
    var d = new Conditioned<T>(underlying, predicate);
    if (d.support.Count == 0)
      throw new ArgumentException();
    if (d.support.Count == 1)
      return Singleton<T>.Distribution(d.support[0]);
    return d;
  }
  private Conditioned(
    IDiscreteDistribution<T> underlying,
    Func<T, bool> predicate)
  {
    this.underlying = underlying;
    this.predicate = predicate;
    this.support = underlying.Support()
      .Where(predicate)
      .ToList();
  }
  public T Sample()
  {
    while (true) // Ouch
    {
      T t = this.underlying.Sample();
      if (this.predicate(t))
          return t;
    }
  }
  public IEnumerable<T> Support() =>
    this.support.Select(x => x);
  public int Weight(T t) =>
    predicate(t) ? underlying.Weight(t) : 0;
}

Given our discussion a few episodes back about projected distributions and the fact that creating them has the same signature as “Select”, I’m sure it will come as no surprise to you that I’m going to make a helper method:

public static IDiscreteDistribution<T> Where<T>(
    this IDiscreteDistribution<T> d,
    Func<T, bool> predicate) =>
  Conditioned<T>.Distribution(d, predicate);

And you know what this means: I get to use comprehension syntax again!

var noThrees = from roll in SDU.Distribution(1, 6)
               where roll != 3
               select roll;
Console.WriteLine(noThrees.Histogram());

And we get as expected, no threes:

1|***************************************
2|***************************************
4|***************************************
5|***************************************
6|***************************************

However, there are some problems here. That possibly-long-running loop is deeply worrisome. In fact, dealing with the existence of that loop will be the major theme of the rest of this series, in one way or another. (This should feel familiar; of course this is just another version of the “rejection sampling” problem from a few episodes ago.)

We’ll talk about that loop problem more in the next episode. For the remainder of this episode I want to examine an assumption I made in the code above; it’s the same assumption that I made when we discussed projections. We just blew right past it, but this assumption introduces what might be characterized as a serious bug.

Consider the following code, which uses only sequences, no distributions:

int filterOut = 3;
Func<int, bool> predicate = x => x != filterOut;
var range = Enumerable.Range(1, 6).Where(predicate);
Console.WriteLine(range.CommaSeparated());
filterOut = 4;
Console.WriteLine(range.CommaSeparated());


Aside: I got sick of typing string.Join(… blah blah blah…) so I made a handful of extension methods to be a little more fluent. See the source code for details.


If you recall that sequences are computed lazily and lambdas are closed over variables, not values, then this output should be expected:

1,2,4,5,6
1,2,3,5,6

What if we do the same thing to distributions?

int filterOut = 3;
Func<int, bool> predicate = x => x != filterOut;
var d = SDU.Distribution(1, 6).Where(predicate);
Console.WriteLine(d.Samples().Take(10).CommaSeparated());
filterOut = 4;
Console.WriteLine(d.Samples().Take(10).CommaSeparated());

As we’d expect, we first get no threes, and then no fours:

1,1,4,6,6,5,4,1,2,6
6,5,6,5,1,5,6,3,2,1

So what’s the problem?

Console.WriteLine(d.Support().CommaSeparated());

1,2,4,5,6

Uh oh. We just produced a 3 in a distribution that does not list 3 in its support!

Why? Because I computed the support in the constructor and cached it, but the support changed when the predicate changed its behaviour, and it is now out-of-date. The object is now lying to us.

Obviously I could fix this problem easily enough: do not compute the support in the constructor; compute it dynamically, on demand. Is that the right thing to do?

If we do, we lose the ability to reject “null” distributions — distributions with no support — and therefore it is possible to get into a situation where sampling hangs because the predicate is never true. (We could re-check the support before every sample, and throw if the support is empty, but that seems expensive.)

Furthermore, as we’ll see in the next few episodes, we can do some pretty good optimizations if we can assume that the predicate does not change.

I’m therefore going to state right now that the predicate passed to Where (and the projection passed to Select) must be “pure” functions when they are intended to operate on probability distributions.

A “pure” function for our purposes has the following characteristics:

  • It must complete normally; in its regular operation it should not hang and it should not throw. It is permissible for a pure function to throw if its preconditions are violated; for example, a pure function that is documented as not taking negative numbers is permitted to throw an argument exception if passed a negative number. But a pure function should not throw or hang when given a normal, expected input.
  • It must not produce any side effect. For example, it must not write to the console or mutate a global variable or anything like that.
  • Its outputs must depend solely on its inputs; a pure method does not produce one result on its first call and a different result on its second call because something in the world changed; the only reason to produce a different result is if the arguments passed in are different.

Pure functions are nice to work with in two particular ways. First, you can reason about their correctness entirely “locally”; you do not have to consider the state of the world at the time the call is made, and you do not have to worry that the state of the world will change depending on how many times the call happens. Second, you can get performance wins by taking advantage of the fact that you know that two calls with the same arguments will produce the same result.

Unfortunately, in C# we have no good way of detecting non-pure methods at compile time and outlawing them as arguments to Where and Select; we will have to rely on the user being smart enough to not shoot themselves in the foot here.


Next time on FAIC: Now that we are requiring purity in Where and Select, what optimizations can we make to the discrete distributions we’ve created so far?