Fixing Random, part 30

Last time on FAIC I posed and solved a problem in Bayesian reasoning involving only discrete distributions, and then proposed a variation on the problem whereby we change the prior distribution to a continuous distribution, while preserving that the likelihood function produced a discrete distribution.

The question is: what is the continuous posterior when we are given an observation of the discrete result?

More specifically, the problem I gave was: suppose we have a prior in the form of a process which produces random values between 0 and 1. We sample from that process and produce a coin that is heads with the given probability. We flip the coin; it comes up heads. What is the posterior distribution of coin probabilities?

I find this question a bit hard to wrap my head around. Here’s one way to think about it: Suppose we stamp the probability of the coin coming up heads onto the coin. We mint and then flip a million of those coins once each. We discard all the coins that came up tails. The question is: what is the distribution of probabilities stamped on the coins that came up heads?

Let’s remind ourselves of Bayes’ Theorem. For prior P(A) and likelihood function P(B|A), the posterior is:

P(A|B) = P(A) P(B|A) / P(B)

Remembering of course that P(A|B) is logically a function that takes a B and returns a distribution of A and similarly for P(B|A).

But so far we’ve only seen examples of Bayes’ Theorem for discrete distributions. Fortunately, it turns out that we can do almost the exact same arithmetic in our weighted non-normalized distributions and get the correct result.


Aside: A formal presentation of the continuous version of Bayes’ Theorem, and a proof that it is correct, would require some calculus and would distract from where I want to go in this episode, so I’m just going to wave my hands here. Rest assured that we could put this on a solid theoretical foundation if we chose to.
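
(For the record, the shape of the hand-wave: for a continuous prior with density f(a) and an observation b, the posterior density f(a|b) is f(a) P(b|a) / P(b), where the normalizing constant P(b) is the integral of f(a) P(b|a) over all a. Take that as an assertion rather than a proof.)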


Let’s think about this in terms of our type system. If we have a prior:

IWeightedDistribution<double> prior = whatever;

and a likelihood:

Func<double, IWeightedDistribution<Result>> likelihood = whatever;

then what we want is a function:

Func<Result, IWeightedDistribution<double>> posterior = ???

Let’s suppose our result is Heads. The question is: what is posterior(Heads).Weight(d) equal to, for any d we care to choose? We just apply Bayes’ Theorem on the weights. That is, this expression should be equal to:

prior.Weight(d) * likelihood(d).Weight(Heads) / ???.Weight(Heads)

We have a problem; we do not have an IWeightedDistribution<Result> to get Weight(Heads) from to divide through. That is: we need to know the overall probability of getting heads if we sample a coin from the mint and flip it.

We could estimate it by repeated computation. We could call

likelihood(prior.Sample()).Sample()

a billion times; the fraction of them that are Heads is the weight of Heads overall.
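
To make that concrete, here is a sketch of the brute-force estimate, using the prior and likelihood delegates from above (and a mere million samples rather than a billion):

int n = 1000000;
int heads = 0;
for (int i = 0; i < n; i += 1)
  if (likelihood(prior.Sample()).Sample() == Heads)
    heads += 1;
// The fraction of flips that came up heads estimates the
// missing denominator’s Weight(Heads).
double estimate = (double)heads / n;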

That sounds expensive though. Let’s give this some more thought.

Whatever the Weight(Heads) is, it is a positive constant, right? And we have already abandoned the requirement that weights have to be normalized so that the area under the curve is exactly 1.0. Positive constants do not affect proportionality. We do not actually need to compute the denominator at all in order to solve a continuous Bayesian inference problem; we just assume that the denominator is a positive constant, and so we can ignore it.

So posterior(Heads) must produce a distribution such that posterior(Heads).Weight(d) is proportional to:

prior.Weight(d) * likelihood(d).Weight(Heads)

But that is just a non-normalized weight function, and we already have the gear to produce a distribution when we are given a non-normalized weight function; we can use our Metropolis class from two episodes ago. It can take a non-normalized weight function, an initial distribution and a proposal distribution, and produce a weighted distribution from it.


Aside: Notice that we don’t even need prior to be a distribution that we can sample from; all we need is its weight function.


That was all very abstract, so let’s look at the example I proposed last time: a mint with poor quality control produces coins with a particular probability of coming up heads; that’s our prior.

Therefore we’ll need a PDF that has zero weight for all values less than 0 or greater than 1. We don’t care whether it is normalized or not.

Remember, this distribution represents the quality of coins that come from the mint, and the value produced by sampling this distribution is the bias of the coin towards heads, where 0 is “double tailed” and 1 is “double headed”.

Let’s choose a beta distribution.


Aside: I’ve implemented the very useful gamma and beta distributions, but I won’t bother to show them in the blog. The code is tedious and explaining the mathematics would get us pretty off-topic. The short version is that the beta distribution is a distribution on numbers between 0 and 1 that can take on a variety of shapes. See the source code repository for the details of the implementation, and Wikipedia for an explanation of the distribution; the variety of shapes it takes on for different values of its two parameters is shown below:

Probability density function for the Beta distribution

The code for the Beta and Gamma distributions and the rest of the code in this episode can be found here.


Let’s take Beta.Distribution(5, 5) as our prior distribution. That is, the “fairness” of the coins produced by our terrible mint is distributed like this:

var prior = Beta.Distribution(5, 5);
Console.WriteLine(prior.Histogram(0, 1));

                  ****                   
                 ******                 
                ********                
               **********               
               **********               
              ************              
              ************              
             **************             
             ***************            
            ****************            
            ****************            
           ******************           
          ********************          
          ********************          
         **********************         
        ***********************         
        ************************        
       **************************       
     ******************************     
----------------------------------------

Most of the coins are pretty fair, but there is still a significant fraction of coins that produce 90% heads or 90% tails.

We need a likelihood function, but that is easy enough. Let’s make an enumerated type for coin flip results called Result with two values, Heads and Tails:
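
enum Result { Heads, Tails }

The likelihood function itself is then: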

IWeightedDistribution<Result> likelihood(double d) => 
  Flip<Result>.Distribution(Heads, Tails, d);

We can use Metropolis to compute the posterior, but what should we use for the initial and proposal distributions? The prior seems like a perfectly reasonable guess as to the initial distribution, and we can use a normal distribution centered on the previous sample for the proposal.

IWeightedDistribution<double> posterior(Result r) =>
  Metropolis<double>.Distribution(
    d => prior.Weight(d) * likelihood(d).Weight(r), // weight
    prior, // initial
    d => Normal.Distribution(d, 1)); // proposal

But you know what, let’s once again make a generic helper function:

public static Func<B, IWeightedDistribution<double>> Posterior<B>(
  this IWeightedDistribution<double> prior,
  Func<double, IWeightedDistribution<B>> likelihood) =>
    b => 
      Metropolis<double>.Distribution(
        d => prior.Weight(d) * likelihood(d).Weight(b),
        prior,
        d => Normal.Distribution(d, 1));

And now we’re all set:

var posterior = prior.Posterior(likelihood);
Console.WriteLine(posterior(Heads).Histogram(0, 1));

                     ****               
                    ******              
                   *******              
                  *********             
                 ***********            
                ************            
                ************            
               *************            
               **************           
               ***************          
              ****************          
             ******************         
             ******************         
            *******************         
            ********************        
           **********************       
          ************************      
         *************************      
       *****************************    
----------------------------------------

It is subtle, but if you compare the prior to the posterior you can see that the posterior is slightly shifted to the right, which is the “more heads” end of the distribution.


Aside: In this specific case we could have computed the posterior exactly. It turns out that the posterior of a beta distribution in this scenario is another beta distribution, just with different shape parameters. But the point is that we don’t need to know any special facts about the prior in order to create a distribution that estimates the posterior.


Does this result make sense?

The prior is that a coin randomly pulled from this mint is biased, but equally likely to be biased towards heads or tails. But if we get a head, that is evidence, however slight, that the coin is not biased towards tails. We don’t know if this coin is close to fair, or very biased towards heads, or very biased towards tails, but we should expect based on this single observation that the first and second possibilities are more probable than the prior, and the third possibility is less probable than the prior. We see that reflected in the histogram.

Or, another way to think about this is: suppose we sampled from the prior many times, we flipped each coin exactly once, and we threw away the ones that came up tails, and then we did a histogram of the fairness of those that came up heads. Unsurprisingly, the coins more likely to turn up tails got thrown away more often than the coins more likely to turn up heads, so the posterior histogram is biased towards heads.
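
If you want to see that in action, here is a sketch of exactly that simulation; it assumes the Samples extension from earlier in the series, and the rest is ordinary LINQ-to-objects:

// Mint a million coins, flip each once, and keep only the ones that
// came up heads; a histogram of "survivors" approximates the posterior.
var survivors =
  from p in prior.Samples().Take(1000000)
  where likelihood(p).Sample() == Heads
  select p;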


Exercise: Suppose we changed the scenario up so that we were computing the posterior probability of two flips. If we get two heads, obviously the posterior will be even more on the “heads” side. How do you think the posterior looks if we observe one head, one tail? Is it different than the prior, and if so, how? Try implementing it and see if your conjecture was correct.


What have we learned? That given a continuous prior, a discrete likelihood, and an observation, we can simulate sampling from the continuous posterior.

Why is this useful? The coin-flipping example is not particularly interesting, but of course coin-flipping is a stand-in for more interesting examples. Perhaps we have a prior on how strongly a typical member of a population believes in the dangers of smoking; if we observe one of them smoking, what can we deduce is the posterior strength of their beliefs? Or perhaps we have a prior on whether a web site user will take an action on the site, like buying a product or changing their password, or whatever; if we observe them taking that action, what posterior deductions can we make about their beliefs?

As we’ve seen a number of times in this series, it’s hard for humans to make correct deductions about posterior probabilities even when given good priors; if we can make tools available to software developers that help us make accurate deductions, that has a potentially powerful effect on everything from the design of developer tools to the design of public policy.


Next time on FAIC: Thus far in this series we’ve been very interested in the question “given a description of a distribution, how can I sample from it efficiently?” But there are other things we want to do with distributions; for example, suppose we have a random process that produces outcomes, and each outcome is associated with a gain or loss of some amount of money. What is the expected amount we will gain or lose in the long run? We’ll look at some of these “expected value” problems, and see how the gear we’ve developed so far can help compute the answers.

 


Fixing Random, part 29

[It is] a spectacular vindication of the principle that each individual coin spun individually is as likely to come down heads as tails and therefore should cause no surprise that each individual time it does.

Thus Guildenstern (or is it Rosencrantz?) in Tom Stoppard’s re-imagining of Hamlet “Rosencrantz and Guildenstern Are Dead”. If you haven’t seen it already, go watch the movie, or at least the first scene.

It helps to know the plot of Hamlet: Before the play begins, prince Hamlet’s uncle Claudius has murdered Hamlet’s father, become king, and married Hamlet’s mother Gertrude. Hamlet is understandably perturbed by this sequence of events. Claudius and Gertrude have invited Hamlet’s college friends Rosencrantz and Guildenstern to cheer him up, but SPOILER ALERT in a series of misadventures they end up dead, soon to be followed by everyone else; the plot, such as it is, of R&G Are Dead is this story told from their confused, uninformed, and amnesiac perspective.

.

.

.

Welcome back. As you now know, if you did not before, Rosencrantz and Guildenstern are flipping (or, as they say, spinning) a series of coins. In the film sometimes they are flipping a different coin each time, which Rosencrantz (or is it Guildenstern?) wins, and sometimes they’re flipping the same coin several times in a row. Either way, the coin flipper (whatever his name is) hits an enormously unlikely sequence of heads.

Let’s make up a variation on this scenario to see what we can learn about posterior distributions. (Code for this episode can be found here.)

Suppose we have two kinds of coins: fair, and double-headed. A fair coin has a 50-50 probability of coming up heads or tails; a double-headed coin always comes up heads.

enum Coin { Fair, DoubleHeaded }

Now let’s suppose we have a bag of coins. 999 are fair, and one is double-headed. Rosencrantz is going to pull a coin from the bag at random. Our prior is that the selection is random over the thousand coins in the bag, so:

var coins = new[] { Fair, DoubleHeaded };
var prior = coins.ToWeighted(999, 1);

Once he has his coin, Rosencrantz flips it, and of course, observes that it is heads.

The question I want to explore is: what is the posterior probability that Rosencrantz just flipped the double-headed coin?

That is, Rosencrantz draws a coin, flips it, it comes up heads, and now the question to you is: what would be fair odds for a bet on the question “is it the double-headed coin or a regular coin that was just flipped?” Prior to the observation the fair odds are 1 to 999, but after we observe the flip, those odds change because we have more information.

Reasoning backwards like this is a little tricky. (Of course we call it the posterior distribution because we are reasoning backwards, in case that was unclear.) Again, let’s break it down:

  • Prior to flipping the coin, the probability of getting the double-headed coin is the aptly-named prior probability: 1 in 1000, which is odds of 1 to 999.
  • If Rosencrantz flipped the coin and got tails, that would be solid evidence that he did not get the double-headed coin; the posterior probability of having the double-headed coin is zero. That is, the posterior probability is smaller: we’ve gone from 0.1% to 0%, a small decrease.
  • If Rosencrantz got heads, that is very weak evidence that he got a double-headed coin, but it is not zero evidence. There should be some small increase over the prior in the posterior.

Of course we don’t need to do the math by hand; we know from earlier in this series that we can exactly solve for posterior probabilities by using a combination of Where and SelectMany query operators:

var results = new[] { Heads, Tails };
IDiscreteDistribution<Result> likelihood(Coin c) =>
  results.ToWeighted(1, c == Fair ? 1 : 0);

var c1 = from c in prior
         from r in likelihood(c)
         where r == Heads
         select c;
Console.WriteLine(c1.ShowWeights());

        Fair:999
DoubleHeaded:2

If Rosencrantz flipped heads then the probability that he drew the double-headed coin is slightly increased over the prior. The prior probability of having the double-headed coin is 1 in 1000; the posterior probability is 2 in 1001. That’s almost twice as likely, but still pretty unlikely.
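
Checking that against Bayes’ Theorem by hand: P(DoubleHeaded|Heads) = P(DoubleHeaded) P(Heads|DoubleHeaded) / P(Heads) = (1/1000 × 1) / (1/1000 × 1 + 999/1000 × 1/2) = 2/1001, which is exactly the ratio the query computed.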

Again, let’s make sure that we understand what this means. It means that if we did millions of repetitions of this experiment where Rosencrantz picks a coin and flips it once, and if we discarded all the repetitions in which Rosencrantz flipped tails, then in the remaining repetitions on average the ratio of double-headed coins to fair coins chosen would be about 2 to 999.

What if we increase the number of flips of that same coin, and he got two heads?

var c2 = from c in prior
         from r1 in likelihood(c)
         where r1 == Heads
         from r2 in likelihood(c)
         where r2 == Heads
         select c;
Console.WriteLine(c2.ShowWeights());

        Fair:999
DoubleHeaded:4

The posterior probability of having the double-headed coin goes up again, this time to 4 in 1003.

What if we increased it to ten flips (again, of the same coin), and they’re all heads? By the time we get to ten heads, there are two possibilities: the exactly one-in-a-thousand chance that Rosencrantz got the double-headed coin, or the approximately one-in-a-thousand chance that he got a fair coin and flipped ten heads in a row; the ratio of those two probabilities is the posterior odds. At this point it becomes pretty much even money whether he has the double-headed coin or a fair coin. (As you might expect, the exact odds are 999 “fair” to 1024 “double-headed”, which is pretty darn close to 50-50.)
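
If you want to check that claim, here is a sketch of how the query generalizes to n observed heads of the same coin; PosteriorAfterHeads is my name for it, not code from the episode:

IDiscreteDistribution<Coin> PosteriorAfterHeads(int n)
{
  // Condition on one observed head at a time; each pass through the
  // loop plays the role of one "where r == Heads" clause above.
  IDiscreteDistribution<Coin> d = prior;
  for (int i = 0; i < n; i += 1)
    d = from c in d
        from r in likelihood(c)
        where r == Heads
        select c;
  return d;
}

Console.WriteLine(PosteriorAfterHeads(10).ShowWeights()) should then print weights of 999 for Fair and 1024 for DoubleHeaded.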

If we observe eleven or more heads, it becomes much more likely that Rosencrantz has drawn the one-in-a-thousand double-headed coin than that he is lucky enough to observe that sequence of heads from a fair coin, and of course by the time we get to 156 heads, it is absurdly more likely to be the double-headed coin than a fair coin.

This is a cute problem and all, but it seems a little out of place at this point in our series; we stopped talking about discrete distributions that we could solve exactly for posteriors a while back. I’m discussing it now because all this is in the way of introducing a variation on the above problem:

  • We have a mint with really poor quality control; it produces coins that are out of balance, so that some of them favour heads and some of them favour tails.
  • To model this, let’s say that the mint samples from some continuous distribution that produces values strictly between 0.0 and 1.0. Let’s say that we know this distribution.
  • Each coin that is minted is modeled as a Bernoulli distribution of heads and tails, with the given probability of heads.
  • Rosencrantz flips a random coin that was obtained from that mint; this is our prior.
  • We observe that Rosencrantz gets heads.
  • What is the posterior distribution on the fairness of that coin?

When I first started thinking about this problem I found it quite tricky to get it clear in my head, but this continuous version is not fundamentally different than the discrete version I gave above. In both cases we have a prior distribution of coin fairness that we sample from to get a coin. In both cases we then obtain an observation of “heads”. And in both cases, we can use that observation to deduce something about the likely provenance of the coin. Whether we’re in the continuous case or the discrete case, flipping heads is evidence that the coin is more likely to be on the “heads heavy” side of the prior distribution. But how much more likely?

In the all-discrete case we could compute that posterior exactly. The question before us now is: given the continuous prior, the discrete likelihood function and an observation of a discrete value, can we compute the weight function of the continuous posterior? That is, can we answer the question “what is the probable fairness level of this coin given that we’ve observed one head?” And of course, given all that information, can we also produce a distribution object that we can sample from?

Give it some thought.


Next time on FAIC: We will attempt to do so using the tools we’ve developed thus far.

Fixing Random, part 28

Last time on FAIC we implemented a technique for sampling from a non-normalized target PDF:

  • Find an everywhere-larger helper PDF that we can sample from.
  • Sample from it.
  • Accept or reject the sample via a coin flip whose probability of acceptance is the ratio of the sample’s weight in the target distribution to its (scaled) weight in the helper distribution.

This technique totally works, but it has a few drawbacks:

  • It’s not at all clear how to find a good helper PDF without humans intervening.
  • Even with a pretty tight-fitting helper, you potentially still end up rejecting a lot of samples.

The first problem is the big one. It would be great if there was a technique that didn’t require quite so much intervention from experts.

Today I’ll describe just such a technique; it is called the “Metropolis algorithm” after one of its inventors, Nicholas Metropolis. [Code for this episode is here.]

[Photo: Nicholas Metropolis]


Aside: The co-authors of the 1953 paper that first presented this algorithm are unfairly not credited in the name of the algorithm, but “the Metropolis algorithm” sounds snappy and cool, whereas “the Metropolis-Rosenbluth-Rosenbluth-Teller-and-Teller algorithm” is a bit of a mouthful.

In modern code we usually use a variation called the “Metropolis-Hastings algorithm”, but I’m not going to discuss it in this series.


Without further ado, here’s a sketch of the Metropolis algorithm:

  • We have a non-normalized PDF f(t) that takes a T and returns a double. We want to produce arbitrarily many samples of T that conform to this distribution.
  • The first time that Sample() is called we produce the initial sample.
  • Now, obviously, if we could produce a random initial sample that matched the distribution described by f, we’d already have solved the problem! All that we require is a random initial sample that is somewhere in the support of f.
  • We’ll call the distribution of the first sample the initial distribution.
  • The initial sample becomes the current sample, which is returned.
  • Every subsequent time that Sample() is called, we use the current sample to generate a candidate for the new sample; more specifically, we produce a proposal distribution of possible next samples.
  • We sample the proposal distribution to get a proposed sample.
  • We compute the weights f(current) and f(proposed).
  • We divide the proposed weight by the current weight.
  • If the quotient is greater than or equal to 1.0 then the proposed sample automatically becomes the new current sample.
  • Otherwise, we flip a Bernoulli coin with that probability. Heads, we accept the proposed sample and it becomes the current sample. Tails, we keep the current sample.

Aside: Do you see why we don’t care if the PDFs are non-normalized in this algorithm? All we care about is the ratio of the weights when determining the Bernoulli distribution. Non-normalized PDFs have the same ratios as normalized PDFs!


Aside: There are a few technical restrictions on the proposal distribution that I’m not going to discuss; for details, see the Wikipedia page on ergodicity.


That might have been a bit abstract. Let’s say in type system terms what we need. First off, we have the target distribution we’re attempting to sample from:

Func<T, double> target;

Next, we have an initial distribution to get the first sample:

IDistribution<T> initial;

And finally, we have a function that produces proposal distributions:

Func<T, IDistribution<T>> proposals;

But… wait a minute. This looks really familiar.

  • We’re producing a sequence of samples.
  • We randomly choose an initial value.
  • We randomly choose each subsequent value based on a probability distribution determined only by the previous value.

This is a Markov process!

Apparently we can use a Markov process to simulate sampling from an arbitrary PDF, assuming that the restrictions on the initial and proposal distributions are met.

Well this should be easy to implement, given that we’ve already implemented Markov processes, so let’s do it. This time, the fun will all be in the factory:

sealed class Metropolis<T> : IWeightedDistribution<T>
{
  private readonly IEnumerator<T> enumerator;
  private readonly Func<T, double> target;
  public static Metropolis<T> Distribution(
    Func<T, double> target, 
    IDistribution<T> initial, 
    Func<T, IDistribution<T>> proposal)
  {
    var markov = Markov<T>.Distribution(initial, transition);
    return new Metropolis<T>(
      target, markov.Sample().GetEnumerator());
    IDistribution<T> transition(T d)
    {
      T candidate = proposal(d).Sample();
      // Accept the candidate with probability given by the ratio of
      // the weights; a ratio of 1.0 or greater means the candidate is
      // always accepted, assuming Flip treats such values as
      // always-heads, which implements the automatic-acceptance rule
      // described above.
      return Flip<T>.Distribution(
        candidate, d, target(candidate) / target(d));
    }
  }
  private Metropolis(
    Func<T, double> target,
    IEnumerator<T> enumerator)
  {
    this.enumerator = enumerator;
    this.target = target;
  }
  public T Sample()
  {
    this.enumerator.MoveNext();
    return this.enumerator.Current;
  }
  public double Weight(T t) => this.target(t);
}

Let’s try it out. We’ll need three things. For our target PDF, let’s take the mixed PDF we looked at last time:

double Mixture(double x) =>
  Exp(-x * x) + Exp((1.0 - x) * (x - 1.0) * 10.0);

What should our initial distribution be? All doubles are in the support of this distribution, so we can choose any, but we’d like it if the double was likely to be high-weight. Let’s suppose that we don’t know the exact shape of this distribution, but we do know that it has a lump near the origin, so we’ll just use a standard normal distribution as our initial distribution.

What should our proposal distribution be? It needs to be based on the current value, not biased to consistently get larger or smaller, and give us the possibility of any value in the support of our target distribution being produced. How about a normal distribution centered on the current value? That seems reasonable.

You know what, I’m going to make a helper method just for this case, since it seems likely to be pretty common:

public static Metropolis<double> NormalMetropolis(
    this Func<double, double> weight) =>
  Metropolis<double>.Distribution(
    weight,
    Normal.Standard,
    d => Normal.Distribution(d, 1.0));

Finally let’s run it and see if we get a reasonable histogram out:

NormalMetropolis(Mixture).Histogram(-2, 2)

                            ***         
                            ***         
                            ***         
                            ***         
                           *****        
                  ****     *****        
                 *******  ******        
                ******** *******        
               *****************        
              *******************       
             ********************       
            *********************       
            *********************       
           ***********************      
          ************************      
         *************************      
       ****************************     
      ******************************    
   **********************************   
----------------------------------------

Nice!

This seems to work really well, but unfortunately there are a few problems to deal with. Some of them are:

  • We need an initial distribution that produces a value in the support. However, this is usually not too hard. And suppose by some accident we do end up getting a zero-weight element as the first sample; what’s going to happen? Probably the proposal distribution will pick a non-zero-weight element, and it will then automatically win, so hey, maybe the first sample is bad. No big deal.
  • Let’s think a bit more about that issue though. Even if the first sample has positive weight, we might have a sequence of unusually low-weight samples at the beginning. If we start in a low-weight region there might be some time spent randomly walking around before we stumble into a high-weight region. Of course once we are in a high-weight region we are likely to stay there, which is good…
  • … unless the distribution is “multimodal”. That is, there are multiple high-weight regions. Our example distribution in this episode is multimodal, but the two “lumps” are really close together and are “bridged” by a reasonably high-probability region; imagine a distribution where the “lumps” were far apart with a super-low-probability region between them. You can end up “stuck” in one of the lumps for a long time, and this can lead to situations where long sequences of samples have way more correlation between them than we would expect if we were sampling from the “real” distribution.
  • The previous point is particularly emphasized by the fact that in a Metropolis simulation of sampling from a distribution, we frequently get repeated values in samples; that sort of correlation would typically not be observed at all were we drawing from the “real” distribution rather than a Markovian simulation of it.

All of these problems can be dealt with by a variety of techniques; you’ve probably already thought of some of them:

  • We could “burn” the first few dozen / hundred / whatever of the samples so that we get past any initial low-weight region.
  • If samples are too correlated with each other, we could use a proposal distribution with a higher variance. (For example, we could use a normal distribution with a larger standard deviation.) That would increase the number of rejected samples, which increases repeats, but it would decrease the number of samples that were all close to each other but not rejections.
  • Or, we could sample from a Bernoulli at a probability of our choosing on each sample, and if it’s heads, we skip enumerating the value but still keep it as our new state internally; that would decrease the amount of similarity between subsequent samples, while increasing the average computational cost of each sample.
  • Or, we could generate samples in batches of a hundred or a thousand or whatever, randomly shuffle the batch, and then enumerate the shuffled batch, to decrease the amount of similarity between subsequent samples.

I’m not going to do any of these things, but I’m sure you can see how they would be done.
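
Just to illustrate how cheap the first idea is, here is a sketch; it assumes the Samples extension we’ve been using all series, and Skip is ordinary LINQ:

// Discard the first hundred samples to get past any low-weight
// starting region; enumerate "burned" instead of sampling directly.
var burned = NormalMetropolis(Mixture).Samples().Skip(100);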

We’ve made really good progress here; we have an efficient general mechanism for sampling from a well-behaved PDF, so long as we have some idea of the initial distribution and a good guess at a proposal distribution. We also have some ideas for “tuning parameters” — that is, knobs that experts can turn to make tradeoffs between performance, correlation, and so on, and the knobs are simple enough that you could imagine writing a program to try different settings and figure out what works best.


Next time on FAIC: We’re going to seemingly take a step backwards and talk about a problem in discrete Bayesian reasoning: given a bag of coins, some unfair, and some observations of the results of flipping a coin pulled from the bag, what can we deduce about the coin we pulled?

 

Fixing Random, part 27

Last time on FAIC we went through a loose, hand-wavy definition of what it means to have a “weighted” continuous distribution: our weights are now doubles, and given by a Probability Distribution Function; the probability of a sample coming from any particular range is the area under the curve of that range, divided by the total area under the function. (Which need not be 1.0.)

A central question of the rest of this series will be this problem: suppose we have a delegate that implements a non-normalized PDF; can we implement Sample() such that the samples conform to the distribution?

The short answer is: in general, no.

A delegate from double to double that is defined over the entire range of doubles has well over a billion billion possible inputs and outputs. Consider for example the function that has a high-probability lump in the neighbourhood of -12345.678 and another one at 271828.18 and is zero everywhere else; if you really know nothing about the function, how would you know to look there?

We need to know something about the PDF in order to implement Sample().

The long answer is: if we can make a few assumptions then sometimes we can do a pretty good job.


Aside: As I’ve mentioned before in this series: if we know the quantile function associated with the PDF then we can very easily sample from the distribution. We just sample from a standard continuous uniform distribution, call the quantile function with the sampled value, and the value returned is our sample. Let’s assume that we do not know the quantile function of our PDF.
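
Had we known it, sampling would be a one-liner. Here’s a sketch, where quantile is a hypothetical delegate for the known quantile function, and assuming the StandardContinuousUniform singleton from early in the series:

// Inverse transform sampling: push a uniform sample through the
// quantile function.
double SampleViaQuantile(Func<double, double> quantile) =>
  quantile(StandardContinuousUniform.Distribution.Sample());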


Let’s look at an example. Suppose I have this weight function:

double Mixture(double x) =>
  Exp(-x * x) + Exp((1.0 - x) * (x - 1.0) * 10.0);

If we graph that out, it looks like this:

[Graph: the Mixture weight function, with two lumps near 0 and 1]

I called it “mixture” because it is the sum of two (non-normalized) normal distributions. This is a valid non-normalized PDF: it’s a pure function from all doubles to a non-negative double and it has finite area under the curve.

How can we implement a Sample() method such that the histogram of the resulting samples matches that curve?


Exercise: Recall that I used a special technique to implement sampling from a normal distribution. You can use a variation on that technique to efficiently sample from a mixture of normal distributions; can you see how to do so? See if you can implement it.

However, the point of this exercise is: what if we did not know that there was a trick to sampling from this distribution? Can we sample from it anyways?


The technique I’m going to describe today is, once more, rejection sampling. The idea is straightforward; to make this technique work we need to find a weighted “helper” distribution that has this property:

The weight function of the helper distribution is always greater than or equal to the weight function we are trying to sample from.

Now, remember, the weight function need not be “scaled” so that the area under the curve is 1.0. This means that we can multiply any weight function by a positive constant, and the distribution associated with the multiplied weight function is the same. That means that we can weaken our requirement:

There exists a constant factor such that the weight function of the helper distribution multiplied by the factor is always greater than or equal to the weight function we are trying to sample from.

This will probably be more clear with an example.

Let’s take the standard normal distribution as our helper. We already know how to sample from it, and we know its weight function. But it just so happens that there exists a constant — seven — such that the helper’s weight function multiplied by that constant dominates our desired distribution:

[Graph: the Mixture weight function (red) underneath seven times the standard normal PDF (black)]

Again, we’re going to throw some darts and hope they land below the red curve.

  • The black curve is the weight function of the helper — the standard normal distribution — multiplied by seven.
  • We know how to sample from that distribution.
  • Doing so gives us an x coordinate for our dart, distributed according to the height of the black curve; the chosen coordinate is more likely to be in a higher region of any particular width than a lower region of the same width.
  • We’ll then pick a random y coordinate between the x axis and the black curve.
  • Now we have a point that is definitely below the black line, and might be below the red line.
  • If it is not below the red line, reject the sample and try again.
  • If it is below the red line, the x coordinate is the sample.

Let’s implement it!

Before we do, once again I’m going to implement a Bernoulli “flip” operation, this time as the class:

sealed class Flip<T> : IWeightedDistribution<T>
{
  public static IWeightedDistribution<T> Distribution(
    T heads, T tails, double p)

You know how this goes; I will skip writing out all that boilerplate code. We take values for “heads” and “tails”, and the probability (from 0 to 1) of getting heads. See the github repository for the source code if you care.

I’m also going to implement this obvious helper:

static IWeightedDistribution<bool> BooleanBernoulli(double p) =>
  Flip<bool>.Distribution(true, false, p);

All right. How are we going to implement rejection sampling? I always begin by reasoning about what we want, and what we have. By assumption we have a target weight function, a helper distribution whose weight function “dominates” the given function when multiplied, and the multiplication factor. The code practically writes itself:

public class Rejection<T> : IWeightedDistribution<T>
{
  public static IWeightedDistribution<T> Distribution(
      Func<T, double> weight,
      IWeightedDistribution<T> helper,
      double factor = 1.0) =>
    new Rejection<T>(weight, helper, factor);

I’ll skip the rest of the boilerplate. The weight function is just:

public double Weight(T t) => weight(t);

The interesting step is, as usual, in the sampling.

Rather than choosing a random number for the y coordinate directly, instead we’ll just decide whether or not to accept or reject the sample based on a Bernoulli flip where the likelihood of success is the fraction of the weight consumed by the target weight function; if it is not clear to you why that works, give it some thought.

public T Sample()
{
  while(true)
  {
    T t = this.helper.Sample();
    double hw = this.helper.Weight(t) * this.factor;
    double w = this.weight(t);
    if (BooleanBernoulli(w / hw).Sample())
      return t; 
  }
}

All right, let’s take it for a spin:

var r = Rejection<double>.Distribution(
  Mixture, Normal.Standard, 7.0);
Console.WriteLine(r.Histogram(-2.0, 2.0));

And sure enough, the histogram looks exactly as we would wish:

                             **         
                            ***         
                            ***         
                           *****        
                           *****        
                 ******   ******        
                ****************        
               *****************        
              *******************       
             ********************       
            *********************       
            *********************       
           ***********************      
          ************************      
         *************************      
        ***************************     
      *****************************     
    *********************************   
----------------------------------------

How efficient was rejection sampling in this case? Actually, pretty good. As you can see from the graph, the total area under the black curve is about three times the total area under the red curve, so on average we end up rejecting two samples for every one we accept. Not great, but certainly not terrible.

Could we improve that? Sure. You’ll notice that the standard normal distribution times seven is not a great fit. We could shift the mean 0.5 to the right, and if we do that then we can reduce the multiplier to 4:

[Graph: the Mixture weight function (red) underneath four times a normal PDF with its mean shifted 0.5 to the right (black)]

That is a far better fit, and if we sampled from this distribution instead, we’d reject a relatively small fraction of all the samples.


Exercise: Try implementing it that way and see if you get the same histogram.


Once again we’ve managed to implement Sample() by rejection sampling; once again, what are the pros and cons of this technique?

  • Pro: it’s conceptually very straightforward. We’re just throwing darts and rejecting the darts that do not fall in the desired area. The darts that do fall in the desired area have the desired property: that samples from a given area arrive in proportion to that area’s size.
  • Con: It is by no means obvious how to find a tight-fitting helper distribution that we can sample from such that the helper weight function is always bigger than the target weight function. What distribution should we use? How do we find the constant multiplication factor?

The rejection sampling method works best when we have the target weight function ahead of time so that we can graph it out and an expert can make a good decision about what the helper distribution should be. It works poorly when the weight function arrives at runtime, which, unfortunately, is often the case.


Next time on FAIC: We’ll look at a completely different technique for sampling from an arbitrary PDF that requires less “expert choice” of a helper distribution.

 

Porting old posts, part 3

I’m continuing in my efforts to move and update all my old content from my MSDN blog to ericlippert.com. Today, posts from early October of 2003.


In, out, in-out, make up your mind already

The late-binding code designed for OLE Automation was pretty good, and quite novel for its time, but there were a few problems. The whole idea of late binding is that the caller does not have to know the details of the callee; the fact that the caller must know whether the calling convention is in-out vs out to avoid a memory leak is unfortunate.


A whole lot of nothing
A little more on nothing

What’s the difference between an empty sack and not having a sack at all? Programming languages often conflate these concepts, but separating them, as VB did, is quite confusing also.


Eric’s blog for January 279th, 2003

Once again we return to the many problems of software horology; this episode features commentary from Microsoft notable Raymond Chen, and Brendan Eich, the original designer of JavaScript and later CEO of Mozilla who stepped down after supporting anti-equality initiatives in California; he called the characterization of JavaScript as “scripting for Java objects” a “scam”, which I found astonishing at the time.

To clarify: I was fully aware for many years prior that JS was not originally designed to be “scripting for Java”, and that the languages did not have much in common beyond superficial syntactic details; that was all well-understood by all participants in the design process. The part that I found astonishing was the characterization of it as scam; that is, a deliberate attempt to mislead developers for profit.

Now that I think about it, I still find it pretty astonishing.


I can’t make my script do nothing
Spot the defect

Successfully pausing a program for some interval is surprisingly complicated, particularly in a world before asynchronous workflows were built into popular languages. Trying to spot defects in a poor implementation demonstrates the sorts of problems you can easily run into if you’re not careful.

It also shows how strongly pausing a program must be tied in to the execution model of the host; this theme recurs in “Why can’t I create the WScript object?”


These four posts seem unrelated but now that I look back, they have a common thread:

For-in revisited

Though ostensibly about looping constructs, the takeaway here really should have been that common tasks which are tedious or error-prone for developers should be in the libraries or language.

Let’s get explicit!

Some languages are do-what-I-mean languages, and some are do-what-I-say languages. There are pros and cons of both. This has consequences for language design.

Why can’t I create the WScript object?
WSC vs WSH

Even after working on or near scripting languages for seven years at this point, I still did not understand just how deep the “do what I mean” ethos runs in developers who primarily write administration scripts.

Worse, I lacked both the “beginner mind” of people who are coming to a complex system for the first time, and an appreciation of how unnecessarily confusing the alphabet soup of scripting technologies was for beginners and experienced professionals alike. I learned a lot from these frustrated user comments.


Lots more to come!
Fixing Random, part 26

We’ve been mostly looking at small, discrete distributions in this series, but we started this series by looking at continuous distributions. Now that we have some understanding of how to solve probability problems on simple discrete distributions and Markov processes, let’s go back to continuous distributions and see if we can apply some of these learnings to them. (Code for this episode can be found here.)

Let’s start with the basics. What exactly do we mean by a probability distribution function? So far in this series we’ve mostly looked at the discrete analog, a non-normalized probability mass function. That is, for an IDiscreteDistribution<T> we have a weight function that gives us a value for each T. The probability of sampling a particular T is the weight divided by the total weight of all Ts.

Of course, had we decided to go with double weights instead of integers, we could have made a normalized probability mass function: that is, the “weight” of each particular T is automatically divided by the total weight, and we get the probability out. In this scenario, the total weight adds up to 1.0.

We know from our exploration of the weighted integer distribution that we can think of our probability distributions as making a rectangle where various sub-rectangles are associated with particular values; we then “throw a dart” at the rectangle to sample from the distribution; where it lands gives us the sample.

Continuous distributions can be thought of in much the same way. Suppose we have a function from double to double, always non-negative, such that the total area under the curve is 1.0. Here are some examples:

double PDF1(double x) => x < 0.0 || x >= 1.0 ? 0.0 : 1.0;
double PDF2(double x) => x < 0.0 || x >= 1.0 ? 0.0 : 2 * x;
double PDF3(double x) => Exp(-(x * x)) / Sqrt(PI);

[Graph: the three PDFs]

What is the meaning of these as probability distributions? Plainly the “higher” the function, the “more likely” any particular value is, but what does it even mean to say that in our PDF2 distribution 0.5 is “less likely” than 0.6, in our PDF3 distribution it is “more likely”, and in our PDF1 distribution they are “equally likely”?

One way to think of it is that again, we “throw a dart” at the area under the curve. Given any subset of that area, the probability of the dart landing inside it is proportional to the area of the subset, and the value sampled is the x coordinate of the dart.

We can make this a little bit more formal by restricting our areas to little rectangles:

  • Take a value, say 0.5.
  • Now take a tiny offset, call it ε. Doesn’t matter what it is, so long as it is “pretty small”.
  • The probability of getting a sample value between 0.5 and 0.5+ε is approximately PDF(0.5)*ε. That is, the area of a thin rectangle.

That’s it! Let’s say we’ve got values 0.5 and 0.6, and an ε of 0.01. How do our three distributions compare with each other near these values?

  • With PDF1, the probability of getting a sample between 0.5 and 0.51 is approximately 0.010, and the probability of getting a sample between 0.6 and 0.61 is also approximately 0.010.
  • With PDF2, the probability of getting a sample between 0.5 and 0.51 is approximately 0.010, and the probability of getting a sample between 0.6 and 0.61 is approximately 0.012.
  • With PDF3, the probability of getting a sample between 0.5 and 0.51 is approximately 0.0044, and the probability of getting a sample between 0.6 and 0.61 is approximately 0.0039.

And moreover, the approximation becomes better as ε gets smaller.


Aside: What if we want to know the probability of getting a value between two doubles that are not “really close together”? In that case we can do “quadrature”: divide it up into a bunch of regions that are all “really small” and sum up the areas of all those little rectangles to get an approximation of the area. Or, we can use integral calculus to find the exact answer. But for the purposes of this series, we’re going to try to find ways to sample from a distribution that has a particular PDF without doing any quadrature approximations or any integral calculus.


All right, we understand what PDFs are. To summarize:

  • A PDF is a pure function from double to double.
  • A PDF is defined over the entire range of doubles.
  • PDF(x) >= 0.0 for all x.
  • Integrating the PDF over the entire real line gives us a total area of 1.0.
  • The probability that a sample is between x and x+ε is approximately PDF(x)*ε, if ε is small.

For our purposes though I’m going to relax those conditions a bit. In the code I want to explore, we’ll assume that a “non-normalized” PDF has these properties:

  • A PDF is a pure function from T to double​…
  • … defined over the entire range of T.
  • PDF(t) >= 0.0 for all t.
  • Integrating the PDF over all possible values of T gives us a total area of A, for some finite positive value A. That is, the distribution is not necessarily “normalized” to have an area of 1.0.  (You’d think that we can “easily” get a normalized distribution from a non-normalized distribution by dividing every weight by A but it is not always easy to know what that area is. But perhaps sometimes we do not need to know! Foreshadowing!)
  • The probability that a sample is within ε of t is approximately PDF(t)*ε/A if ε is small. (Of course, as I noted above, we might not know the normalization constant.)

Aside: That last point is a little vague; what does it mean to be “within ε of some t” if t is not double? We’re going to just not worry about that. It’ll be fine. Particularly since we’re mostly going to look at situations where T is double anyways.

This becomes useful for cases like the multidimensional case where T is (double, double), say, and by “within ε” we mean “inside a square of side √ε” or some such thing. Putting this all on a solid mathematical footing would take us too far off topic; you get the idea and let’s move on. I don’t know if we’ll get to multidimensional distributions in this series or not.


As we’ll see, continuous distributions are a much harder beast to tame than the small discrete distributions we’ve looked at before. However, there are still things we can do with them. Let’s start by making a definition in the type system for a distribution that has one of our relaxed PDFs. It’s awfully familiar:

public interface IWeightedDistribution<T> : IDistribution<T>
{
  double Weight(T t);
}

This means that every discrete distribution gets to be a weighted distribution pretty much “for free”. Let’s implement that:

public interface IDiscreteDistribution<T> : IWeightedDistribution<T>
{
  IEnumerable<T> Support();
  new int Weight(T t);
}

And now we have some minor taxes to pay; we’ll need to add

double IWeightedDistribution<T>.Weight(T t) => this.Weight(t);

to all our existing discrete distributions, but that is easily done.

We can also go back and fill in weight functions in our Normal class:

private static readonly double piroot = 1.0 / Sqrt(2 * PI);
public double Weight(double x) => 
  Exp(-(x - μ) * (x - μ) / (2 * σ * σ)) * piroot / σ;

And our StandardContinuousUniform class:

public double Weight(double x) =>
  0.0 <= x && x < 1.0 ? 1.0 : 0.0;

Easy peasy.

We’ve got a new variation on our old concept of weighting a distribution; let’s see where we can go with this.


Next time on FAIC: We know how to efficiently sample from any small, discrete distribution with integer weights, but distributions based on arbitrary PDFs are much harder beasts to tame. Given an arbitrary non-normalized PDF, can we sample from it?

 

Fixing Random, part 25

Last time on FAIC we implemented the Markov process distribution, which is a distribution over state sequences, where the initial state and each subsequent state is random. There are lots of applications of Markov processes; a silly one that I’ve wanted to do for literally decades is randomized text generation. (The silly code for this silly episode is here; bring your own corpus!)

In my youth I collected back issues of Scientific American, which I bought at the sadly now defunct Casablanca Books in my home town. I believe I sold all of them back when I moved out to Seattle; they were pretty bulky, though I might still have my autographed-by-Benoit-Mandelbrot copy of the issue about fractals somewhere in the house.

My favourite columns in old SciAms were of course the mathematical / programming games columns by Martin Gardner, Kee Dewdney and Douglas Hofstadter. The one germane to today’s topic was a Dewdney column about “Mark V. Shaney” — what we would now call a “bot”, that was running a Markov process generated by analysis of USENET posts, sampling from the resulting distribution to produce a “Markov chain” sequence (Mark V. Shaney, get it?) and posting the resulting generated text right back to USENET.

I’ve always wanted to replicate that, and today I have my chance. The basic technique is straightforward:

  • Start with a corpus of texts.
  • Break up the text into words.
  • Group the words into sentences.
  • Take the first word of each sentence and make a weighted distribution; the more times this word appears as a first word in a sentence, the higher it is weighted. This gives us our initial distribution.
  • Take every word in every sentence. For each word, generate a distribution of the words which follow it in the corpus of sentences. For example, if we have “frog” in the corpus, then we make a distribution based on the words which follow: “prince” twice, “pond” ten times, and so on. This gives us the transition function.
  • There are a number of ways to handle words which end sentences, but an easy way to do it is simply to have a bunch of “end words” where the transition function gives the empty distribution.
  • Sample repeatedly, and format the sequences of words back into sentences.
  • Hilarity!

OMG let’s do it.

All our probability distributions thus far have been immutable and we should continue that tradition. I’ll use the builder pattern to construct immutable objects. We’ll start by making a categorical distribution builder, and then make a Markov builder from that:

sealed class DistributionBuilder<T>
{
  private Dictionary<T, int> weights = new Dictionary<T, int>();
  public void Add(T t)
  {
    weights[t] = weights.GetValueOrDefault(t) + 1;
  }
  public IDistribution<T> ToDistribution()
  {
    var keys = weights.Keys.ToList();
    var values = keys.Select(k => weights[k]);
    return keys.ToWeighted(values);
  }
}


Aside: Recall that we do not need to worry that we’ve closed over a member that is a mutable collection here, because ToWeighted is going to call ToList on the values.
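
As a quick illustration of the builder in action (hypothetical usage, echoing the frog example above):

var b = new DistributionBuilder<string>();
b.Add("pond");
b.Add("pond");
b.Add("prince");
// "pond" is now twice as likely as "prince" in the resulting
// categorical distribution.
var d = b.ToDistribution();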


Our Markov process builder is only slightly more complicated:

sealed class MarkovBuilder<T>
{
  private DistributionBuilder<T> initial =
    new DistributionBuilder<T>();
  private Dictionary<T, DistributionBuilder<T>> transitions =
    new Dictionary<T, DistributionBuilder<T>>();
  public void AddInitial(T t)
  {
    initial.Add(t);
  }
  public void AddTransition(T t1, T t2)
  {
    if (!transitions.ContainsKey(t1))
      transitions[t1] = new DistributionBuilder<T>();
    transitions[t1].Add(t2);
  }
  public Markov<T> ToDistribution()
  {
    var init = initial.ToDistribution();
    var trans = transitions.ToDictionary(
      kv => kv.Key,
      kv => kv.Value.ToDistribution());
    return Markov<T>.Distribution(
      init,
      k => trans.GetValueOrDefault(k, Empty<T>.Distribution));
  }
}

Super! Now all we need is the complete works of Shakespeare, thank you Project Gutenberg. We don’t need to load the whole five megs into memory at once; we can process it line-at-a-time thanks to our builder. We’ll start by splitting up the line stream into a word stream:

static readonly char[] punct =
  "<>,*-()[#]@:%\"/';_&}".ToCharArray();
static IEnumerable<string> Words(
    this IEnumerable<string> lines) =>
  from line in lines
  from word in line.Split()
  let raw = word.Trim(punct)
  where raw != ""
  select raw;


Exercise: This is a pretty crappy word-splitting algorithm. Basically I’m stripping out any leading or trailing punctuation but I’m leaving in the periods, exclamation marks and question marks. This means that I’m stripping out the trailing single quotes in Shakespearean contractions like th’.

I’m also not canonicalizing the casing; I’ll be treating “The” and “the” as different words, and so on. This is not necessarily the best approach.

Can you come up with a better word-splitting algorithm than this one? This will do for my silly purposes, but it would be nice to have a better algorithm.


A generalized “split a sequence into subsequences” operator is a handy tool to have that is not included in the LINQ sequence operators; let’s make one:

public static IEnumerable<List<T>> Split<T>(
  this IEnumerable<T> items,
  Func<T, bool> last)
{
  var list = new List<T>();
  foreach (T item in items)
  {
    list.Add(item);
    if (last(item))
    {
      yield return list;
      list = new List<T>();
    }
  }
  if (list.Any())
    yield return list;
}

And now making a sentence-extractor is straightforward:

static IEnumerable<List<string>> Sentences(
    this IEnumerable<string> words) =>
  words.Split(w => {
    char c = w[w.Length - 1];
    return c == '.' || c == '!' || c == '?';
  });

Again, not great; this would make “Mr.” the end of a sentence for instance. Notice again that I’m leaving the punctuation in the word. Perhaps not the best choice but it will work for our purposes.

Put it all together:

var builder = new MarkovBuilder<string>();
var sentences = File.ReadLines("shakespeare.txt")
  .Words()
  .Sentences();
foreach (var sentence in sentences)
{
  builder.AddInitial(sentence[0]);
  for (int i = 0; i < sentence.Count - 1; i += 1)
    builder.AddTransition(sentence[i], sentence[i + 1]);
}
var markov = builder.ToDistribution();

Excellent. And now, release the Markovian monkey and we’ll see if Hamlet emerges:

Console.WriteLine(markov
  .Samples()
  .Take(100)
  .Select(x => x.SpaceSeparated())
  .NewlineSeparated());

Here’s a slightly-cleaned-up-for-legibility sample output; I’ve moved around the line breaks to make it easier to read.

SCENE I.

POET.

A very echo with him there. I say away!

HECTOR.

You know the body. Sir by my love him an end.
But your king? Sir John of a purse I grant
that thou brought forth in them with ears.
What’s the rebels with him. Well and others
orchards grew together.

BERTRAM.

Tis time his horse?

Dost thou seek how well but a shame this is
but with the Duke of Eve’s legacy be
the counter-gate which bears will stand
till he hath great world shall be a woman
with my ever is to cunning bade him
for recordation to death.

Exit GLOUCESTER

Re-enter Ghost.

CYMBELINE.

Grim-visag’d war but since thou turn the morn-dew
on this wretched soul elected him fair Helen
then let all our actors are you?

Now are both mine and said well his pitcher
by the heat of mine own desert?

What’s o’clock? I may not you do fight with you.
The mother breeds suspicion hath! Be soon shot.

Exeunt ACT III.

And I think I will leave it there.

It’s not Hamlet, to be sure, but it’s a start.

I also tried the complete works of Jane Austen:

The principal part of the result of your own line from any reply to the ecstasies of meeting he gravely but it in Hertfordshire and I mean to declare I have the days brought you must not. You must be happier than in Highbury with awful object as a greater steadiness had reached the picture of consequence to any objection to each other day came out.

Just… wow.


Next time on FAIC: We’ll turn our attention back to continuous distributions.