Bean Machine Retrospective, part 3

Introducing Beanstalk

Last time I introduced Bean Machine Graph, a second implementation of the PPL team’s Bayesian inference algorithm. We can compare and contrast the two implementations:

  • BMG emphasizes mechanisms: BMG programs are all about constructing a graph. BM emphasizes the business logic of the model.
  • A point which I did not emphasize yesterday but which will come up again in this series: BMG requires that many node constructions be annotated with semantic types such as “probability” or “positive real”. BM has no such “type ceremony”.
  • BMG programs represent operations such as addition as verbose calls to node factories. BM concisely represents sums as “x + y”, logs as “x.log()”, and so on; the sketch after this list makes the contrast concrete.

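To make that last point concrete, here is a sketch of the same two operations written in each style. I am reconstructing the BMG factory calls from memory of its Python bindings, so treat the exact identifiers (add_constant_pos_real, OperatorType.ADD, and so on) as assumptions:

    import torch
    from beanmachine import graph

    # Bean Machine: ordinary Python expressions on tensors.
    x = torch.tensor(0.5)
    y = torch.tensor(0.25)
    total = x + y    # a sum is just "+"
    log_x = x.log()  # a log is just a method call

    # Bean Machine Graph: every operation is an explicit node construction,
    # and constants must be annotated with semantic types up front.
    g = graph.Graph()
    gx = g.add_constant_pos_real(0.5)
    gy = g.add_constant_pos_real(0.25)
    g_total = g.add_operator(graph.OperatorType.ADD, [gx, gy])
    g_log_x = g.add_operator(graph.OperatorType.LOG, [gx])
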
In short, BMG is not a great user experience for data scientists, in much the same way that writing machine code is not a great experience for line-of-business programmers. We wished to automatically convert models written in Bean Machine into equivalent programs that construct a graph, thereby gaining the improved inference performance of BMG.

OK… how?

A program which translates one language to another is called a compiler, and that’s where I came in. We code-named the compiler “Beanstalk”. I started working on Beanstalk in October of 2019, given this brief:

Construct a compiler which takes as its input a Bean Machine model, queries and observations, and deduces from it the equivalent BMG graph (if there is one).

If we can solve that problem effectively then we get the best of both worlds. Data scientists can write models in pleasant Python syntax that integrates easily into their existing PyTorch workflows, while getting the inference performance afforded by BMG. The “concept count” — the number of APIs that the data scientist must understand — increases only slightly, but their workflows get faster. That is a win all around.

Does it work today?

Yes. Barely. If for some strange reason you want to play around with it, you can obtain the code from GitHub. The error messages are terrible, the compiler is “overfit” to the specific models we needed to compile, and it is quite slow, but for simple models it does work. At the time the team was shut down, we were actively extending both BMG and the compiler to handle more general models efficiently.

Early on we realized that the compiler is of course a means to an end, not an end in itself. What users really wanted was a more efficient inference engine that just happened to use BMG as its back end, so that’s the main API. You can call BMGInference().infer(), passing in queries, observations, and so on, just like any other Bean Machine inference engine; the compiler will analyze the source code of the queries, produce a graph, and call BMG’s inference engine to produce samples from the queried variables’ posteriors.
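
A minimal sketch of what that looks like, assuming the import path shown below and a trivial coin-fairness model invented for this illustration:

    import beanmachine.ppl as bm
    import torch.distributions as dist
    from torch import tensor
    from beanmachine.ppl.inference.bmg_inference import BMGInference

    # A toy model: how fair is a coin, given some observed flips?
    @bm.random_variable
    def fairness():
        return dist.Beta(2.0, 2.0)

    @bm.random_variable
    def flip(i):
        return dist.Bernoulli(fairness())

    # The call has the same shape as any other Bean Machine inference engine;
    # behind the scenes the model is compiled to a BMG graph first.
    samples = BMGInference().infer(
        queries=[fairness()],
        observations={flip(0): tensor(1.0), flip(1): tensor(0.0)},
        num_samples=1000,
    )
    print(samples[fairness()].mean())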

It has a few other useful APIs as well that are more like a traditional source-in, source-out compiler:

  • BMGInference().to_graph() produces a graph object without starting inference.
  • BMGInference().to_python() and to_cpp() produce Python and C++ source code fragments which construct the graph.
  • BMGInference().to_dot() produces a program in the DOT graph visualization language; that’s what I used to make the graph diagrams in this series. A short usage sketch follows this list.

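For instance, reusing the toy coin-fairness model from the sketch above, you could dump the compiled graph as DOT without running inference at all; everything here beyond the method name is an assumption:

    # Produce DOT source for the compiled graph instead of running inference.
    dot_source = BMGInference().to_dot(
        queries=[fairness()],
        observations={flip(0): tensor(1.0)},
    )
    print(dot_source)  # paste into Graphviz to render the diagram
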
Next time on FAIC: What were my initial assumptions when tasked with writing a “compiler” that extracts a graph from the source code of a model? And is it really a compiler, or is it something else?

5 thoughts on “Bean Machine Retrospective, part 3”

  1. This may be obvious, but it struck me that this sounds similar to the CLR or JVM, where bytecode _could_ be run directly but is usually converted to machine code by a JIT compiler for faster execution. Is that the sort of conceptual model you had in mind as you worked on creating an API that abstracted the compilation of a BMG graph? Or did you see the graph as useful on its own?

    Were graphs designed to be composable, either for performance or convenience?

    • We made that analogy a lot during the initial design, yes.

      Adding a function abstraction to make subgraphs composable was going to be a v2 feature. It was on the wish list but not necessary for v1.

  2. Sounds a bit like PostScript -> PDF translation, where you redefine all the I/O primitives to log what’s happening, and then run the input code to collect the output log. Loops get unrolled etc. automatically.
