Here’s a pair of code changes:

dog.drink(); —becomes—> if (dog != null) dog.drink();

dog.bark(); —becomes—> if (dog != null) dog.bark();

Now here’s the trick. **Suppose we represent each of these code changes as a tree.** The root has two children, “before” and “after”. The child of each side is the **abstract syntax tree** of the before and after code fragment, so our two trees are (sketched here in text; each leaf stands for the AST of its fragment):

```
        edit                                edit
       /    \                              /    \
  before    after                     before    after
    |         |                         |         |
dog.drink()  if (dog != null)      dog.bark()  if (dog != null)
                dog.drink()                       dog.bark()
```

**Now suppose we anti-unify those two trees; what would we get?** We’d get this pattern:

dog.h0(); —> if (dog != null) dog.h0();

Take a look at that. We started with two code changes, anti-unified them, and now we have *a template for making new code edits!* We could take this template and write a tool that transforms all code of the form “an expression statement that calls a method on a variable called dog” into the form “an if statement that checks dog for null and calls the method if it is not null”.

What I’m getting at here is: *if we have a pair of small, similar code edits, then we can use anti-unification to deduce a generalization of those two edits, in a form from which we could then build an automatic refactoring.*
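To make the “template” idea concrete, here is a hypothetical sketch in the style of the `Tree` and `Substitutions` code from the implementation posts elsewhere in this series. None of this is a real tool: `ApplyTemplate` and `TryMatch` are names I am inventing for illustration, and the matching is the naive recursive kind.

```csharp
// Sketch: does 'code' match 'pattern'? A subtree may be assigned to each
// hole, and the same hole must always match the same subtree.
static bool TryMatch(Tree pattern, Tree code, ref Substitutions bindings)
{
    if (pattern.Kind == "hole")
    {
        if (bindings.TryGetValue(pattern, out var bound))
            return bound == code; // hole already bound; must match consistently
        bindings = bindings.Add(pattern, code);
        return true;
    }
    if (pattern.Kind != code.Kind || pattern.Value != code.Value ||
        pattern.ChildCount != code.ChildCount)
        return false;
    foreach (var (p, c) in pattern.Children.Zip(code.Children, (a, b) => (a, b)))
        if (!TryMatch(p, c, ref bindings))
            return false;
    return true;
}

// Sketch: if 'code' matches the "before" pattern, instantiate the "after"
// pattern by filling each of its holes with the matched subtree.
static Tree ApplyTemplate(Tree before, Tree after, Tree code)
{
    var bindings = Substitutions.Empty;
    if (!TryMatch(before, code, ref bindings))
        return code; // the template does not apply; leave the code unchanged
    var result = after;
    foreach (var kv in bindings)                      // kv.Key is a hole,
        result = result.Substitute(kv.Key, kv.Value); // kv.Value its subtree
    return result;
}
```

Applied to the pattern above, matching “an expression statement that calls a method on `dog`” binds `h0` to the method name, and instantiating the “after” side produces the null-checked version.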

But what if we have *three* similar code edits?

edit 1: dog.drink(); —> if (dog != null) dog.drink();

edit 2: dog.bark(); —> if (dog != null) dog.bark();

edit 3: cat.meow(); —> if (cat != null) cat.meow();

Let’s take a look at the pairwise anti-unifications:

1 & 2: dog.h0(); —> if (dog != null) dog.h0();

1 & 3: h1.h0(); —> if (h1 != null) h1.h0();

2 & 3: the same.

Anti-unifying the first two makes a more specific pattern than any anti-unification involving the third. But the really interesting thing to notice here is that the anti-unification of 1&3 (which is the same as that of 2&3) is itself a generalization of the anti-unification of 1&2!

Maybe that is not 100% clear. Let’s put all the anti-unifications into a tree, where the more general “abstract” patterns are at the top, and the individual “concrete” edits are at the leaves (again sketched in text):

```
             h1.h0(); —> if (h1 != null) h1.h0();
                /                         \
  dog.h0(); —> if (dog != null) dog.h0();   edit 3
       /                   \
   edit 1                edit 2
```

*Each parent node is the result of anti-unifying its children.* This kind of tree, where the leaves are specific examples of a thing and each non-leaf node is a generalization of everything below it, is called a **dendrogram**; dendrograms are very useful when trying to visualize the output of a hierarchical clustering algorithm.
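To make the dendrogram idea concrete, here is a minimal sketch of greedy agglomerative clustering in which anti-unification plays the role of the merge step. This is my own illustration, not the research tool described below: the `Node` type and the brute-force pair selection are invented for the sketch, and the `distance` metric (tree edit distance, say) is left as a parameter. `Tree` and `Antiunify` are from the implementation elsewhere in this series.

```csharp
sealed class Node
{
    public Tree Pattern;     // the pattern at this node; a leaf holds a concrete edit
    public Node Left, Right; // children in the dendrogram; null for leaves
    public Node(Tree pattern, Node left = null, Node right = null)
    { Pattern = pattern; Left = left; Right = right; }
}

// Sketch only; assumes at least one edit in the corpus.
static Node BuildDendrogram(List<Tree> edits, Func<Tree, Tree, int> distance)
{
    var nodes = edits.Select(e => new Node(e)).ToList();
    while (nodes.Count > 1)
    {
        // Brute force: find the closest pair of current clusters.
        var (bi, bj) = (0, 1);
        for (int i = 0; i < nodes.Count; i += 1)
            for (int j = i + 1; j < nodes.Count; j += 1)
                if (distance(nodes[i].Pattern, nodes[j].Pattern) <
                    distance(nodes[bi].Pattern, nodes[bj].Pattern))
                    (bi, bj) = (i, j);
        // Merge the pair; anti-unification computes the parent's pattern.
        var (g, _, _) = Tree.Antiunify(nodes[bi].Pattern, nodes[bj].Pattern);
        var merged = new Node(g, nodes[bi], nodes[bj]);
        nodes.RemoveAt(bj); // bj > bi, so remove the later index first
        nodes.RemoveAt(bi);
        nodes.Add(merged);
    }
    return nodes[0];
}
```

The quadratic pair search is exactly why doing this for hundreds of thousands of edits is the hard part, as discussed next.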

Now imagine that we took hundreds, or thousands, or hundreds of thousands of code edits, and somehow managed to work out a useful dendrogram of anti-unification steps for *all* of them. This is a computationally difficult problem, and in a future episode, I might describe some of the principled techniques and unprincipled hacks that you might try to make it computationally feasible. But just suppose for the moment we could. Imagine what that dendrogram would look like. At the root we’d have the most general anti-unification of our before-to-after pattern:

h0 —> h1

Which is plainly useless. At the leaves, we’d have all of the hundreds of thousands of edits, which are not useful in themselves. **But the nodes in the middle are pure gold! They are all the common patterns of code edits that get made in this code base, in a form that you could turn into a refactoring or automatic fix template. The higher in the tree they are, the more general they are.**

You’ve probably deduced by now that this is not a mere flight of fancy; I spent eight months working on a tiny research team to explore the question of whether this sort of analysis is possible at the scale of a modern large software repository, and I am pleased to announce that indeed it is!

We started with a small corpus of code changes that were made in response to a static analysis tool (Infer) flagging Java code as possibly containing a null dereference, built tools to extract the portions of the AST which changed, and then did a clustering anti-unification on the corpus to find patterns. (How the AST extraction works is also very interesting; we use a variation on the Gumtree algorithm. I might do a blog series about that later.) It was quite delightful that the first things that popped out of the clustering algorithm were patterns like:

h0.h1(); —> if (h0 != null) h0.h1();

h0.h1(); —> if (h0 == null) return; h0.h1();

h0.h1(); —> if (h0 == null) throw …; h0.h1();

if (h0.h1()) h2; —> if (h0 != null && h0.h1()) h2;

and a dozen more variations. You might naively think that removing a null dereference is easy, but there are a great many ways to do it, and we found most of them in the first attempt.

I am super excited that this tool works at scale, and we are just scratching the surface of what we can do with it. Just a few thoughts:

- Can it find patterns in bug fixes more complex than null-dereference fixes?
- Imagine, for example, if you could query your code repository and ask “*what was the most common novel code change pattern last month?*” This could tell you if there was a large-scale code modification but the developer missed an instance of it. Most static analysis tools are of the form “find code which fails to match a pattern”; this is a tool for finding new patterns *and* the AST operations that apply the pattern!
- You could use it as a signal to determine whether there are new bug fix patterns emerging in the code base, and use them to drive better developer education.
- And many more; **if you had such a tool, what would you do with it? Leave comments please!**

The possibilities of this sort of “big code” analysis are fascinating, and I was very happy to have played a part in this investigation.

I have a lot of people to thank: our team leader Satish, who knows everyone in the code analysis community, our management Erik and Joe who are willing to take big bets on unproven technology, my colleagues Andrew and Johannes, who hit the ground running and implemented some hard algorithms and beautiful visualizations in very little time, our interns Austin and Waruna, and last but certainly not least, the authors of the enormous stack of academic papers I had to read to figure out what combination of techniques might work at FB scale. I’ll put some links to some of those papers below.

- An Algorithm of Generalization in Positive Supercompilation (which lays out the anti-unification algorithm very concisely, though for a completely different purpose than I’ve used it for here.)
- Anti-unification algorithms and their applications in program analysis
- Fine-grained and Accurate Source Code Differencing
- History Driven Program Repair
- Learning Quick Fixes from Code Repositories
- Learning Syntactic Program Transformations from Examples
- Mining Fix Patterns For FindBugs Violations
- RTED: A Robust Algorithm for the Tree Edit Distance
- Static Automated Program Repair for Heap Properties

The function returns those three things, and they do not have any particular semantic connection to each other aside from being the solution to this problem, so let’s try returning them as a tuple.

This seems like a good place to try out nested functions in C# 7, since each rule is logically its own function, but also only useful in the context of the algorithm; there’s no real reason to make these private methods of the class since no other code calls them. Also, they’re logically manipulating the local state of their containing function.

We’ll start by setting up the initial state as being the most general generalization:

```csharp
public static (Tree, Substitutions, Substitutions)
    Antiunify(Tree s, Tree t)
{
    var h = MakeHole();
    var generalization = h;
    var sSubstitutions = Substitutions.Empty.Add(h, s);
    var tSubstitutions = Substitutions.Empty.Add(h, t);
```

Recall the first rule seeks situations where there is a substitution that is insufficiently specific. We want to go until no more rules apply, so we’ll have this return a Boolean indicating whether the rule was applied or not.

```csharp
    bool RuleOne()
    {
        var holes = from subst in sSubstitutions
                    let cs = subst.Value
                    let ct = tSubstitutions[subst.Key]
                    where cs.Kind == ct.Kind
                    where cs.Value == ct.Value
                    where cs.ChildCount == ct.ChildCount
                    select subst.Key;
        var hole = holes.FirstOrDefault();
        if (hole == null)
            return false;
        var sTree = sSubstitutions[hole];
        var tTree = tSubstitutions[hole];
        sSubstitutions = sSubstitutions.Remove(hole);
        tSubstitutions = tSubstitutions.Remove(hole);
        var newHoles =
            sTree.Children.Select(c => MakeHole()).ToList();
        foreach (var (newHole, child) in newHoles.Zip(
            sTree.Children, (newHole, child) => (newHole, child)))
            sSubstitutions = sSubstitutions.Add(newHole, child);
        foreach (var (newHole, child) in newHoles.Zip(
            tTree.Children, (newHole, child) => (newHole, child)))
            tSubstitutions = tSubstitutions.Add(newHole, child);
        generalization = generalization.Substitute(
            hole, new Tree(sTree.Kind, sTree.Value, newHoles));
        return true;
    }
```

Notice that we’re using tuples to iterate over two sequences of equal size via zip. The code seems inelegant to me in a subtle way. The fundamental issue here is that C# has always had mutable tuples ever since version 1.0; it just called them “argument lists”, and that’s weird. It has always struck me as bizarre that C# requires you to pass an argument tuple, but that it gives you no syntax for manipulating that tuple in any way other than extracting the arguments from it or mutating them. You cannot treat what is logically a tuple as a tuple; instead you have to write code that explicitly constructs a real tuple out of the logical tuple, and you end up writing what looks like it ought to be an identity:

```csharp
(newHole, child) => (newHole, child)
```

For that matter, why do we need to zip at all? In this particular example it would be nice if the tuple syntax carried over into foreach loops; imagine if instead of that ugly zip code we could just write

```
foreach (var newHole, var child in newHoles, sTree.Children)
```

Zipping is only necessary here because the language lacks the feature of treating tuples as values consistently across the language. I’m hoping there will be further improvements in this area in C# 8.
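(As it happens, later versions of .NET added a two-argument `Zip` overload that pairs the two sequences into tuples directly; assuming that overload is available, the identity lambda disappears:

```csharp
// Assumes the Enumerable.Zip overload that returns a sequence of tuples,
// available in later versions of .NET than this code targets.
foreach (var (newHole, child) in newHoles.Zip(sTree.Children))
    sSubstitutions = sSubstitutions.Add(newHole, child);
```

Still not the language feature I want, but it removes the fake identity.)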

But I digress. We’ve implemented the first rule, and the second is even more straightforward. Here we are looking for redundant holes and removing them:

```csharp
    bool RuleTwo()
    {
        var pairs = from s1 in sSubstitutions
                    from s2 in sSubstitutions
                    where s1.Key != s2.Key
                    where s1.Value == s2.Value
                    where tSubstitutions[s1.Key] == tSubstitutions[s2.Key]
                    select (s1.Key, s2.Key);
        var (hole1, hole2) = pairs.FirstOrDefault();
        if (hole1 == null)
            return false;
        sSubstitutions = sSubstitutions.Remove(hole1);
        tSubstitutions = tSubstitutions.Remove(hole1);
        generalization = generalization.Substitute(hole1, hole2);
        return true;
    }
```

Quite fine. There is a small code smell here, though: tuples are value types, and so the “default” if there is no pair of holes that matches like this is `(null, null)`; that’s the condition that we’re using to check whether the rule applies.

And now the outer loop of the algorithm is trivial. We keep applying rules until we are in a situation where neither applies.

```csharp
    while (RuleOne() || RuleTwo())
    { /* do nothing */ }

    return (generalization, sSubstitutions, tSubstitutions);
}
```

It’s slightly distasteful to have RuleOne and RuleTwo useful for both their side effects and their values, but really their values are only being used for control flow, not for the value that was computed, so I feel OK about this.

Let’s try it out! Again we’ll make a couple of local helper functions:

```csharp
static void Main()
{
    Tree Cons(params Tree[] children) =>
        new Tree("call", "cons", children);
    Tree Literal(string value) =>
        new Tree("literal", value);
    var one = Literal("1");
    var two = Literal("2");
    var three = Literal("3");
    var nil = Literal("nil");
    var s = Cons(Cons(one, two), Cons(Cons(one, two), nil));
    var t = Cons(three, Cons(three, nil));
    var (generalization, sSubstitutions, tSubstitutions) =
        Tree.Antiunify(s, t);
    Console.WriteLine(generalization);
    Console.WriteLine(sSubstitutions.LineSeparated());
    Console.WriteLine(tSubstitutions.LineSeparated());
}
```

And when we run it, we get the right answer:

```
cons(h1,cons(h1,nil))
cons(1,2)/h1
3/h1
```

Nice!

**Next time on FAIC:** Why is this useful?

- In the previous post I worked an example on *function calls*; in this code, we’ll do the algorithm on *syntax trees*. Hopefully it is obvious that they’re equivalent.
- As I prefer, I’ll work with immutable data structures whenever possible.
- This code is intended to illustrate the concepts; there are numerous places where it could be made faster or more memory-efficient. Those are left as exercises.
- There’s a small amount of boilerplate code because I want value equality on immutable trees. It’s irritating to write, but we’ll do it.
- WordPress turns quotation marks into “smart quotes” automatically and I don’t remember how to turn it off. VEXING.

Let’s get through the boring code quickly in this episode, and then we can look in more detail at the algorithm proper in the next episode. **As often is the case, if we get the boring boilerplate infrastructure right, then the algorithm reads very clearly.**

I want to be able to make new, unique “holes”; a class that is purpose-built to count off numbers is useful for that. I’m unlikely to have two billion holes, so the fact that it wraps around is irrelevant; I could always swap it out for longs if I had to.

```csharp
internal sealed class Counter
{
    private int count = 0;
    public int Next()
    {
        int current = count;
        count += 1;
        return current;
    }
}
```

Yes I know that `++` exists. I do not like that thing.

I originally thought that I’d make a “substitution” type that is logically a (Tree, Tree) tuple, but then I realized that the only time I use substitutions is when looking them up in a collection of substitutions. I’ll therefore just use an immutable dictionary from trees to trees as my collection of substitutions, and the key-value pair as my substitution.

```csharp
using Substitutions =
    System.Collections.Immutable.ImmutableDictionary<Tree, Tree>;

internal static class Extensions
{
    public static string LineSeparated(this Substitutions s) =>
        string.Join("\n",
            s.Select(kv => $"{kv.Value}/{kv.Key}"));
}
```

All right. Let’s get through the boring parts of making an immutable syntax tree that has value equality. We’ll say that a tree is characterized by three things: it has a kind, it has a value, and it has any number of ordered children. We’ll store the children in an array but ensure that it is never exposed and hence never mutated.

```csharp
internal sealed class Tree
{
    public string Kind { get; }
    public string Value { get; }
    private readonly Tree[] children;
    public IEnumerable<Tree> Children =>
        this.children.Select(c => c);
    public int ChildCount => this.children.Length;

    public Tree(string kind, string value, params Tree[] children)
        : this(kind, value, (IEnumerable<Tree>)children)
    { }

    public Tree(
        string kind, string value, IEnumerable<Tree> children)
    {
        this.Kind = kind;
        this.Value = value;
        this.children = children.ToArray();
    }

    public static bool operator ==(Tree a, Tree b) =>
        ReferenceEquals(a, b) || !(a is null) && a.Equals(b);
    public static bool operator !=(Tree a, Tree b) => !(a == b);
    public override bool Equals(object obj) =>
        obj is Tree t &&
        t.Kind == this.Kind &&
        t.Value == this.Value &&
        t.Children.SequenceEqual(this.Children);
    public override int GetHashCode() =>
        HashCode.Combine(this.Kind, this.Value,
            this.children.Aggregate(0, HashCode.Combine));
```

I am loving the “is” patterns but C# really needs `a !is null` or `a is not null` or something like that. This `!(a is null)` is ugly. Of course I cannot use `a != null` — do you see why? I’d have to use ReferenceEquals. There is an opportunity here for a more general feature of “match the negation of this pattern”.

Printing out trees is straightforward; we’ll just print them out in their function call form:

```csharp
    public override string ToString() =>
        this.ChildCount == 0 ?
            this.Value :
            $"{this.Value}({string.Join<Tree>(',', this.children)})";
```

Given a substitution, what is the tree after the substitution is applied?

```csharp
    public Tree Substitute(Tree original, Tree replacement) =>
        this == original ?
            replacement :
            new Tree(this.Kind, this.Value, this.Children.Select(
                e => e.Substitute(original, replacement)));
```

Easy peasy. Finally, I want a factory for holes:

```csharp
    private static readonly Counter counter = new Counter();
    public static Tree MakeHole() =>
        new Tree("hole", $"h{counter.Next()}");
```

And that does it for the boring boilerplate code.

**Next time on FAIC:** let’s implement anti-unification for real.

Recall our two example inputs:

```
s is cons(cons(1, 2), cons(cons(1, 2), nil))
t is cons(3, cons(3, nil))
```

So our initial condition is:

```
g  = h0
ss = { cons(cons(1, 2), cons(cons(1, 2), nil)) / h0 }
st = { cons(3, cons(3, nil)) / h0 }
```

Now we notice that rule 1 can be applied; there are two `cons` expressions both substituted for `h0`, so we move the `cons` into `g` and make the substitutions on the arguments to `cons`:

```
g  = cons(h1, h2)
ss = { cons(1, 2) / h1, cons(cons(1, 2), nil) / h2 }
st = { 3 / h1, cons(3, nil) / h2 }
```

Super. Now we notice that rule 1 applies again: we have `cons` expressions both substituted for `h2`, so we move the `cons` into `g`:

```
g  = cons(h1, cons(h3, h4))
ss = { cons(1, 2) / h1, cons(1, 2) / h3, nil / h4 }
st = { 3 / h1, 3 / h3, nil / h4 }
```

We are now in a situation where both rules apply.

Rule 1 applies because we can think of `nil` as being `nil()` — that is, a call that has exactly zero children. Thus there are two `nil` expressions both substituted for `h4`, so we can move the `nil` into `g` and introduce zero new holes for the zero arguments.

Rule 2 applies because `h1` and `h3` are redundant.

One of the nice things about this algorithm is that it doesn’t matter what order you apply the rules in; you always make progress towards the eventual goal. Let’s apply rule 1:

```
g  = cons(h1, cons(h3, nil))
ss = { cons(1, 2) / h1, cons(1, 2) / h3 }
st = { 3 / h1, 3 / h3 }
```

Rule 1 no longer applies, but rule 2 does. `h1` and `h3` are still redundant. Get rid of `h1`:

```
g  = cons(h3, cons(h3, nil))
ss = { cons(1, 2) / h3 }
st = { 3 / h3 }
```

No more rules apply, and we’re done; we’ve successfully deduced that the most specific generalization of `s` and `t` is `cons(h3, cons(h3, nil))`, and given substitutions that produce `s` and `t`.

**Next time on FAIC:** Let’s implement it! And maybe we’ll take a look at a few new features of C# 7 while we’re at it.

The problem, recall, is this: given two expressions `s` and `t`, either in the form of operators, or method calls, or syntax trees, it doesn’t matter, find the most specific generalizing expression `g`, and substitutions for its holes that give back `s` and `t`. Today I’ll sketch the algorithm for that in terms of function calls, and next time we’ll implement it on syntax trees.
There are papers describing the first-order anti-unification algorithm that go back to the 1970s, and they are surprisingly difficult to follow considering what a simple, straightforward algorithm it is. Rather than go through those papers in detail, here’s a sketch of the algorithm:

Basically the idea of the algorithm is that we start with the most general possible anti-unification, and then we gradually refine it by repeatedly applying rules that make it more specific and less redundant. When we can no longer improve things, we’re done.

So, our initial state is:

```
g  = h0
ss = { s / h0 }
st = { t / h0 }
```

That is, our generalization is just a hole, and the substitutions which turn it back into `s` and `t` are just substituting each of those for the hole.

We now apply three rules that transform this state; we keep on applying rules until no more rules apply:

**Rule 1:** If `ss` contains `f(s1, ... sn)/h0` and `st` contains `f(t1, ... tn)/h0`, then `h0` is **not specific enough**.

- Remove the substitution from `ss` and add substitutions `s1/h1, ... sn/hn`.
- Remove the substitution from `st` and add `t1/h1, ... tn/hn`.
- Replace all the `h0` in `g` with `f(h1, ... hn)`.

**Rule 2:** If `ss` contains `foo/h0` and `foo/h1`, and `st` contains `bar/h0` and `bar/h1`, then `h0` and `h1` are **redundant**. (Here foo and bar are arbitrary expressions.)

- Remove `foo/h0` and `bar/h0` from their substitution sets.
- Replace `h0` in `g` with `h1`.

**Rule 3:** If `ss` and `st` both contain `foo/h0`, then `h0` is **unnecessary**.

- Remove the substitution from both sets.
- Replace all `h0` in `g` with `foo`.

These rules are pretty straightforward, but if we squint a little, we can simplify them even more! Rule 3 is just an optimization of repeated applications of rule 1, provided that we consider an expression like “nil” to be equivalent to “nil()”. This should make sense; a value is logically the same as a function call that takes no arguments and always returns that value. So in practice we can eliminate rule 3 and just apply rules 1 and 2.
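To see the equivalence concretely: suppose `ss` contains `nil/h4` and `st` also contains `nil/h4`. Treating `nil` as the zero-argument call `nil()`, rule 1 matches the same “function” on both sides, removes the substitutions, introduces zero new holes for the zero arguments, and replaces `h4` in `g` with `nil`: exactly what rule 3 would have done in one step.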

That’s it! Increase specificity and remove redundancy; keep doing that until you cannot do it any more, and you’re done. Easy peasy. The fact that this algorithm converges on the best possible solution, and the fact that you can apply the rules in any order and still get the right result, are facts that I’m not going to prove in this space.

**Next time on FAIC:** We’ll work an example out by hand, just to make sure that it’s all clear.

I got into this series of posts because I wanted to talk about anti-unification, and it is hard to say what anti-unification is if you don’t know what unification is. Anti-unification is kinda, sorta, but not *exactly* the opposite problem. The anti-unification problem is:

Given two input expressions (often, but not necessarily, without holes), call them `s` and `t`, find a *generalizing* expression `g` that does have holes, and two substitutions; the first substitution makes the result expression equal to the first input, and the second substitution makes it equal to the second input.

Let’s look at an example. Suppose our inputs are

```
s is (1 :: 2) :: (1 :: 2) :: nil
t is 3 :: 3 :: nil
```

If you don’t like the `::` operator, we could equivalently think of these as function calls:

```
s is cons(cons(1, 2), cons(cons(1, 2), nil))
t is cons(3, cons(3, nil))
```

Or, if you prefer, as trees:

```
s is      cons               t is   cons
         /    \                    /    \
      cons     cons               3     cons
      /  \     /  \                     /  \
     1    2  cons  nil                 3   nil
             /  \
            1    2
```

and the question is: what is an expression with holes such that there are substitutions for the holes that produce both s and t? Plainly the generalizing expression is

```
cons(h0, cons(h0, nil))
```

and the substitutions are `cons(1, 2) / h0` to make `s`, and `3 / h0` to make `t`. (Recall that our standard notation for substitutions is to separate the value being substituted and the hole to substitute by a slash.)

You might have noticed that the unification problem might have no solution; there might be no possible set of substitutions for all the variables that make the statements all true. But the anti-unification problem always has at least one solution: the generalized expression `h0` always works, and of course the substitutions are `s/h0` and `t/h0`.

Thus we need to make the anti-unification problem a little bit harder for it to be interesting: we want the *most specific generalization*, not just any generalization.

It was pretty clear that unification on equations could be generalized to multiple equations. Similarly, I hope it is clear that anti-unification on two expressions can be generalized to any number of expressions; we want the generalizing expression and a substitution for each input expression.

And of course just as there was *higher-order unification* for unification problems involving lambdas, and *unification modulo theory* for problems involving arithmetic or other mathematical ideas, there are similar variations on anti-unification. For our purposes we’ll consider only first-order anti-unification, which is the easy one.

Why is anti-unification useful? I’ll get into that in a later post. Now that we know what first-order anti-unification is, can we devise and implement an algorithm that takes in expressions and gives us the most specific anti-unifying expression?

**Next time on FAIC:** we’ll sketch out the algorithm.

Sure, why not? Let’s look at a typical example from functional languages.

In many functional languages we have the binary operator `head :: tail`, which takes two values: the head of a list, say, a number, and the tail, say, a list of numbers, possibly empty. The result is a new list of numbers. `nil` is an expression that is used for the empty list. Suppose we make this equation:

`h0 :: h0 :: nil` = `2 :: h1`

Again, `h0` and `h1` are variables in the mathematical sense, not in the sense of elements of a programming language, and similarly, the equality is logical equality, not an operator in any programming language. From now on I’m going to label these “holes” with the letter h and a number to disambiguate them, so that it is clear when I am talking about a hole that needs something filled into it.

What we’re looking for here is “what expressions could we put into `h0` and `h1` that would make these two program fragments mean the same thing?” The only values that work are to substitute `2` for `h0` and `2 :: nil` for `h1`, which gives us `2 :: 2 :: nil` on both sides.

Maybe the answer was immediately obvious to you, and maybe it was not, but regardless it is not 100% clear whether there is a straightforward algorithm that gives you values for the holes when presented with such a unification problem.

The syntax might be throwing us off a bit, and it’s not very general. What if instead of an operator `::` we just called a function like we do in C#, a function that takes a head and a tail? Such a function is traditionally called “cons”, because it constructs a list:

`cons(h0, cons(h0, nil))` = `cons(2, h1)`

Again, remember that we are looking for *expressions* that we can sub in for the holes that make the equality true.

In this form it is a bit easier to see the algorithm. The first argument to `cons` on the left is `h0`, and the first argument on the right is `2`, so `h0` must be `2`. The second argument on the left is `cons(h0, nil)` and on the right is `h1`, so `h1` must be `cons(2, nil)`; perhaps you see how it goes from there.

We could also model this as a parse tree.

```
    cons                cons
   /    \              /    \
  h0    cons    =     2     h1
        /  \
      h0   nil
```

I find that with the tree diagram it becomes much easier to see what the solution is. I’m not going to go through the whole algorithm here, but maybe you see how it goes. Basically, we recurse through the tree, solving unification problems on the children, and then we check the results to see if there were any contradictions.

This algorithm that I’ve failed to adequately describe is called the *first-order unification algorithm*, and it can solve any simple unification problem in linear time; it produces a substitution if one exists or fails if it does not.
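To make that hand-waving slightly more concrete, here is a minimal sketch of first-order unification over the `Tree` type from the implementation posts elsewhere in this series. This is my own illustration under stated assumptions: trees whose `Kind` is `"hole"` are the variables, `Substitutions` is the immutable dictionary from those posts, and I have omitted the occurs check that a robust unifier needs.

```csharp
// Sketch: unify s and t, accumulating hole -> tree bindings.
// Returns false on a contradiction.
static bool TryUnify(Tree s, Tree t, ref Substitutions bindings)
{
    if (s.Kind == "hole")
    {
        if (bindings.TryGetValue(s, out var bound))
            return TryUnify(bound, t, ref bindings); // already bound; must agree
        bindings = bindings.Add(s, t);
        return true;
    }
    if (t.Kind == "hole")
        return TryUnify(t, s, ref bindings); // symmetric case
    if (s.Kind != t.Kind || s.Value != t.Value || s.ChildCount != t.ChildCount)
        return false; // different "functions" cannot unify
    foreach (var (cs, ct) in s.Children.Zip(t.Children, (a, b) => (a, b)))
        if (!TryUnify(cs, ct, ref bindings))
            return false; // a contradiction among the children
    return true;
}
```

Note that a binding produced this way can itself mention holes; on the `cons` example above it binds `h0` to `2` and `h1` to `cons(h0, nil)`, and applying the bindings transitively gives `cons(2, nil)`.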

What do I mean by “simple” unification problem? Well, notice that this simple algorithm would fail if given the problem

`add(h0, 10)` = `add(13, 1)`

because `h0` would unify with `13`, but `10` would not unify with `1`, and we’d get a unification failure, even though obviously there is a value for `h0` that makes the equation true. Unification algorithms that understand additional facts about arithmetic or other domains are “unification modulo theory”.

Similarly, unification algorithms that involve the lambda abstraction are “higher order unification” algorithms. The higher-order unification problem is in general not solvable, but that’s a topic for another day.

Similarly, it is, I hope, easy to see how unifying one equation generalizes to multiple equations: you just unify on each equation, check to see if you deduced contradictions, and then combine the solutions together.

All of this was meant to introduce what I really want to talk about, which is anti-unification. If unification is the process of finding values of variables that induce certain statements to be all true, what on earth could the *opposite* of that be?

**Next time on FAIC:** We’ll find out.

“Unification” is just a fancy word for “we have a set of statements that contain placeholder variables; find values for those variables that make every statement true”. That is to say, unification is a generalization of the problem of “solve this equation for x”.

The classic example from high school algebra is the system of linear equations, where we have something like

```
X + Y = 10
X - Y = 2
```

(As I might have noted before, we call these placeholders “variables”, though they bear only a small similarity to what we call “variables” in C# — that is, typed, mutable storage locations. As we did in the previous series, where we discussed “free” and “bound” type variables, I’ll continue to use “variable” in this traditional mathematical sense of a placeholder for a value to be supplied later via substitution, and not in the C# programming sense.)

The unification problem is to find values for X and Y such that the statements are both true; in this case if we substitute 6 for X and 4 for Y, both statements are true. Moreover, this is the unique such substitution; of course there are systems of equations where there are multiple substitutions, infinitely many substitutions, or no substitutions at all; in the latter case we say that unification fails.
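(Worked out: adding the two equations eliminates Y and gives 2X = 12, so X = 6, and then Y = 10 - X = 4.)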

A unification algorithm is one which takes in a set of statements with variables and returns substitutions for the variables that makes every statement true. As we know, the algorithm for unifying systems of linear equations on real numbers is straightforward, but as we make those equations more complicated — by restricting them to integers, by making them non-linear, and so on — then the algorithms get more and more complicated.

In our type system example, the statements which we wish to be true are called “assumptions”. An assumption can contain a type variable, which we denoted with a Greek letter. Our type inference algorithm’s goal was to use type unification to find a set of substitutions for those variables which makes the assumptions all true. An example that I gave was an assumption:

plus : int -> int -> int

And we have an expression we wish to assign a type to:

λx.(plus x x)

We add to our assumptions an assumption that contains a variable:

x:β

And then we run our type inference algorithm on the body of the lambda. It returns a substitution — that β is int, and therefore so is x, and now we have enough information to deduce that our expression is a function from int to int.
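To spell out the unification step in that example (my own expansion, in the notation above): the application `plus x x` requires the argument type of `plus`, which is `int`, to unify with the type of `x`, which is `β`. The unifier solves `β = int`; substituting through, `x:int`, the body `plus x x` has type `int`, and the lambda therefore has type `int -> int`.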

Here’s an interesting question. We’ve seen how to unify systems of linear equations, and we’ve seen how to unify types. Can we unify *arbitrary expressions in a programming language?*

**Next time on FAIC:** Yes, yes we can.

As I’m sure you’ve deduced, I batched up a whole long series of blog articles when things were relatively calm, knowing that things were about to get busy. I posted them gradually, and intended to finish up those series when things calmed down a little again, and then I stayed *super busy*!

I’m still super busy (and having a great time doing interesting work) but a large number of people have asked me recently when I’m going to start blogging again. And I do miss it. So I figure that I’ll try to post more often than I have been, but not necessarily more regularly.

Though I do hope to someday finish those long series on Z-machines and type inference that I started way back when, it’s been a very busy year and I have learned a tremendous amount that I’d love to share with you all. Thus I think I’ll start with some posts that are more like the classic days of Fabulous Adventures In Coding, where I’ll talk about some of the data structures and algorithms I’ve had to learn about and implement recently.

**Next time on FAIC:** My previous long series of blog posts discussed *type unification* in functional languages, but horrors, I never actually defined the word “unification”. And what on earth is *anti-unification*? We’ll find out!

(iv) If e is `let x = e₁ in e₂` then let W(A, e₁) = (S₁, τ₁) and W(S₁Aₓ ∪ {x : closure(S₁A)(τ₁)}, e₂) = (S₂, τ₂); then S = S₂S₁ and τ = τ₂. (The paper writes the closure operation as an overbar over S₁A; closure(S₁A)(τ₁) means the closure of τ₁ with respect to the assumptions S₁A, which is discussed below.)

Again let’s look at an example. Let’s say we want to infer a type for

let double_it = λx.(plus x x) in double_it foo

where we have `plus` as before, and `foo:int` in our set of assumptions. We begin by recursing on `λx.(plus x x)` with the current set of assumptions. Note that we do *not* want to remove `double_it` from our set of assumptions if it is there, because the expression might use `double_it`, meaning some *other* definition. If it does, then it uses the definition already in the assumptions; the “new” `double_it` is only in scope *after* the `in`.

Type inference tells us that the expression is of type `int → int`, so we add an assumption `double_it:int → int` to our set of assumptions, and compute the type of `double_it foo` under the new set of assumptions, which of course says that it is `int`. Since the type of a `let` expression is the type of the second expression, we’re done.

The reason we need the closure is: suppose type inference tells us that the first expression is of type `β→β`, and `β` is not free in `A`. (Remember, we always get a type, not a type scheme.) For the purposes of inferring the type of the right side of the `in`, we want the expression to have the type scheme `∀β β→β`.
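A standard example (mine, not from the quoted paper) of why this generalization matters: in `let id = λx.x in …`, inference gives `λx.x` the type `β→β`. If we added the bare assumption `id:β→β`, then a use of `id` at type `int` in the body would unify `β` with `int`, and a second use of `id` at type `bool` would then fail. With the closure we instead assume `id:∀β β→β`, each use of `id` instantiates a fresh copy of `β`, and both uses type-check.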

NOTE: When any of the conditions above is not met, W fails.

Though this is true, I think it would have been better to call out the two places in the algorithm where the failure can occur. They are in rule (i), which requires that an identifier have a type in the set of assumptions, and in rule (ii) which requires that type unification succeeds. Rules (iii) and (iv) never fail of their own accord; they only fail when the recursive application of W fails.
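For example, continuing the running example (my illustration): inferring a type for a lone identifier `bar` fails in rule (i) if `bar` has no type in the set of assumptions, and inferring a type for `plus x true` fails in rule (ii), because `bool` does not unify with `int`.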

Next time: more sketch proofs!
