# Excessive explanation, part twenty-one

All right; we have a deductive system whereby the types of expressions can be justified with derivations. What we want is the ability to take an expression and the typing assumptions of an environment and deduce the type scheme of that expression:

```
6 The type assignment algorithm W

The type inference system itself does not provide an easy
method for finding, given A and e, a type-scheme σ such that
A ⊢ e:σ
```

Bummer!

```
We now present an algorithm W for this purpose. In
fact, W goes a step further. Given A and e, if W succeeds it
finds a substitution S and a type τ, which are most general
in a sense to be made precise below, such that

S A ⊢ e:τ.
```

Note that this is a bit subtle. Our inputs are a set of assumptions A — what are the type schemes of all the relevant identifiers? — and an expression e, whose type we wish to know. Our outputs are a substitution — what types should some of the type variables have? — and a type. Not, interestingly, a type scheme; a type.

```
To define W we require the unification algorithm of Robinson.

Proposition 3 (Robinson). There is an algorithm U which, given
a pair of types, either returns a substitution V or fails; further

(i) If U(τ,τ') returns V, then V unifies τ and τ', i.e. V τ = V τ'.

```

Note that there was a typo in the PDF of the paper here; it said `V τ = τ'`. I’ve corrected it here.

```
(ii) If S unifies τ and τ' then U(τ,τ') returns some V and
there is another substitution R such that S = RV .

Moreover, V involves only variables in τ and τ'.
```

What does this mean?

The first claim is the important one. Suppose we have two types, say `β→β` and `int→int`. The unification algorithm finds a substitution that, applied to both types, makes them equal; obviously here the substitution is `[int/β]`. That’s an easy one; you can, I’m sure, see how these could get quite complicated.

A slightly more complicated example illustrates that both types might need substitution: consider unifying `int → γ` with `β → β`. The substitution `V = [int/β, int/γ]` makes both types equal to `int → int`.

The unification algorithm can return “there is no such substitution” of course. There is no substitution that can turn `β→β` into `int→(int→int)`, for example.

I’m not going to go through the unification algorithm in detail; perhaps you can deduce for yourself how it goes. I will however say that unification of types in the ML type system is simple compared to unification in a type system like C#’s. In C# we must solve problems like “I have an `IEnumerable<Action<Tiger, T>>` and I need a unification with `List<blah blah blah...`”, and so all the mechanisms of covariant and contravariant reference conversions come into play. In the C# type inference algorithm we not only need to find substitutions, we also need to put bounds on them, like “substitute Mammal-or-any-subtype for T”.
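To make the examples above concrete, here is a minimal sketch of Robinson-style unification for the simple type language in the paper: type variables, a base type `int`, and function types. The encoding of types as tagged tuples and all the names (`unify`, `apply_subst`, and so on) are my own, not the paper’s.

```python
# Types are tagged tuples: ('var', name), ('con', name), ('fun', dom, cod).
# Substitutions are dicts mapping variable names to types.

class UnifyError(Exception):
    """Raised when no unifying substitution exists."""

def free_vars(ty):
    """The set of type variables occurring in a type."""
    if ty[0] == 'var':
        return {ty[1]}
    if ty[0] == 'con':
        return set()
    return free_vars(ty[1]) | free_vars(ty[2])

def apply_subst(s, ty):
    """Apply substitution s to a type."""
    if ty[0] == 'var':
        return apply_subst(s, s[ty[1]]) if ty[1] in s else ty
    if ty[0] == 'con':
        return ty
    return ('fun', apply_subst(s, ty[1]), apply_subst(s, ty[2]))

def unify(t1, t2, s=None):
    """Return a substitution V such that V t1 = V t2, or raise UnifyError."""
    s = {} if s is None else s
    t1, t2 = apply_subst(s, t1), apply_subst(s, t2)
    if t1 == t2:
        return s
    if t1[0] == 'var':
        if t1[1] in free_vars(t2):          # occurs check
            raise UnifyError(f"{t1[1]} occurs in {t2}")
        return {**s, t1[1]: t2}
    if t2[0] == 'var':
        return unify(t2, t1, s)
    if t1[0] == 'fun' and t2[0] == 'fun':
        s = unify(t1[1], t2[1], s)
        return unify(t1[2], t2[2], s)
    raise UnifyError(f"cannot unify {t1} with {t2}")

# Unifying β→β with int→int yields the substitution [int/β]:
beta, int_t = ('var', 'beta'), ('con', 'int')
V = unify(('fun', beta, beta), ('fun', int_t, int_t))
assert V == {'beta': int_t}
```

Asking it to unify `β→β` with `int→(int→int)` raises `UnifyError`, matching the “there is no such substitution” case above.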

The second claim notes that the substitution unifying two types need not be unique, but that this doesn’t matter, because the algorithm’s result is the most general one. That is, if V is the unifying substitution found by the algorithm, and S is any other substitution that also unifies the two types, then there is a substitution R that transforms V into S; that’s the meaning of S = RV.
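A small sketch can illustrate claim (ii). Here types are tagged tuples and substitutions are dicts mapping variable names to types; this encoding and the helper names (`apply_subst`, `compose`) are my own invention, not the paper’s.

```python
# ('var', n) is a type variable, ('con', n) a base type, ('fun', a, b)
# a function type; a substitution is a dict from variable names to types.

def apply_subst(s, ty):
    """Apply substitution s to a type."""
    if ty[0] == 'var':
        return apply_subst(s, s[ty[1]]) if ty[1] in s else ty
    if ty[0] == 'con':
        return ty
    return ('fun', apply_subst(s, ty[1]), apply_subst(s, ty[2]))

def compose(r, v):
    """The substitution RV: applying it is the same as v first, then r."""
    out = {x: apply_subst(r, t) for x, t in v.items()}
    out.update({x: t for x, t in r.items() if x not in out})
    return out

beta, gamma, int_t = ('var', 'beta'), ('var', 'gamma'), ('con', 'int')

# V = [γ/β] is a most general unifier of β→β and γ→γ ...
V = {'beta': gamma}
# ... but S = [int/β, int/γ] also unifies them.
S = {'beta': int_t, 'gamma': int_t}
# Taking R = [int/γ] recovers S as the composition RV:
R = {'gamma': int_t}
assert compose(R, V) == S
```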

```
We also need to define the closure of a type τ with respect to
assumptions A;

_
A(τ) = ∀α1, ... ,αn τ

where α1, ... ,αn are the type variables occurring free in τ
but not in A.
```

I am not super thrilled with this notation.

Basically we are saying here that given a set of assumptions A and a type τ that might contain free type variables, we can identify the free type variables not already free in A, and quantify them to make an equivalent type scheme.
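A minimal sketch of the closure operation, assuming a type scheme is represented as a pair (list of quantified variables, type) and A maps identifiers to schemes; the encoding and names here are mine, not the paper’s.

```python
# Types are tagged tuples: ('var', n), ('con', n), ('fun', dom, cod).
# A type scheme is (quantified_variable_names, type); the assumption
# set A is a dict mapping identifiers to schemes.

def free_vars(ty):
    """Type variables occurring in a type."""
    if ty[0] == 'var':
        return {ty[1]}
    if ty[0] == 'con':
        return set()
    return free_vars(ty[1]) | free_vars(ty[2])

def free_in_assumptions(a):
    """Variables free in some scheme of A, i.e. not bound by its ∀."""
    out = set()
    for qvars, ty in a.values():
        out |= free_vars(ty) - set(qvars)
    return out

def closure(a, ty):
    """The paper's A-bar(τ): quantify the variables free in τ but not in A."""
    qvars = free_vars(ty) - free_in_assumptions(a)
    return (sorted(qvars), ty)

# With A = { x : α } (so α is free in A), closing α→β quantifies only β:
A = {'x': ([], ('var', 'alpha'))}
tau = ('fun', ('var', 'alpha'), ('var', 'beta'))
assert closure(A, tau) == (['beta'], tau)
```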

Now, we’ve already said that there is a function U that takes two types and produces a substitution, and we’re going to say that there is a function W that takes a set of assumptions and an expression and produces a substitution and a type. So why on earth do the authors not simply say “there is a function B that takes a set of assumptions and a type and produces a type scheme” ???

I genuinely have no idea why authors of these sorts of papers do not adopt a consistent approach to functions and their notation. Rather than taking a straightforward approach and notating it `B A τ` or `B(A,τ)`, they notate it as `A-with-a-bar-on-top(τ)`, for no good reason I am aware of.

I started this series because I am asked fairly frequently about how to read the notation of these papers, and boy, do they ever not make it easy. Even a little thing like whether function applications are notated `U(τ,τ')` as the unification function is notated, or `S τ` as substitutions are notated, is frequently inconsistent. And now we have yet another kind of function, this time notated as, of all things, a bar over what ought to be one of the arguments.

Sigh. I’ll stick with the notation as it is described in the paper.

Next time: at long last, the base case of algorithm W!

## 4 thoughts on “Excessive explanation, part twenty-one”

• Oh good. Thanks! I’ll update the text to indicate that this is in fact a typo.

1. Joker_vD said:

Maybe the authors didn’t want any function to return a type scheme, those things are second-class citizens, after all! 🙂

And yeah, A-bared-of-tau for “closure of tau w.r.t. A” is really wonky. It looks like the closure of A is being fed a tau, which is not what actually happens at all.

As for inconsistent use of parens… it’s a long tradition. Initially, fx (without parens) *was* how you wrote function application (the leftover of this tradition can be seen with trigonometric functions), and when you needed to group an expression into an argument, you used an overbar — you can still see one used in square root. Then it changed, but not entirely. It’s the same thing as in programming, I guess: styles change drastically over time, but some traditions are kept intact, and it’s mostly unpredictable what’ll change and what’ll remain.