We have four kinds of things in Exp: identifiers, function calls, lambdas and lets; and we have four corresponding rules in type inference. Here, finally, is the last recursive rule of the type inference algorithm:
(iv) If e is let x = e1 in e2 then let W(A, e1) = (S1, τ1) and W(S1 Ax ∪ {x : S1 A(τ1)}, e2) = (S2, τ2), where S1 A(τ1) is the closure of τ1 with respect to S1 A; then S = S2 S1 and τ = τ2.
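To make the four rules concrete, here is a minimal sketch of the whole algorithm in OCaml, with rule (iv) as the Let branch at the end. Nothing below is from the paper: the representations and the helper names (apply, compose, unify, instantiate, generalize) are my own choices, and int stands in for whatever base types you like.

```ocaml
(* A minimal sketch of Algorithm W in OCaml, with int as the only base type.
   The helper names (apply, compose, unify, instantiate, generalize) and the
   representations are mine; A, S and τ in the rules are env, subst and ty here. *)

type ty = TInt | TVar of string | TArrow of ty * ty
type scheme = Scheme of string list * ty            (* ∀ vars . ty *)

module StrMap = Map.Make (String)
module StrSet = Set.Make (String)

type subst = ty StrMap.t                            (* type variable -> type *)
type env = scheme StrMap.t                          (* identifier -> type scheme *)

type exp =
  | Var of string                                   (* identifier    *)
  | App of exp * exp                                (* function call *)
  | Lam of string * exp                             (* lambda        *)
  | Let of string * exp * exp                       (* let           *)

(* Apply a substitution to a type. *)
let rec apply (s : subst) = function
  | TInt -> TInt
  | TVar a as t -> (match StrMap.find_opt a s with Some u -> u | None -> t)
  | TArrow (d, r) -> TArrow (apply s d, apply s r)

(* Compose substitutions: apply (compose s2 s1) t = apply s2 (apply s1 t). *)
let compose (s2 : subst) (s1 : subst) : subst =
  StrMap.union (fun _ v _ -> Some v) (StrMap.map (apply s2) s1) s2

(* Substitution on schemes and assumptions; quantified variables are untouched. *)
let apply_scheme s (Scheme (vars, t)) =
  Scheme (vars, apply (List.fold_left (fun m v -> StrMap.remove v m) s vars) t)
let apply_env (s : subst) (a : env) : env = StrMap.map (apply_scheme s) a

(* Free type variables of types, schemes and assumption sets. *)
let rec ftv = function
  | TInt -> StrSet.empty
  | TVar a -> StrSet.singleton a
  | TArrow (d, r) -> StrSet.union (ftv d) (ftv r)
let ftv_scheme (Scheme (vars, t)) = StrSet.diff (ftv t) (StrSet.of_list vars)
let ftv_env a =
  StrMap.fold (fun _ sc acc -> StrSet.union acc (ftv_scheme sc)) a StrSet.empty

(* A supply of fresh type variables. *)
let counter = ref 0
let fresh () = incr counter; TVar (Printf.sprintf "t%d" !counter)

(* Instantiate a scheme: substitute fresh variables for the quantified ones. *)
let instantiate (Scheme (vars, t)) =
  apply (List.fold_left (fun m v -> StrMap.add v (fresh ()) m) StrMap.empty vars) t

(* The closure: quantify what is free in t but not free in the assumptions. *)
let generalize (a : env) (t : ty) : scheme =
  Scheme (StrSet.elements (StrSet.diff (ftv t) (ftv_env a)), t)

(* Unification; fails by raising on a mismatch or a failed occurs check. *)
let rec unify (t1 : ty) (t2 : ty) : subst =
  match t1, t2 with
  | TInt, TInt -> StrMap.empty
  | TVar a, t | t, TVar a ->
      if t = TVar a then StrMap.empty
      else if StrSet.mem a (ftv t) then failwith "occurs check"
      else StrMap.singleton a t
  | TArrow (d1, r1), TArrow (d2, r2) ->
      let s1 = unify d1 d2 in
      let s2 = unify (apply s1 r1) (apply s1 r2) in
      compose s2 s1
  | _ -> failwith "cannot unify"

(* Algorithm W; the Let branch is rule (iv). *)
let rec w (a : env) (e : exp) : subst * ty =
  match e with
  | Var x ->                                        (* rule (i): look up and instantiate *)
      (match StrMap.find_opt x a with
       | Some sc -> (StrMap.empty, instantiate sc)
       | None -> failwith ("unbound identifier " ^ x))
  | App (e1, e2) ->                                 (* rule (ii): infer both sides, unify *)
      let s1, t1 = w a e1 in
      let s2, t2 = w (apply_env s1 a) e2 in
      let beta = fresh () in
      let s3 = unify (apply s2 t1) (TArrow (t2, beta)) in
      (compose s3 (compose s2 s1), apply s3 beta)
  | Lam (x, body) ->                                (* rule (iii): assume a fresh type for x *)
      let beta = fresh () in
      let s1, t1 = w (StrMap.add x (Scheme ([], beta)) a) body in
      (s1, TArrow (apply s1 beta, t1))
  | Let (x, e1, e2) ->                              (* rule (iv): infer e1, close, infer e2 *)
      let s1, t1 = w a e1 in
      let a1 = apply_env s1 a in
      let s2, t2 = w (StrMap.add x (generalize a1 t1) a1) e2 in
      (compose s2 s1, t2)
```

The generalize call in the Let branch is the closure from the rule; everything else is the plumbing that the paper's notation leaves implicit.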
Again let’s look at an example. Let’s say we want to infer a type for

let double_it = λx.(plus x x) in double_it foo

where we have plus as before, and foo:int in our set of assumptions. We begin by recursing on λx.(plus x x) with the current set of assumptions. Note that we do not want to remove double_it from our set of assumptions if it is already there, because the expression might use double_it, meaning some other definition. If it does, then it uses the definition already in the assumptions; the “new” double_it is only in scope after the in.
Type inference tells us that the expression is of type int → int, so we add an assumption double_it:int → int to our set of assumptions, and compute the type of double_it foo under the new set of assumptions, which of course says that it is int. Since the type of a let expression is the type of the second expression, we’re done.
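If you want to check the walkthrough mechanically, here is the example encoded against the sketch above; reading “as before” as plus:int → int → int is my assumption, and the inferred type comes out as int.

```ocaml
(* Infer a type for: let double_it = λx.(plus x x) in double_it foo
   using the sketch above, starting from assumptions for plus and foo. *)
let assumptions : env =
  StrMap.empty
  |> StrMap.add "plus" (Scheme ([], TArrow (TInt, TArrow (TInt, TInt))))
  |> StrMap.add "foo" (Scheme ([], TInt))

let example =
  Let ("double_it",
       Lam ("x", App (App (Var "plus", Var "x"), Var "x")),
       App (Var "double_it", Var "foo"))

let () =
  let _subst, inferred = w assumptions example in
  (* The lambda gets int → int, double_it is added to the assumptions with
     that type, and the call double_it foo comes out as int. *)
  match inferred with
  | TInt -> print_endline "double_it foo : int"
  | _ -> print_endline "something went wrong"
```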
The reason we need the closure is this: suppose type inference tells us that the first expression is of type β→β, and β is not free in A. (Remember, we always get a type, not a type scheme.) For the purposes of inferring the type of the right side of the in, we want the expression to have type scheme ∀β β→β, so that each use of the identifier in the body can instantiate β independently.
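In terms of the sketch above, that is exactly what the (my, not the paper's) generalize helper computes: the quantified variables are the ones free in the type but not free in the assumptions.

```ocaml
(* The closure of β→β with respect to the assumptions. *)
let beta_to_beta = TArrow (TVar "beta", TVar "beta")

(* With no assumption mentioning beta, the closure is ∀β β→β ... *)
let closed = generalize StrMap.empty beta_to_beta
  (* Scheme (["beta"], TArrow (TVar "beta", TVar "beta")) *)

(* ... but if beta is free in the assumptions, it stays unquantified. *)
let not_closed =
  generalize (StrMap.singleton "y" (Scheme ([], TVar "beta"))) beta_to_beta
  (* Scheme ([], TArrow (TVar "beta", TVar "beta")) *)
```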
NOTE: When any of the conditions above is not met, W fails.
Though this is true, I think it would have been better to call out the two places in the algorithm where the failure can occur. They are in rule (i), which requires that an identifier have a type in the set of assumptions, and in rule (ii), which requires that type unification succeed. Rules (iii) and (iv) never fail of their own accord; they only fail when a recursive application of W fails.
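In the sketch above, which signals failure by raising an exception (one plausible choice, not the paper's), those two failure points look like this:

```ocaml
(* Rule (i) failure: an identifier with no assumption about it. *)
let () =
  try ignore (w StrMap.empty (Var "mystery"))
  with Failure msg -> print_endline msg          (* unbound identifier mystery *)

(* Rule (ii) failure: the call forces unification of int with a function type. *)
let () =
  let a = StrMap.singleton "foo" (Scheme ([], TInt)) in
  try ignore (w a (App (Var "foo", Var "foo")))
  with Failure msg -> print_endline msg          (* cannot unify *)
```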
Next time: more sketch proofs!
It’s kinda amusing that all this “substitute fresh variables instead of all the generalized ones”—”generalize all the free variables”—”repeat again” business is mainly because the authors didn’t want to deal with the generalized *types* — so they had to convert type schemes to types and re-generalize types back into type schemes at every subexpression.
But at least they didn’t have to deal instead with the full polymorphic lambda calculus of Reynolds; that’s a plus!
I’m thinking that Eric’s gotten busy again 🙂 Anxiously awaiting part 25 and beyond!
Still waiting for Eric to finish the Z-Machine implementation from last year!
Six months later and still watching this space for updates 🙂
Hope all’s well!
Well, that’s a year.
I also came here to say this.
Hope everything is going well Eric!
He continues to post on Stack Overflow, so at the very least he’s still alive.
https://stackoverflow.com/users/88656/eric-lippert?tab=answers&sort=newest