Excessive explanation, part nine

Last time we defined the grammar of our language “Exp”, which consists of identifiers, lambdas, function application and let expressions. This is a programming language. We also need a different language to formally describe types and “type schemes”. The paper says:

Note that types are absent from the language Exp. Assuming a set of
type variables α and of primitive types ι, the syntax of types τ and of
type-schemes σ is given by

τ ::= α | ι | τ → τ
σ ::= τ | ∀α σ

The high order bit here is that the language Exp is just expressions; we have no ability to declare new types in this language. So we assume for the purposes of type inference that there is an existing set of primitives — int, string, bool, whatever — that we can use as the building blocks. What those primitives are does not matter at all because the type inference algorithm never concerns itself with the difference between int and bool. The primitive types are meaningless to the type inference algorithm; all that matters is that we can tell them apart.

The first line in the grammar gives the syntax for types; a type tau can be one of three things.

  • A type can be a generic “type variable” — the T in List<T> in C#, for example — that we will notate alpha, beta, and so on.
  • A type can be one of a set of “primitive” types: int, string, whatever. Doesn’t matter.
  • A type can be a function type. Since every function takes exactly one argument and returns one value, a function type is characterized by its argument type and its return type.
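
If it helps to see the shape of this grammar in C# terms, here is a hypothetical sketch of types as a little class hierarchy. These class names are my invention, not anything from the paper:

// A type tau is one of three things...
abstract class TypeExp { }

// ... a type variable, alpha:
sealed class TypeVariable : TypeExp
{
    public string Name;       // "alpha", "beta", and so on
}

// ... a primitive type, iota:
sealed class PrimitiveType : TypeExp
{
    public string Name;       // "int", "string", "bool", whatever
}

// ... or a function type, tau -> tau:
sealed class FunctionType : TypeExp
{
    public TypeExp Argument;  // the type of the one argument
    public TypeExp Result;    // the type of the returned value
}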

Just as they did with the Exp language, the authors have omitted parentheses from the language of types. Unlike with the Exp language, they neglected to call out that they were doing so.

The second line in the grammar gives the syntax for type schemes.

What’s a type scheme? Recall earlier that we said that a type scheme is a bit like a generic type. More formally, a type scheme is either just a plain type (which may itself contain type variables), or a type scheme prefixed with a quantifier: “for any type alpha, …”.

A type-scheme ∀α₁...∀αₙ τ (which we may write ∀α₁...αₙ τ) has generic
type variables α₁...αₙ.

I’ve never liked “type variables” — I think variables should, you know, vary. The C# spec calls these type parameters, which is much clearer; a parameter is given a value by substituting an argument for it. But that’s the next episode!

Anyways, this sentence is just pointing out that for notational convenience we’ll collapse a run of universal quantifiers down to a single ∀ when we have a large number of type variables in a row. But we will always have the quantifiers on the left hand side of the scheme. A quantifier never appears in a type, only in a type scheme.
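
Continuing the hypothetical C# sketch from earlier, a type scheme is then just a possibly-empty list of quantified type variables on the left, plus a type on the right; the quantifiers live here and nowhere else:

using System.Collections.Generic;

// A hypothetical encoding of type schemes: "forall a1 ... forall an, tau".
sealed class TypeScheme
{
    public List<TypeVariable> Quantified;  // empty means "just a plain type"
    public TypeExp Body;                   // quantifiers never appear in here
}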

Something important to realize here is that types and type schemes are just another syntax for a language, like “Exp”, our expression language. A “type” in this conception is simply a string of symbols that matches the grammar of our type language.

An interesting point to call out here: note that this type system has no generic “classes”. There is no “list of T” in this system. We’ve got primitive types, we’ve got function types, and we’ve got generic functions, and that’s it.

Again, this is because we don’t need anything more complicated than generic functions in order to make type inference interesting. If we can solve the problem of inference in a world with generic functions, we can very easily extend the inference to generic types. So let’s keep it simple and not introduce generic types into our core language.

A monotype μ is a type containing no type variables.

This just introduces a new jargon term, to distinguish generic types from non-generic types.
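
In terms of the hypothetical classes sketched earlier, a monotype is a TypeExp with no TypeVariable anywhere inside it, which a simple recursion can check:

// True if the given type contains no type variables at all.
static bool IsMonotype(TypeExp type)
{
    if (type is TypeVariable)
        return false;
    var function = type as FunctionType;
    if (function != null)
        return IsMonotype(function.Argument) && IsMonotype(function.Result);
    return true;  // all that remains is a primitive type
}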

And that’s it for section two; that was a lot shorter than the introduction.

Next time: as promised, type substitution.

Excessive explanation, part two

We continue with my excessively detailed explanation of the seminal paper on the ML type inference system…

1 Introduction

This paper is concerned with the polymorphic type discipline of ML, 
which is a general purpose functional programming language, although 
it was first introduced as a metalanguage (whence its name) for 
constructing proofs in the LCF proof system. [4]

These bracketed numbers are not footnotes; they are references to other papers. See the original paper’s list of references if you want to follow up on these references.


By “type discipline” we basically mean the same thing as “type system”. That is, there is some way of determining the “type” of the expressions in a programming language, and detecting when the programs are erroneous due to violations of the typing rules.

By “polymorphic” here we mean what a C# programmer would mean by “generic”. There are many kinds of polymorphism in computer programming. “Subtype polymorphism” is what object-oriented programmers typically think of when someone says “polymorphism”:

void M(Animal a) { ... }
...
Giraffe g = new Giraffe();
M(g); // No problem.

That’s not the kind of polymorphism we’re talking about here. Rather, we’re talking about:

void N<T>(List<T> list) { ... }
...
List<int> g = ...;
N(g); // No problem.

This is often called “parametric polymorphism”, because the method takes a “type parameter”.

ML has parametric polymorphism, not subtype polymorphism.


A metalanguage is a language used to implement or describe another language. In this case, ML was first created to be the metalanguage for LCF, “Logic for Computable Functions”. The purpose of LCF was to automatically find proofs for mathematical theorems; you could write little programs in ML that described to the LCF system various strategies for trying to get a proof for a particular theorem from a set of premises. But as the paper notes, ML and its descendants are now general-purpose programming languages in their own right, not just implementation details of another language.


The type discipline was studied in [5] where it was shown to be 
semantically sound, in a sense made precise below, but where one 
important question was left open: does the type-checking algorithm — 
or more precisely the type assignment algorithm (since types 
are assigned by the compiler, and need not be mentioned by the 
programmer) — find the most general type possible for every expression 
and declaration?

As the paper notes, we’ll more formally define “sound” later. But the basic idea of soundness comes from logic. A logical deduction is valid if every conclusion follows logically from the premises. But an argument can be valid and still come to a false conclusion. For example, “All men are immortal; Socrates is a man; therefore Socrates is immortal” is valid, but not sound. The conclusion follows logically, but it follows logically from a false premise. And of course an invalid argument can still happen to reach a true conclusion that does not follow from the premises. A valid deduction with all true premises is sound.

Type systems are essentially systems of logical deduction. We would like to know that if a deduction is made about the type of an expression on the basis of some premises and logical rules, that we can rely on the soundness of those deductions.


The paper draws a bit of a hair-splitting distinction between a type checking algorithm and a type assignment algorithm. A type checking algorithm verifies that a program does not violate any of the rules of the type system, but does not say whether the types were added by the programmer as annotations, or whether the compiler deduced them. In ML, all types are deduced by the compiler using a type assignment algorithm. The question is whether the type assignment algorithm that the authors have in mind finds the most general type for expressions and function declarations.

By “most general” we mean that we want to avoid situations where, say, the compiler deduces “oh, this is a method that takes a list of integers and returns a list of integers”, when it should be deducing “this is a method that takes a list of T and returns a list of T for any T”. The latter is more general.

Why is this important? Well, suppose we deduce that a method takes a list of int when in fact it would be just as correct to say that it takes a list of T. If we deduce the former then a program which passes a list of strings is an error; if we deduce the latter then it is legal. We would like to make deductions that are not just sound, but also that allow the greatest possible number of correct programs to get the stamp of approval from the type system.
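
To put that in hypothetical C# terms: suppose the only thing a method does with its argument is wrap it in a list. Nothing in the body requires the argument to be an int, so deducing int would be needlessly specific:

using System.Collections.Generic;

// An overly specific deduction: only callers passing ints are allowed.
static List<int> Wrap(int item)
{
    return new List<int> { item };
}

// The most general deduction: this works for a T, for any T, so a
// caller passing a string is just as welcome as one passing an int.
static List<T> Wrap<T>(T item)
{
    return new List<T> { item };
}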


Here we answer the question in the affirmative, for the purely 
applicative part of ML. It follows immediately that it is decidable 
whether a program is well-typed, in contrast with the elegant and 
slightly more permissive type discipline of Coppo. [1]

This is a complicated one; we’ll deal with this paragraph in the next episode!

Functional programming for beginners

Happy New Year everyone; I hope you had a pleasant and relaxing festive holiday season. I sure did.

I’m starting the new year off by giving a short — an hour long or so — talk on how you can use ideas from functional programming languages in C#. It will be broadcast this coming Wednesday, the 13th of January, at 10AM Pacific, 1PM Eastern.

The talk will be aimed straight at beginners who have some experience with OOP style in C# but no experience with functional languages. I’ll cover ways to avoid mutating variables and data structures, how to pass functions as data, how LINQ is functional in style, and some speculation on future features for C# 7.

There may also be funny pictures downloaded from the internet.

Probably it will be too basic for the majority of readers of this blog, but perhaps you have a friend or colleague interested in this topic.

Thanks once again to the nice people at InformIT who are sponsoring this talk. You can get all the details at their page:

http://www.informit.com/promotions/webcast-a-beginners-guide-to-functional-programming-141067

In related news, I am putting together what will be a very, very long indeed series of blog articles on functional programming, but not in C#. Comme c’est bizarre! (How strange!)

Foreshadowing: your sign of a quality blog. I’ll start posting… soon.

The dedoublifier, part four

I said last time that binary-searching the rationals (WOLOG between zero and one) for a particular fraction that is very close to a given double does not really work, because we only ever produce fractions that have powers of two in their denominators. And we already know which fraction with a power of two in the denominator is closest to our given double: itself!
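
A quick illustration of the problem (hypothetical code, not anything from this series): suppose we bisect the interval from zero to one, trying to converge on 1/3. Every midpoint is a fraction with a power of two in its denominator, so we never arrive:

using System;

// Bisecting toward 1/3: the midpoints are 1/2, 1/4, 3/8, 5/16, 11/32, ...
// and every one of those denominators is a power of two.
double target = 1.0 / 3.0;  // itself only a dyadic approximation, of course
double low = 0.0;
double high = 1.0;
for (int i = 0; i < 5; i += 1)
{
    double mid = (low + high) / 2.0;
    Console.WriteLine(mid);  // 0.5, 0.25, 0.375, 0.3125, 0.34375
    if (mid < target)
        low = mid;
    else
        high = mid;
}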

But there is a way to binary-search the rationals between zero and one, as it turns out. Before we get into it, a quick refresher on how binary search works:
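
The classic version searches a sorted array by repeatedly halving the range under consideration; a minimal sketch, for reference:

// Classic binary search over a sorted array: returns the index of the
// item if it is present, or -1 if it is not.
static int BinarySearch(int[] sorted, int item)
{
    int low = 0;
    int high = sorted.Length - 1;
    while (low <= high)
    {
        int mid = low + (high - low) / 2;  // the midpoint, without overflow
        if (sorted[mid] == item)
            return mid;
        else if (sorted[mid] < item)
            low = mid + 1;   // the item, if present, is in the upper half
        else
            high = mid - 1;  // the item, if present, is in the lower half
    }
    return -1;
}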

The dedoublifier, part three

All right, we have an arbitrary-precision rational arithmetic type now, so we can do arithmetic on fractions with confidence. Remember the problem I set out to explore here was: a double is actually a fraction whose denominator is a large power of two, so fractions which do not have powers of two in their denominators cannot be represented with full fidelity by a double. Now that we have this machinery we can see exactly what the representation error is:
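
Here is a hypothetical sketch of the idea, not the code from this series: pull the double apart into its bits and reconstruct the exact fraction it encodes. I am assuming a finite, normal double here, and ignoring zeros, subnormals, infinities and NaNs for brevity:

using System;
using System.Numerics;

// A normal double encodes (1 + fraction / 2^52) x 2^(exponent - 1023),
// possibly negated; as a fraction, that is an integer over a power of two.
static void ExactFraction(
    double value, out BigInteger numerator, out BigInteger denominator)
{
    long bits = BitConverter.DoubleToInt64Bits(value);
    long fraction = bits & 0x000FFFFFFFFFFFFFL;  // the low 52 bits
    int exponent = (int)((bits >> 52) & 0x7FF);  // the 11-bit biased exponent
    bool negative = bits < 0;
    BigInteger n = fraction | (1L << 52);        // restore the implicit leading 1
    int shift = exponent - 1023 - 52;
    denominator = BigInteger.One;
    if (shift >= 0)
        n <<= shift;
    else
        denominator = BigInteger.One << -shift;
    numerator = negative ? -n : n;
}

Feeding in 0.1, for example, produces the fraction 7205759403792794 / 72057594037927936, an integer over 2^56, which is close to, but not exactly, one tenth.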

The dedoublifier, part two

A couple of years ago I developed my own arbitrary precision natural number and integer mathematics types, just for fun and to illustrate how it could be done. I’m going to do the same for rational numbers here, but rather than using my integer type, I’ll just use the existing BigInteger type to represent the numerator and denominator. Either would do just fine.

Of course I could have used some existing BigRational types but for various reasons that I might go into in a future blog article, I decided it would be more interesting to simply write my own. Let’s get right to it; I’ll annotate the code as I go.
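
To give the flavor of it, here is a minimal hypothetical sketch, not the code from the post: the essence is a pair of BigIntegers kept in lowest terms, with the sign carried by the numerator:

using System;
using System.Numerics;

// A minimal rational: a numerator over a strictly positive denominator,
// always reduced to lowest terms by the constructor.
struct Rational
{
    public readonly BigInteger Numerator;
    public readonly BigInteger Denominator;
    public Rational(BigInteger numerator, BigInteger denominator)
    {
        if (denominator.IsZero)
            throw new DivideByZeroException();
        if (denominator.Sign < 0)  // move the sign into the numerator
        {
            numerator = -numerator;
            denominator = -denominator;
        }
        BigInteger gcd = BigInteger.GreatestCommonDivisor(numerator, denominator);
        this.Numerator = numerator / gcd;
        this.Denominator = denominator / gcd;
    }
    public static Rational operator +(Rational a, Rational b)
    {
        return new Rational(
            a.Numerator * b.Denominator + b.Numerator * a.Denominator,
            a.Denominator * b.Denominator);
    }
    public override string ToString()
    {
        return Numerator + "/" + Denominator;
    }
}

Addition simply cross-multiplies and lets the constructor do the reducing; subtraction, multiplication and division all follow the same pattern.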