OK, addition and multiplication are in the can. Those are the easy ones because the set of natural numbers is "closed" over those operations. That is, if you have any two naturals then both their sum and their product are also naturals. Not so subtraction! To see why, first we have to state what subtraction really is.

Subtraction relates three numbers: the minuend and the subtrahend, which are given, and the difference, which is the result. When we say `m - s = d`, what we are really saying is that `d` is the solution to the equation `m = s + d`. But for, say, `m` as 2 and `s` as 3 there is no solution `d` which is still a natural. The integers are closed over subtraction, but the naturals are not. We're going to need to add an error case.
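To see the error case in miniature, here is a hedged Python sketch (plain non-negative ints standing in for the `Natural` class, and `nat_subtract` is my own illustrative name) that defines subtraction by recursion as "the `d` solving `m = s + d`":

```python
# Illustrative sketch only, not the post's C# Natural class:
# natural subtraction defined recursively, failing when no
# natural difference exists.
def nat_subtract(m, s):
    if s == 0:
        return m                       # m - 0 = m
    if m == 0:
        # m < s: no natural d satisfies m = s + d
        raise ArithmeticError("no natural d satisfies m = s + d")
    return nat_subtract(m - 1, s - 1)  # S(m) - S(s) = m - s

print(nat_subtract(5, 3))  # 2, because 5 = 3 + 2
```

Trying `nat_subtract(2, 3)` raises, which is exactly the error case the C# code below has to handle.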

As should be expected by now we start by enforcing non-null arguments at the entrypoint:

```csharp
public static Natural operator -(Natural x, Natural y)
{
    if (ReferenceEquals(x, null))
        throw new ArgumentNullException("x");
    else if (ReferenceEquals(y, null))
        throw new ArgumentNullException("y");
    else
        return Subtract(x, y);
}
```

OK, now we can make our base cases and recursive cases for this recursive algorithm. The base cases are as follows:

- We can start with a cheap early out; if the operands are reference equal to each other then their difference is zero.
- If the subtrahend is zero then the result is the minuend.
- If the minuend is zero and the subtrahend is non-zero then throw an exception.

Now we can break down the recursive cases. Again, there are numerous ways to do this. The one I chose is as follows:

First, if the least significant bit of both `x` and `y` is the same, call it `head`, then:

```
x - y = { xtail : head } - { ytail : head }
      = 2 * xtail + head - (2 * ytail + head)
      = 2 * xtail + head - 2 * ytail - head
      = 2 * (xtail - ytail)
      = { xtail - ytail : 0 }
```

Second, if the head of `x` is 1 and the head of `y` is 0:

```
x - y = { xtail : 1 } - { ytail : 0 }
      = 2 * xtail + 1 - (2 * ytail + 0)
      = 2 * xtail + 1 - 2 * ytail
      = 2 * (xtail - ytail) + 1
      = { xtail - ytail : 1 }
```

Third, if the head of `x` is 0 and the head of `y` is 1:

```
x - y = { xtail : 0 } - { ytail : 1 }
      = 2 * xtail + 0 - (2 * ytail + 1)
      = 2 * xtail - 2 - 2 * ytail - 1 + 2
      = 2 * (xtail - 1 - ytail) + 1
      = { xtail - 1 - ytail : 1 }
```

This is usually characterized as "borrowing", but as I mentioned in the episode on addition, I prefer to simply reason about it algebraically rather than thinking about the conventions of pencil-and-paper arithmetic. Let's write the code:

```csharp
private static Natural Subtract(Natural x, Natural y)
{
    if (ReferenceEquals(x, y))
        return Zero;
    else if (ReferenceEquals(y, Zero))
        return x;
    else if (ReferenceEquals(x, Zero))
        throw new InvalidOperationException("Cannot subtract greater natural from lesser natural");
    else if (x.head == y.head)
        return Create(Subtract(x.tail, y.tail), ZeroBit);
    else if (x.head == OneBit)
        return Create(Subtract(x.tail, y.tail), OneBit);
    else
        return Create(Subtract(Subtract(x.tail, One), y.tail), OneBit);
}
```

And of course we can now add a decrement operator; see my earlier post about increment operators to understand why this is so straightforward:

```csharp
public static Natural operator --(Natural x)
{
    if (ReferenceEquals(x, null))
        throw new ArgumentNullException("x");
    else if (ReferenceEquals(x, Zero))
        throw new InvalidOperationException();
    else
        return Subtract(x, One);
}
```

**Next time on FAIC:** we'll change gears for a bit and talk about equality and inequality, and then move on to division and remainder operators.

And for those that are interested in a matching F# implementation I updated my F# sample accordingly:

https://github.com/mrange/experiments/blob/master/fun/numerals/elnumerals/Program.fs

If it were me, I would have used the introduction of subtraction as a cue to extend the number class from natural to integer. No need for exceptions for well-formed calls.

Equally, when division is introduced would be time to write a rational class, although there would still need to be an exception for that pesky division-by-zero case.

I guess that my approach would not lend itself to writing a full-featured natural class. In your approach, on the other hand, I'm interested to see how your integer class will look - if it just extends natural, it would presumably still have error checking for negatives hidden away somewhere, which seems messy.

On the gripping hand, a full-featured natural class is all you need for Gödel's Theorem, which I was rabbiting on about a few posts back.

> I'm interested to see how your integer class will look - if it just extends natural, it would presumably still have error checking for negatives hidden away somewhere, which seems messy.

Well, inheritance is really a bad choice for modelling this in the first place - it doesn't make sense for the Naturals to be a superclass of the Integers because... they're not. There are no Naturals that are not Integers, while there are Integers that are not Naturals. The only way an inheritance hierarchy makes sense is if you start with, say, the Reals at the top, and then move down to the Rationals, then the Integers, and then finally the Naturals at the bottom of the inheritance tree. But while that obeys substitutability, it's not useful in terms of allowing the more complex implementations to build on the simpler ones, which is what we'd really be looking for when designing an abstraction. And who knows what you're going to do when you want to start modelling groups that don't have such a linear relationship!

That's why the term "subclassing"/"subtyping" always bugged me: the subtype is NOT a subset (of the supertype)! In fact, it's almost always bigger.

Like, the record type { a: Float, b: String, c: Integer } is normally considered a subtype of { a: Float, b: String }, but it's clear that there are 2^32 times more tuples of the first kind than of the second.

And the Liskov Substitution Principle, oh my. It looks good, but actually it's pretty much useless: "Let q(x) be a property provable about objects x of type T. Then q(y) should be provable for objects y of type S where S is a subtype of T" is bloody undecidable! See Oleg Kiselyov's "Subtyping vs. Subclassing" (http://okmij.org/ftp/Computation/Subtyping)

One difficulty is that whether X is substitutable for Y depends upon how Y will be used, and I'm unaware of any framework whose type system really expresses that. Instead, both Java and .NET have the rule that a reference of type `foo` is substitutable for a class reference of type `bar` if and only if `foo` inherits or implements `bar`; that is a good rule for reference types that are used to encapsulate identity, but for structs or classes which encapsulate value other forms of substitutability may be more appropriate.

Perhaps what's needed is a means by which an interface could specify that a particular type should be deemed to implement it, using static methods within the interface itself. For example, an `IFraction` interface with members `Numerator` and `Denominator` could specify that any type which implements `INumber` should also be deemed to implement `IFraction` as either:

```csharp
Number IFraction.Numerator
{
    get { return ClassHelpers_Numerator(this); }
}
Number IFraction.Denominator
{
    get { return ClassHelpers_Denominator(this); }
}
```

or

```csharp
Number IFraction.Numerator
{
    get { return StructHelpers_Numerator(ref this); }
}
Number IFraction.Denominator
{
    get { return StructHelpers_Denominator(ref this); }
}
```

where ClassHelpers_* and StructHelpers_* were static methods nested within the interface.

Such an approach would allow a Number to be substitutable for an IFraction, even though nothing in the definition for Number itself knew about that type.

I completely agree; an integer is not a special kind of natural.

Well, one can implement integers as pairs of naturals modulo the equivalence relation (a, b) ~ (c, d) iff a + d = c + b.
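That construction is easy to model; here is a hedged Python sketch (plain ints standing in for naturals, and the function names are mine, purely for illustration) where the pair (a, b) represents a - b:

```python
# Sketch of integers as pairs of naturals: (a, b) means a - b.
# Two pairs are equivalent exactly when a + d == c + b, so equality,
# addition, and negation never need actual subtraction.
def int_equal(p, q):
    (a, b), (c, d) = p, q
    return a + d == c + b          # (a, b) ~ (c, d)

def int_add(p, q):
    (a, b), (c, d) = p, q
    return (a + c, b + d)          # (a - b) + (c - d) = (a + c) - (b + d)

def int_negate(p):
    a, b = p
    return (b, a)                  # -(a - b) = b - a

# (2, 5) and (0, 3) both represent -3:
print(int_equal((2, 5), (0, 3)))   # True
```

Note that negation is just swapping the pair, which is part of what makes this representation attractive: no error case is needed anywhere.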

Another choice is that an integer is a natural with a sign bit. That works quite well for Church numerals:

```
...
 2 = \m. \s. \z. s (s z)
 1 = \m. \s. \z. s z
 0 = \m. \s. \z. z
-1 = \m. \s. \z. m (s z)
-2 = \m. \s. \z. m (s (s z))
```

The implementation I think I'd like best for arbitrary-length integers stored as bits would be to have one special instance which represents an infinite string of zeroes, and another which represents an infinite string of ones. Adding one to the infinite string of ones yields an infinite string of zeroes; adding two infinite strings of ones together yields a zero followed by an infinite string of ones. I find it cute that the formula for computing the value of the power series 1+2+4+8+16... yields -1, a result which is also consistent with the recurrence relation that concatenating a "1" to the LSB end of a bit string should arithmetically double it and add one. Since concatenating a "1" to the LSB end of an infinite string of "1"s yields that same infinite string of "1"s, that implies that such a string must satisfy n=2n+1, and the only n for which that works is n=-1.
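The "infinite tail bit" idea can be sketched in a few lines of Python (my own encoding, not anyone's production code): an integer is a finite list of low-order bits plus one bit that repeats forever, which is just two's complement of unbounded width.

```python
# Sketch: an integer as (finite low-order bits, infinitely repeated
# tail bit). A tail of 0 means the finite bits are the whole value;
# a tail of 1 contributes -2^len(bits), as in two's complement.
def value(bits, tail):
    v = sum(b << i for i, b in enumerate(bits))   # finite part, LSB first
    return v - (1 << len(bits)) if tail == 1 else v

# The infinite string of ones is -1, consistent with n = 2n + 1:
print(value([], 1))    # -1
print(value([1], 1))   # -1: prepending a 1 at the LSB keeps the value
print(value([0], 1))   # -2
```

The second print shows exactly the recurrence from the comment above: doubling -1 and adding 1 gives -1 back.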

Wait, doesn't it actually give you the 2-adic numbers? I've never touched those numbers seriously, so I may be very, very wrong, but it rings the same bell.


My solution to roughly the same problem in Haskell, for anyone who's interested:

```haskell
import Prelude hiding (pred, and, or, not)

data PR = Z
        | S
        | P Int
        | C PR [PR]
        | PR PR PR
        deriving Show

eval :: PR -> [Integer] -> Integer
eval Z _ = 0
eval S [x] = x + 1
eval (P n) xs = nth n xs
eval (C f gs) xs = eval f (map (\g -> eval g xs) gs)
eval (PR g h) (0:xs) = eval g xs
eval (PR g h) (x:xs) = eval h ((x - 1) : eval (PR g h) ((x - 1):xs) : xs)

nth _ [] = error "nth nil"
nth 0 _ = error "nth index"
nth 1 (x:_) = x
nth n (_:xs) = nth (n - 1) xs

one = C S [Z]
plus = PR (P 1) (C S [P 2])
pred = PR Z (P 1)
modus = C modus' [P 2, P 1]
modus' = PR (P 1) (C pred [P 2])
```

Actually this is quite a bit different because it doesn't define its own types and isn't working with binary representations, but the recursive logic is similar.

I'll admit, even though I've written a very similar Haskell program, it took me a while to figure out what the heck this program does. Cliff Notes:

*PR* is a datatype for Primitive Recursive functions (http://en.wikipedia.org/wiki/Primitive_recursive_function). Primitive recursive functions are n-ary functions from *n* natural numbers to a single natural number. The *eval* function evaluates the primitive recursive function with its list of *n* arguments.

There are five constructors for the Primitive Recursive functions.

* Z constructs the zero function, which returns zero regardless of input

* S constructs the successor function, which increments its 1 input argument

* P constructs a projection function, which selects the nth argument

* C constructs a function by (n-ary) composition

* PR constructs a function with "primitive recursion"

Using these 5 constructors, you can create any computationally tractable function (and quite a few computationally intractable functions too).
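To make the evaluation concrete, here is a hedged Python rendition of the same interpreter (tuples standing in for the Haskell constructors; the naming is mine, not the commenter's), showing how the `PR` case recurses on x - 1 and feeds the recursive result to h:

```python
# Tiny primitive-recursion evaluator, mirroring the Haskell eval above.
# Functions are tuples: ('Z',), ('S',), ('P', n), ('C', f, gs), ('PR', g, h).
def ev(f, xs):
    tag = f[0]
    if tag == 'Z':
        return 0                                  # zero function
    if tag == 'S':
        return xs[0] + 1                          # successor
    if tag == 'P':
        return xs[f[1] - 1]                       # 1-indexed projection
    if tag == 'C':
        _, g, gs = f
        return ev(g, [ev(gi, xs) for gi in gs])   # composition
    if tag == 'PR':
        _, g, h = f
        x, rest = xs[0], xs[1:]
        if x == 0:
            return ev(g, rest)                    # base case
        return ev(h, [x - 1, ev(f, [x - 1] + rest)] + rest)  # recursion

PLUS = ('PR', ('P', 1), ('C', ('S',), [('P', 2)]))
print(ev(PLUS, [2, 3]))  # 5
```

Tracing `ev(PLUS, [2, 3])` unwinds the recursion twice and applies the successor to each intermediate result, which is exactly how `plus` works in the Haskell version.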

I'll just take you back to the past. Something called vbsomething (if you can remember less esoteric things than what you do now).

http://www.visualbasicscript.com/fb.ashx?m=103121

I add scripting to everything I do. It's the easiest thing to do (thanks). I didn't realise everyone thought it was hard to run a script in an exe.