We have only two operators left, integer division and remainder. We’ve left the most complicated two operations for last, so let’s tread carefully here.

(Only two? I’m not going to do bit shifting operators; I don’t think of them as operations on natural numbers. If you want them, obviously they are trivial: shifting right is just taking the tail, shifting left is appending a zero bit as the new head. Similarly I am not going to implement the bitwise logical operators. Nor am I going to do the unary plus operator, which wins my award for World’s Most Useless Operator. It is a no-op on both naturals and integers; if you really want it, it’s not hard to write. The unary minus operator makes no sense on naturals; the only legal operand value would be zero.)
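For what it’s worth, here is a sketch (in Python, with a hypothetical little-endian list-of-bits representation standing in for the series’ `Natural`; all names are mine) of just how trivial those shifts are:

```python
# Sketch: naturals as little-endian bit lists, head (least significant bit) first.
# [] represents zero; [1, 0, 1] represents 1 + 0*2 + 1*4 = 5.

def shift_right(bits):
    """Halve: drop the head bit, keeping only the tail."""
    return bits[1:]

def shift_left(bits):
    """Double: append a zero bit as the new head (zero stays zero)."""
    return [0] + bits if bits else []

def to_int(bits):
    return sum(b << i for i, b in enumerate(bits))

five = [1, 0, 1]
assert to_int(shift_left(five)) == 10
assert to_int(shift_right(five)) == 2
```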

The purpose of the division and remainder operators is to take two natural numbers `x` and `y`, and find two natural numbers, called the quotient `q` and the remainder `r`, such that:

x = q * y + r
r < y

Clearly if `y` is zero then there is no remainder smaller than it, so on those grounds alone we can disqualify division by zero. But even if we allowed a zero remainder in the case where `y` was zero, then we would either have infinitely many solutions for `q` if `x` is zero, or no solutions for `q` if `x` is not zero. Thus division by zero is illegal.
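A tiny brute-force check (a Python sketch; `solutions` is a hypothetical helper, not part of the series’ code) makes the impossibility concrete:

```python
def solutions(x, y, limit=10):
    """All (q, r) pairs with x == q * y + r and r < y, searched up to limit."""
    return [(q, r) for q in range(limit) for r in range(limit)
            if x == q * y + r and r < y]

assert solutions(7, 3) == [(2, 1)]   # exactly one solution when y > 0
assert solutions(7, 0) == []         # no solutions when y == 0: r < 0 is impossible
assert solutions(0, 0) == []         # even x == 0 fails the r < y requirement
```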

The recursive division algorithm is the trickiest one we’ve seen so far, so let’s derive it carefully. The base case is clear: if `x` is zero then the quotient and remainder are both zero regardless of what `y` is. To figure out the recursive case, let’s see if we can rewrite `x` in the form `q * y + r` for any nonzero `x`:

x = { xtail : xhead } = 2 * xtail + xhead

Clearly we’re going to have to recurse on `xtail` somehow, so let’s suppose we’ve done so and have discovered that the quotient `xtailq` and remainder `xtailr` of that calculation exist, and obey:

xtail = xtailq * y + xtailr
xtailr < y

OK, substitute that in:

x = { xtail : xhead }
  = 2 * xtail + xhead
  = 2 * (xtailq * y + xtailr) + xhead
  = (2 * xtailq) * y + (2 * xtailr + xhead)

And we’ve done it; we’ve written `x` in the form `q * y + r`.

Except… wait a minute, we haven’t guaranteed that `r < y`. We know by the recursive hypothesis that `xtailr < y`, but now we are multiplying it by two and adding either zero or one to it, so it could be larger than `y`. How large could it possibly be? Clearly the worst case is where `xtailr` is `y - 1` and `xhead` is one. In that worst case, the largest that the proposed remainder can be is `2 * y - 1`. This means that if the proposed remainder is greater than or equal to `y` and less than or equal to `2 * y - 1`, then we can subtract `y` to get a new remainder that is greater than or equal to zero and less than or equal to `y - 1`:

x = (2 * xtailq) * y + y + (2 * xtailr + xhead) - y
  = (2 * xtailq + 1) * y + (2 * xtailr + xhead - y)

And now we can write the code:

```csharp
private static Tuple<Natural, Natural> DivideWithRemainder(Natural x, Natural y)
{
    Debug.Assert(!ReferenceEquals(y, Zero));
    if (ReferenceEquals(x, Zero))
        return Tuple.Create(Zero, Zero);
    Tuple<Natural, Natural> tuple = DivideWithRemainder(x.tail, y);
    Natural quotient = Create(tuple.Item1, ZeroBit);
    Natural remainder = Create(tuple.Item2, x.head);
    if (remainder >= y)
    {
        remainder = Subtract(remainder, y);
        quotient = Add(quotient, One);
    }
    return Tuple.Create(quotient, remainder);
}
```

And that’s why we needed comparison operators and subtraction before we could do division.
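As a cross-check on the derivation, here is the same recursion sketched in Python over ordinary integers (the names are mine; `>> 1` and `& 1` play the roles of taking the tail and the head), verified against Python’s built-in `divmod`:

```python
def divide_with_remainder(x, y):
    """Recursive binary long division: returns (quotient, remainder).

    Mirrors the derivation: recurse on the tail (x >> 1); the candidate
    remainder 2 * xtailr + xhead is at most 2 * y - 1, so a single
    conditional subtraction restores the invariant r < y."""
    assert y > 0
    if x == 0:
        return (0, 0)
    xtailq, xtailr = divide_with_remainder(x >> 1, y)  # recurse on the tail
    quotient = 2 * xtailq                              # append a zero bit
    remainder = 2 * xtailr + (x & 1)                   # append x's head bit
    if remainder >= y:                                 # at most one correction needed
        remainder -= y
        quotient += 1
    return (quotient, remainder)

for x in range(200):
    for y in range(1, 20):
        assert divide_with_remainder(x, y) == divmod(x, y)
```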

The entrypoint methods are trivial, as you would expect:

```csharp
public static Natural operator /(Natural x, Natural y)
{
    if (ReferenceEquals(x, null))
        throw new ArgumentNullException("x");
    else if (ReferenceEquals(y, null))
        throw new ArgumentNullException("y");
    else if (Equals(Zero, y))
        throw new DivideByZeroException();
    else
        return DivideWithRemainder(x, y).Item1;
}

public static Natural operator %(Natural x, Natural y)
{
    if (ReferenceEquals(x, null))
        throw new ArgumentNullException("x");
    else if (ReferenceEquals(y, null))
        throw new ArgumentNullException("y");
    else if (Equals(Zero, y))
        throw new DivideByZeroException();
    else
        return DivideWithRemainder(x, y).Item2;
}
```

We are done with natural numbers; we’ve implemented the binary addition, subtraction, multiplication, division and remainder operators and the unary increment and decrement operators plus an extra bonus exponentiation method. **Next time on FAIC:** Integers are really easy once you have naturals.

I understand the code as long division, but I don’t understand part of the derivation.

I understand

x = { xtail : xhead }
  = 2 * xtail + xhead
  = 2 * (xtailq * y + xtailr) + xhead
  = (2 * xtailq) * y + (2 * xtailr + xhead)

is

Tuple<Natural, Natural> tuple = DivideWithRemainder(x.tail, y);
Natural quotient = Create(tuple.Item1, ZeroBit);
Natural remainder = Create(tuple.Item2, x.head);

and

x = (2 * xtailq) * y + y + (2 * xtailr + xhead) - y
  = (2 * xtailq + 1) * y + (2 * xtailr + xhead - y)

is

if (remainder >= y)
{
    remainder = Subtract(remainder, y);
    quotient = Add(quotient, One);
}

but I don’t understand the jump from

x = (2 * xtailq) * y + (2 * xtailr + xhead)

to

x = (2 * xtailq) * y + y + (2 * xtailr + xhead) - y

It looks like you added a 0 to x in the form of (+ y – y) and rearranged the expression, but I’m having trouble following the reasoning behind it.

That’s actually the really important step there. Both equations are, as you noticed, equivalent; it’s only a question of how you split the expression into quotient and remainder. Since the whole thing is based on the invariant that the remainder is less than y, we have to guarantee that the final remainder is smaller than y.

We assume that xtailr < y, but clearly 2 * xtailr + xhead does not have to be smaller than y. From xtailr < y, however, it follows that:

2 * xtailr + xhead <= 2 * (y - 1) + xhead <= 2 * (y - 1) + 1 = 2 * y - 1

Hence if we subtract y from (2 * xtailr + xhead) whenever it is at least y, we guarantee that the invariant r < y holds in all cases.

Okay, here’s how I understand it.

x = qy + r

if r >= y then r can be written as y + smallRemainder, so

x = qy + y + smallRemainder

and it can be simplified as

x = (q + 1)y + smallRemainder.

But what makes you so sure that smallRemainder is less than y?

Let’s run the code in base 10 with the division 321 / 4.

At the base case we get q=0 and r=0, so

x = (0)y + 0

then the backtracking steps are shift and check if r >= y, so

x = (00)y + 3

r >= y, so add 1 to q and subtract y from r

x = (000 + 1)y + (32 - 4)

x = (001)y + 28

In this case the new remainder is still greater than y.

It looks like in general

x = qy + r

should be converted to

x = qy + cy + smallRemainder

x = (q + c)y + smallRemainder

to get a remainder smaller than y.

I don’t follow the reasoning that c cannot be greater than 1.

You are correct that in base ten long division the remainder produced might be large enough to bump up the quotient by more than one. But in base two it is not.

Remember, the assumption of the recursive step is that the recursive step works correctly and always returns a remainder less than the divisor. We then multiply it by the base. If the base is two then the new remainder cannot possibly be larger than 2 * y, so we only need to subtract from it once. In base ten the new remainder cannot possibly be larger than 10 * y, so we might have to subtract from it up to nine times.
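To make the reply concrete, here is a sketch (Python over ordinary integers; the names and the `base` parameter are mine) of digit-at-a-time division in an arbitrary base; the correction step becomes a loop that can run up to base - 1 times:

```python
def divide_with_remainder_base(x, y, base):
    """Digit-at-a-time long division in an arbitrary base.

    The candidate remainder base * r + digit is at most base * y - 1,
    so the correction loop runs at most base - 1 times; only for
    base == 2 does it collapse to a single if statement."""
    assert y > 0 and base >= 2
    if x == 0:
        return (0, 0)
    q, r = divide_with_remainder_base(x // base, y, base)
    q, r = base * q, base * r + x % base   # shift in the next digit
    while r >= y:                          # up to base - 1 iterations
        r -= y
        q += 1
    return (q, r)

# The 321 / 4 example from the comment above, in base 10:
assert divide_with_remainder_base(321, 4, 10) == divmod(321, 4)
```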


Is there any type on which unary plus is not a no-op, other than weird C++ DSLs?

I think in some floating-point implementations, invoking the unary + on a floating-point storage location that contains an invalid bit pattern may trigger a trap. Further, in C#, the unary + operator will upcast any 8- or 16-bit operands to an Int32, a fact which may be observed either with type-specific overloads or with generic parameter inference.

Neither of those explain why it’s overloadable, or for that matter present in the syntax at all.

I think the key observation is that if the remainder from the previous step was less than y, the maximum value is y-1, which means the maximum value for 2r is 2y-2, and the maximum for 2r+1-y is 2y-y-1, i.e. y-1, which is of course less than y.

Also, since Quotient is not used prior to the comparison, it would seem cleaner to generate it after the comparison, by concatenating either a OneBit or ZeroBit to the previous result (as opposed to concatenating a ZeroBit and then “Add”ing one).
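That restructuring, selecting the quotient’s new head bit directly from the comparison, might look like this as a Python sketch over ordinary integers (names are mine, not the series’ API):

```python
def divide_with_remainder2(x, y):
    """Variant: choose the quotient's new low bit from the comparison,
    instead of appending a zero bit and then adding one."""
    assert y > 0
    if x == 0:
        return (0, 0)
    tail_q, tail_r = divide_with_remainder2(x >> 1, y)
    remainder = 2 * tail_r + (x & 1)
    if remainder >= y:
        remainder -= y
        quotient = 2 * tail_q + 1   # concatenate a one bit
    else:
        quotient = 2 * tail_q       # concatenate a zero bit
    return (quotient, remainder)

for x in range(100):
    for y in range(1, 12):
        assert divide_with_remainder2(x, y) == divmod(x, y)
```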

It took me a while to figure out why I should care about 2r+1-y.

It’s just not clear where all the numbers are coming from. Are they coming from the math equation, the way the function is supposed to work, the base we used to implement numbers, or something else? So, it’s difficult to tell where the general, abstract math stuff ends, and where the specific implementation details begin, and how to combine them together to state facts about the algorithm.

==========

Here’s how I made sense of it. Please tell me if I missed something.

x = (2 * xtailq) * y + (2 * xtailr + xhead)

In order to keep track of where numbers are coming from, let’s make that equation a little more descriptive. Since 2 is there because we implemented numbers in base 2,

x = (base * xtailq) * y + (base * xtailr + xhead)

Which means

quotient = base * xtailq

remainder = base * xtailr + xhead

Since the function should always maintain remainder to be less than y (5/1=0 remainder 5 would be weird), the remainder must be converted to quotient, if the remainder gets bigger than y. This is done by counting how many times y can be subtracted from remainder, and adding that count to quotient.

while (remainder >= y)
{
    remainder -= y;
    quotient += 1;
}

Wouldn’t it be nice to know how many times that loop will run? Maybe. Let’s find out.

The minimum number of times that loop runs is 0. That’s when remainder is less than y.

The maximum number of times that loop runs depends on how big remainder is.

Since

remainder = base * xtailr + xhead

and

base is constant

xhead has values from 0 to base-1

xtailr (if our function works properly) has values from 0 to y-1

then

maxRemainder = base * (y - 1) + base - 1

maxRemainder = base * y - base + base - 1

maxRemainder = base * y - 1

Since our numbers are in base 2

maxRemainder = 2y - 1

How many times can y be subtracted from maxRemainder?

1st subtraction: 2y - 1 - y = y - 1

2nd subtraction: y - 1 - y = -1 (illegal op for naturals)

Looks like we can only subtract y one time from maxRemainder. This means that for base 2, using this algorithm, the loop needs to only run 0 or 1 time. So the loop can be changed to an if statement.

if (remainder >= y)
{
    remainder -= y;
    quotient += 1;
}

Is this true for other bases? Let’s see.

For base 3

maxRemainder = 3y - 1

1st subtraction: 3y - 1 - y = 2y - 1

2nd subtraction: 2y - 1 - y = y - 1

3rd subtraction: y - 1 - y = -1 (illegal op for naturals)

Base 3 numbers run the loop in this algorithm 0 to 2 times. The higher the base, the higher the value of maxRemainder, and the higher the number of times the loop can run.

So, no, the other bases need the while loop. The math just happens to work out that if the numbers are in base 2, this algorithm only needs to run the loop 0 or 1 time.

==========

FYI: I updated my running F# sample:

https://github.com/mrange/experiments/blob/master/fun/numerals/elnumerals/Program.fs

If I may reflect a bit over the differences in the F# vs the C# implementation:

(First let me start off by saying that I really like C# as well; I am not trying to start a flame war here.)

One could argue that F# gives more succinct code; personally I don’t consider this to be of such great importance (for instance, J or code golf is very succinct, but I am not sure I would like to maintain J code; COBOL is non-succinct but quite maintainable when writing financial applications).

What I really think is great with F# is:

1. No null values. In general FP discourages null values, and thus the code needn’t check for them all the time. In order to represent empty values you either use a natural empty value, such as an empty array or empty string, or wrap the value in an option. Using pattern matching one extracts the values in a null-safe way.

2. Pattern matching. When I was implementing a red-black tree in F# I realized how many errors pattern matching removes. This is a bit hard to explain, but in a red-black tree implementation one looks for conflicts by walking down the tree up to two steps. The dilemma in C# is that if the test went down left then right, when mutating the tree it’s very easy to go left then left by mistake, creating hard-to-find bugs. When pattern matching you only see what you’ve matched; thus if you match left then right you can’t accidentally access any other paths. Also it’s awesome that you construct objects with the same syntax as you deconstruct them.

Hopefully someone enjoyed the F# implementation…

“If I may reflect a bit over the differences in the F# vs the C# implementation” <breaks beer bottle on bar> Game on!

But seriously, yes, pattern matching is one thing I do love about FP, although I don’t know F#.
