Last time I said that I was going to choose the "sequence of bits" technique for representing a natural number. At this point I don't think I need to go into detail about how bits work; you all know that each position represents the addition of a power of two multiplied by the value of the bit at that position, blah blah blah. Let's instead concentrate on the technical details. I'm going to represent natural numbers using the following recursive definition:

- Zero is a natural number
- A zero or one bit, called the least significant bit, followed by a natural number *that ends in Zero*, is a natural number. However, a natural number never **ends** in a zero bit followed by zero.

UPDATE: I have slightly changed the above definition due to an excellent suggestion from reader "Robert", in the comments below. Robert is absolutely right; making naturals be a linked list of bits ending in a sentinel that means zero is a better way to go than my original solution in which the sentinel is a one bit with a null tail. It makes the coming algorithms need fewer cases. Thanks Robert! Resuming our original post...

"Followed by" is a bit counterintuitive here. In our left-to-right reading culture we normally think of the stuff that "follows" as being to the right, but we normally also write the least significant digit to the right. When I'm writing out a sequence of bits in this series I'll use the standard "least significant digit on the right" convention and you can just remember that the tail of the linked list is logically to the left, not the right. I'll try to always write the tail variable to the left of the head variable so that it is more clear in the code as well.

So anyways, basically what we have here is a linked list of bits going from least to most significant that always ends in the Zero sentinel, but never in zero-bit-then-Zero.
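To make the convention concrete, here is a throwaway sketch (a plain cons list of `bool`s, not the class we're about to build) storing six, which is 110 in binary, least significant bit first:

```csharp
using System;
using System.Collections.Generic;

class Demo
{
    // A throwaway cons cell: Head is the least significant bit, Tail holds
    // the more significant bits. Null means "all zeros from here on".
    sealed class Cons
    {
        public readonly bool Head;
        public readonly Cons Tail;
        public Cons(Cons tail, bool head) { Head = head; Tail = tail; }
    }

    static void Main()
    {
        // Six is 110 in binary. Least significant bit first, so the list
        // is 0 -> 1 -> 1, and the tail is logically to the left.
        Cons six = new Cons(new Cons(new Cons(null, true), true), false);

        // Walking the list prints the bits LSB first: "011", which is the
        // conventional "110" reversed.
        var bits = new List<char>();
        for (Cons n = six; n != null; n = n.Tail)
            bits.Add(n.Head ? '1' : '0');
        Console.WriteLine(new string(bits.ToArray())); // prints "011"
    }
}
```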

We could use `bool` as the bit, and even define `bool` constants `ZeroBit` and `OneBit`, but hey, we're doing this from scratch, right? *Let's just make objects.* Like I said before, this series is pedagogical, not pragmatic. We're not trying to save on space here and in fact are consuming over a hundred bits of memory per actual bit represented. Of course we will only create two actual bit objects; every zero bit is identical to every other, and the same for one. But the reference to a bit object in each link in the linked list is pretty large.^{1}

Normally I would want numbers to be value types since they are logically values. But it is hard to make a linked list of value types, so we'll go with a class.

Enough chatting, let's write some code.

```csharp
namespace ImmutableNumbers
{
    public sealed class Natural
    {
        private sealed class Bit
        {
            public override string ToString()
            {
                return this == Natural.ZeroBit ? "0" : "1";
            }
        }

        private static readonly Bit ZeroBit = new Bit();
        private static readonly Bit OneBit = new Bit();

        public static readonly Natural Zero = new Natural(null, ZeroBit);
        public static readonly Natural One = new Natural(Zero, OneBit);

        private readonly Natural tail;
        private readonly Bit head;

        private Natural(Natural tail, Bit head)
        {
            this.head = head;
            this.tail = tail;
        }

        private static Natural Create(Natural tail, Bit head)
        {
            if (ReferenceEquals(tail, Zero))
                return head == ZeroBit ? Zero : One;
            else
                return new Natural(tail, head);
        }
    }
}
```

OK, let's review. We've got a linked list of bits. We have a private factory method that maintains two nice invariants: first, it ensures that any `Natural` that is logically one is reference equal to `One`. Second, by ensuring the same invariant for `Zero` it also ensures that no non-zero `Natural` ever ends in a zero bit followed by Zero. Put another way, a Zero `tail` is equivalent to an infinite sequence of zero bits, and this is the *only* way to represent an infinite string of zero bits.
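Here is a standalone sketch of those invariants in action. Note that this is a pared-down copy of the class, with `Create` made public (and taking a `bool` instead of the private `Bit` type) purely so it can be called from outside; it is not the code of the post itself:

```csharp
using System;

public sealed class Natural
{
    private sealed class Bit { }

    private static readonly Bit ZeroBit = new Bit();
    private static readonly Bit OneBit = new Bit();

    public static readonly Natural Zero = new Natural(null, ZeroBit);
    public static readonly Natural One = new Natural(Zero, OneBit);

    private readonly Natural tail;
    private readonly Bit head;

    private Natural(Natural tail, Bit head)
    {
        this.tail = tail;
        this.head = head;
    }

    // Same logic as the post's private factory: any logical zero or one
    // collapses to the canonical Zero or One instance.
    public static Natural Create(Natural tail, bool isOneBit)
    {
        Bit head = isOneBit ? OneBit : ZeroBit;
        if (ReferenceEquals(tail, Zero))
            return head == ZeroBit ? Zero : One;
        else
            return new Natural(tail, head);
    }
}

public class Demo
{
    public static void Main()
    {
        // Logical zero and one always collapse to the canonical instances:
        Console.WriteLine(ReferenceEquals(Natural.Create(Natural.Zero, false), Natural.Zero)); // True
        Console.WriteLine(ReferenceEquals(Natural.Create(Natural.Zero, true), Natural.One));   // True

        // Two (binary 10) is a fresh node: a zero bit whose tail is One.
        Natural two = Natural.Create(Natural.One, false);
        Console.WriteLine(ReferenceEquals(two, Natural.Zero)); // False
    }
}
```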

A few more interesting things about this code:

- I added a `ToString` method to `Bit` solely for debugging purposes. This will be one of the few uses of a built-in type other than `object` and `bool`.
- I'm going to be adding overloaded equality operators to `Natural` soon, so to make it crystal clear when reference equality is intended, I'm going to spell it out every time. (But not on bits; bit equality is always reference equality.)
- The recursive math algorithms break down nicely into mutually exclusive cases. I'm going to use the `if (...) return ...; else if (...) return ...; else return ...;` pattern throughout.
- The call sites above are the only ones where the constructor will be invoked. From within the class, the factory method will always be used. From outside the class, we'll expose `Zero` and `One`; once we have addition, that's all that is necessary to make a number of any size.
- Aside from its use as the tail of `Zero`, a null `Natural` is always an error.^{2}
- `Natural` is sealed. I have no by-design scenario in which a user will extend this type, and I am not planning to get myself into the logical knot of "an integer is a special kind of natural" or vice versa.^{3}

There's a saying in computer science that the only numbers that matter are zero, one and "arbitrarily many". We've got the first two. **Next time on FAIC:** we'll get the third by implementing addition.

1. It is interesting to consider other ways to solve this problem. For example, we could have made two derived classes, `OneBitNatural` and `ZeroBitNatural`, and then used the runtime type of the object to determine the value of the least significant bit. That would get us down to probably less than 100 bits per bit represented, but it still pales in comparison to the density of a byte array. ↩
2. I considered making `Zero` equal to its own tail for fun but decided not to. Too much code for no purpose. ↩
3. An integer can have a value that is not the value of any natural, so an integer class cannot be derived from the natural class. One could build a type hierarchy in which an immutable natural is a special kind of integer without violating the Liskov Substitution Principle, but the whole point here is that I want naturals to be foundational. ↩

I'm surprised that you use bits at all, and didn't decide to go with a linked list, where the length of the list is the number being represented. That would have allowed for a direct implementation of the Peano axioms. You couldn't get more "from scratch" than that.

And how would you represent that length without using a built-in number type?

The length of the sequence would *be* the number. For example, the number 0 would be null, 1 would be an item whose `next` reference was null, 2 would be an item whose `next` reference pointed to something whose `next` reference was null, etc.

You could pre-generate all the linked lists you needed, store them in an array, and refer to them by index. They would share a lot of structure so it wouldn't take too much memory. In fact you don't need to store the array as it's easy to generate the linked list from its index. I wonder if there's any way to do the arithmetic just using the indices without manipulating the structures....oh.

As I noted, I considered it. I said that I was going for a non-pragmatic approach, but I still want to be able to do arithmetic in a reasonable amount of time on large-ish numbers. With a linked list of bits I can pragmatically do operations with hundreds of bits; with unary arithmetic I can pragmatically do operations on numbers up to the hundreds.
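For reference, the unary (Peano) representation discussed in this exchange can be sketched as follows; `Peano` and `Count` are hypothetical names, and `Count` uses a built-in `int` purely for demonstration:

```csharp
using System;

// Each node is "successor of its Next"; null is zero. The number IS the
// length of the chain, exactly as the Peano axioms construct it.
sealed class Peano
{
    public readonly Peano Next;
    public Peano(Peano next) { Next = next; }

    // Walk the chain to recover an ordinary integer (for demonstration only;
    // a from-scratch implementation would never do this).
    public static int Count(Peano p)
    {
        int n = 0;
        for (; p != null; p = p.Next)
            n += 1;
        return n;
    }
}

class Demo
{
    static void Main()
    {
        Peano zero = null;
        Peano one = new Peano(zero);
        Peano two = new Peano(one);
        Console.WriteLine(Peano.Count(two)); // prints 2
    }
}
```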

Interesting. I expected you to do two things differently:

(1) Use an object, not null, for the end-of-list sentinel.

(2) Represent the natural 0 with the raw end-of-list sentinel (and no 0 bits)

This isn't to say one style is better than another -- just that I assumed I could anticipate Eric Lippert Style™ better than I apparently do.

I considered using a null object pattern here, though I did not consider your second idea.

Now that you mention it I am considering again using the null object pattern. It would make some of the problems I will have with the equality operators work out more nicely. Thanks!

Personally, I like the idea of having zero have itself as a tail; if one wants to avoid the danger of endless recursion, one could have a TailOfNonZero property which would either return the tail of a non-zero number or throw an exception if the number was zero. If the Zero object were of own class, virtual method dispatch could eliminate the need for an "if" test. Alternatively, one could have Zero.Tail throw an exception but have a TailOrZero property which keeps returning Zero.

Using virtual dispatch, you could have the addition of something to Zero yield that number. Addition of something (even Zero) to a non-zero number could be done without worrying about whether the thing being added is Zero, since the Zero can perfectly happily act as though it has as many padding zero bits as required.

Also, what would you think of having lazy-initialized fields for withPrependedZero and withPrependedOne, so that for every numeric value there would be exactly one instance? Making those fields long weak references would show a nice usage case for them: as long as a reference exists to a number (including references to any number that has it as a tail), it needs to be kept, to ensure that creating another number with the same numeric value will yield the same instance. If no reference to a Number exists anywhere in the universe, it will be impossible to tell whether the next request yields the same reference or a new one, so there's no reason to keep the old one.

That's a good suggestion. I would be more inclined to build a more general-purpose memoizer though that could be re-used to make other operations faster as well.

I notice your Bit is conceptually an enum, but written closer to what the Java compiler emits from the enum keyword than to a normal C# enum.

A Java enum, for those who don't know, is a sealed class with a private constructor and a bunch of constant values. They have no relation to ints, each enumerated constant is a unique reference rather than a constant integral value. Since the class is sealed with a private constructor, it's not easily possible to create new ones elsewhere in the program, or to ever get a reference of the type to refer to anything apart from one of the constants.
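Transliterated into C#, the pattern looks something like this (hypothetical `Direction` type; the `Bit` class in the post is the same pattern in miniature):

```csharp
using System;

// A "Java-style enum" in C#: a sealed class, a private constructor, and a
// fixed set of static readonly instances. Each value is a unique reference;
// no new values can ever be created elsewhere in the program.
public sealed class Direction
{
    public static readonly Direction North = new Direction("North", 0);
    public static readonly Direction South = new Direction("South", 180);

    private readonly string name;
    private readonly int heading; // enums in this style can carry fields...

    private Direction(string name, int heading)
    {
        this.name = name;
        this.heading = heading;
    }

    // ...and methods, which ordinary C# enums cannot.
    public int Heading { get { return heading; } }
    public override string ToString() { return name; }
}

class Demo
{
    static void Main()
    {
        Console.WriteLine(Direction.South);                     // prints "South"
        Console.WriteLine(Direction.South.Heading);             // prints 180
        Console.WriteLine(Direction.North == Direction.South);  // False
    }
}
```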

I applaud. Java enums are the one true way, and now that I'm writing C# I continue to write them. You can add methods or fields to them, like the ToString() here, so they are far richer than C# enums. You can't use them for bit flags though.

I like Java enums conceptually as well. C# was designed with interoperability with COM in mind, and in COM enums are just named integers.

The problem with enums is that we use them to solve too many problems; we use them both to represent a large collection of disjoint objects and a small collection of Boolean flags, and end up with this sort of platypus. It would have been nice if C# 1 had made a stronger distinction between these two usage cases, and provided primitives for things like checking a flag, setting a flag and so on, that didn't involve bit arithmetic explicitly.

The ability to have a single word encode up to 32 flags in semi-symbolic form is very useful, and is among the ways I consider the .NET type system superior to that of Java, in which every storage location is either a primitive or a promiscuous object reference. Still, enums as you note fall in a curious murky middle ground between numbers and abstract values. Do you know what basis there is for the prohibition against `enum` types defining methods or implementing interfaces? Obviously they can't have fields, and code which expects all enumerated types to descend *directly* from `System.Enum` would break if some did not, but what would happen if e.g. a type derived from `Enum` implemented its own override of `ToString()`? Would the JIT melt down into a pile of jelly?

More generally, what would you think of the concept of having a primitive storage location types be declarable using different compile-time names with *different semantics*, somewhat like what is done with `dynamic` versus `object`? Many of the arguments over things like implicit conversions center around the fact that types like `System.Single` have a variety of different usage cases, and rules which are set up to favor one case will often be hostile toward another. The Framework and CLR wouldn't have to make such distinctions; they could be confined almost entirely to language compilers, save only for the use of attributes on method parameters (analogous to the way C# distinguishes `ref` and `out` parameters).

As an example, I'd suggest `strict float`, `auto float`, and `fast float` as synonyms for `System.Single`. Given a line `double d1=x*x;`, a `strict float` would squawk at the implicit conversion; if a cast to double was added, the compiler would precede it with an instruction to force rounding to `float`. An `auto float` would eagerly convert to `double` and accept implicit conversions from `double` in many cases [when evaluating a statement like `f1=f2+f3+f4;`, computing with `double` and then rounding the result may often be more precise than computing with `float`]. A `fast float` would allow the same operations as `auto float`, but would neither force the operands to be converted to `double` nor force the result to be rounded to the nearest `float` value before being "converted" to `double`, thus allowing the JITter to implement whichever behavior would be faster in any given circumstance.

For each type, there would be usage cases where its semantics would be much more convenient and less error-prone than the others. Instead of having language rules try to serve as a compromise among some very different use cases, simply providing a means via which the programmer can indicate the desired usage case would make it possible for the compiler to serve all three cases better than the existing rules. (I'd probably find `auto float` most useful myself, but other programmers would legitimately find other types more useful).

"enums as you note fall in a curious murky middle ground between numbers and abstract values"

Well, not really. Flags are just abstract values too, with pre-specified "union"/"intersection"/"is present" operations and marshaling.

"Java in which every storage location is either a primitive or a promiscuous object reference."

I guess you might not have seen EnumSet in Java then. It's quite compact and efficient. Look at the docs:

http://docs.oracle.com/javase/7/docs/api/java/util/EnumSet.html

Java does have a special class EnumSet which gives you the power to express the "collection of boolean flags" idea nicely. Only problem is the horrible syntax for those, but C# wouldn't have that problem thanks to operator overloading.

Throw in a conversion from EnumSet to int and implement the necessary operators on enums to return an EnumSet of the given enum and you'd have the imo perfect solution to this problem. Still very sad that C# 1.0 got this one wrong (although without generics I don't think such an implementation would've been possible, so too bad we don't have a time machine).

PS: Do you think it's possible that the C# team would add a more Java-like enum type in the future? (Now that you're not working at MS anymore this is obviously even more speculation than before, but hey^^) Naming would obviously be pretty hard though, but still a valuable concept and you seem to agree.

That's a great idea, IMHO.
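The EnumSet-with-operators suggestion can be sketched in C# along these lines; `Day` and `DaySet` are hypothetical types, and a `HashSet` stands in for the packed bit representation a real implementation would use:

```csharp
using System;
using System.Collections.Generic;

// A Java-style enum value plus a set type with overloaded operators,
// standing in for what a built-in "flags" abstraction might feel like.
public sealed class Day
{
    public static readonly Day Sat = new Day("Sat");
    public static readonly Day Sun = new Day("Sun");

    private readonly string name;
    private Day(string name) { this.name = name; }
    public override string ToString() { return name; }

    // day1 | day2 builds a set, with no visible bit arithmetic.
    public static DaySet operator |(Day a, Day b) { return new DaySet(a, b); }
}

public sealed class DaySet
{
    private readonly HashSet<Day> days;
    public DaySet(params Day[] items) { days = new HashSet<Day>(items); }
    public bool Contains(Day d) { return days.Contains(d); }
}

class Demo
{
    static void Main()
    {
        DaySet weekend = Day.Sat | Day.Sun;
        Console.WriteLine(weekend.Contains(Day.Sat)); // True
        Console.WriteLine(weekend.Contains(Day.Sun)); // True
    }
}
```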

Am I missing something or is there a typo in the following lines?

`private static readonly Bit ZeroBit = new Bit();`

`private static readonly Bit OneBit = new Bit();`

ZeroBit and OneBit are exactly the same?

Wait, now I see. The key is this line:

`return this == Natural.ZeroBit ? "0" : "1";`

Since this is meant to be an immutable type with the head and tail assigned by the constructor, is there any particular reason that head and tail aren't also marked as readonly?

Good catch, I meant to do that. Thanks!

Pingback: The Morning Brew - Chris Alcock » The Morning Brew #1446

This is very close to having an Ouroboros. For one, it's recursive. For two, "Natural tail, Bit head" - if you changed them to be, "Natural head, Bit tail", you'd have it. It's still really clever for all that. Thanks for the laugh.

As defined, even though your Natural.ZeroBit object has no mutable characteristics, it possesses one of the most troublesome traits of mutable classes: a storage location of type Bit encapsulates *identity*. Deriving ZeroBit and OneBit classes from Bit (each of which could, if desired, expose a singleton instance as an optional convenience) would allow the value of a bit to be encapsulated via its *type*. Among other things, this would allow Number types to be made serializable merely by adding [Serializable] tags to the appropriate types. Otherwise, both serialization and deserialization would require special logic since no "normal" serializer is going to recognize any significance of Bit.ZeroBit.

Well, you'd also need a public parameterless constructor. It strikes me as a bit ugly for a serializer to work by constructing a zero and mutating it into a one.

How about having Number be an abstract type, with concrete types NumberZero, NumberOne, NumberWithLsbZero, and NumberWithLsbOne? Each type could override a factory method PrefixWith(bool)? Whether a value was zero, a one, or something else would be discernible from its type without need for reference identity.

Why not just use the integers 0 and 1 to represent 0 and 1? Provided you only manipulate them via a set of operations guaranteed to produce results in the set {0, 1}, you preserve representational integrity in your implementation. Moreover, you could take advantage of this representation; for example, bit addition can be implemented via a switch statement or array lookup over carryIn + X + Y (this can take on only four values {0, 1, 2, 3}).

Well, if you're going to go ahead and use integers, why not just go ahead and use integers? I hear C# supports a variety of operations on integers out of the box.

Ah, flippancy, almost the lowest form of internet response. The brief is to find a representation of bits convenient for doing bit-stream arithmetic. I was hoping for some constructive discussion on the relative merits of my suggestion.

The title of the series is "Math from Scratch". Eric specifically said that he was going to restrict his use of predefined classes. int is the last class he's going to allow himself to use, unless maybe double.

There's a lot worse on the internet than flippancy. For instance, there is bluntness: I think your suggestion is a non-starter.

Yes, the title of the series is "Math from Scratch", yet Eric is not starting with Peano's axioms or set construction or any other more fundamental scheme.

You understand that I'm suggesting only that he use 0 and 1 to stand for 0 and 1? My understanding is that Eric's goal is to build up arithmetic from operations over (bounded numbers of) binary digits; what virtue do you imagine he sees in choosing an unnecessarily awkward representation here?

Actually incoherent profanity is the lowest form of response, then name calling. Flippancy is actually pretty high on the list, unfortunately.

We certainly could use the integers 0 and 1 to represent the bits 0 and 1, but then what do you do with them? If you ever add, subtract, multiply or divide them then that gives the lie to the whole premise of the series. The only thing I'd want to use them for is testing their identity. By choosing objects that have no built-in semantics whatsoever I'm emphasizing that a zero bit could logically be anything at all.

My thinking is this: you are trying to build up arbitrary precision arithmetic from "fundamental" finite functions. For example, bit addition (x, y, carryIn) -> (z, carryOut) is a mapping from bool^3 to bool^2. That you might use integer arithmetic as an incidental implementation detail of this function (e.g., as a table look-up) in no way implies that integer arithmetic is necessary for the implementation. To put it another way, if your building blocks are just finite maps, who cares what implementation you choose? Might as well pick a convenient one.

I personally think that concern trolling is the lowest form of internet response, because it's the easiest to mistake for a real response and so you waste more time responding to it.

in your second bullet point I think the final zero should be Zero, no?

Why don't you use null to represent zero externally? Internally (as the tail element) it does specify an infinite stream of zeros - the number zero - so why not externally?

I will note that this simplifies the class invariant - from (⊳ is well-founded, a ⊳* b ∧ b.tail == null → b.head == One where a ⊳ b ↔ a.tail == b) to (⊳ is well-founded, a.tail == null → a.head == One where a ⊳ b ↔ a.tail == b)

Also there is quite a bit of precedent for this decision - not only C with NULL=0=false but also the standard von Neumann integers have the empty set - which is quite similar to null - as 0.

Because you want to be able to call ToString on zero without crashing.

Seems like I'm too used to Python (where str(None) does work correctly)

What about having the public type be a struct which contains a single field `InternalNumber num;` (a class type)? The struct's `ToString` could then return "0" if `num` is null and return `num.ToString()` otherwise. Such an approach would allow `Number` to behave with value semantics (including having a default value of zero rather than null) despite the variable size of its backing data.
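A minimal sketch of that suggestion, with a stub `InternalNumber` standing in for the real linked-list class (both names are hypothetical):

```csharp
using System;

// Stub for the reference-type implementation the struct would wrap.
sealed class InternalNumber
{
    public override string ToString() { return "1"; } // stand-in for real digits
}

// A value-type wrapper: default(Number) has a null field, which the wrapper
// interprets as zero, so the struct's default value is a genuine number.
public struct Number
{
    private readonly InternalNumber num;
    public Number(InternalNumber num) { this.num = num; }

    public override string ToString()
    {
        return num == null ? "0" : num.ToString();
    }
}

class Demo
{
    static void Main()
    {
        Number zero = default(Number);  // no initialization needed
        Console.WriteLine(zero);        // prints "0", not a crash
        Console.WriteLine(new Number(new InternalNumber())); // prints "1"
    }
}
```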

I considered such an approach. It's a good one because it hides the nullity of the type completely, at the cost of a slightly more complex implementation. I decided though that it would be more pedagogically interesting to keep this a reference type. Were I actually making a production-quality big integer library I'd do it that way.

I wish .NET languages had a concise way of expressing the idea that a value type which encapsulates a storage location of another type is supposed to behave like that other type. If the encapsulated storage location were required to be a concrete type, a compiler could implement this feature (without needing any run-time support) by auto-generating wrapper members for each member of the encapsulated type that aren't defined by the struct itself.

The biggest problem I see is that it wouldn't be possible to unbox the wrapped type as the wrapper type. I wonder what the implications would be of having "box" and "unbox" instructions call virtual methods (defined in ValueType): `Object VirtBox()` and `void ControlledBoxer.VirtUnboxFrom(Object)`? That would allow Number to define its methods so an empty Number would box as an empty Number, but a non-empty number would box as the encapsulated NumberClass (rather than null). Unboxing could accept either a Number or a NumberClass.

Pingback: Math from scratch in Haskell: zero and one | Loominate