The integers are an extension to the naturals that closes them over subtraction. Rather than do anything fancy, I’m simply going to say that an integer is a natural with an associated sign, and that zero is always “positive”. We don’t want to end up in a situation where there are two forms of zero, since that is confusing. (We could of course have three signs, positive, negative and zero, but, meh. We don’t gain any great benefits from having a third sign.)

This time, however, I'm going to make a struct. C# requires that `default(SomeStruct)`, where all the fields are their default values, be a valid value of the type. The default should logically be zero, as it is with all the built-in number types. So rather than checking for null, as we did with the reference type `Natural`, I'm going to check the `sign` field for null, which will only be possible if we've got a default struct. We'll simply replace that with zero. (I am in general opposed to modifying the formal parameters of a method; I prefer to treat them as read-only variables. I'll make an exception in this case because it keeps the code short and clear.)

We could of course do the same thing for naturals; we could make our natural class a nested private class and make Natural a struct whose sole field was of the private class, and then enforce the invariant that if the field is null, it is considered to be zero. (Or we could make the `Natural` class a private nested class of the `Integer` class, and not expose naturals at all.) I decided when I started this series that this would be pedagogically distracting, but then again, so is dealing with nulls, so it's a bit of a wash really. Were I designing a real-world large integer library I would probably make sure that all the exposed types were structs.

As before, rather than using a bool for the sign, I’ll just make a couple of objects:

```csharp
public struct Integer : IComparable<Integer>, IEquatable<Integer>
{
    private sealed class Sign
    {
        public override string ToString()
        {
            return this == Positive ? "+" : "-";
        }
    }
    private static readonly Sign Positive = new Sign();
    private static readonly Sign Negative = new Sign();
    public static readonly Integer Zero = new Integer(Positive, Natural.Zero);
    public static readonly Integer One = new Integer(Positive, Natural.One);
    private readonly Sign sign;
    private readonly Natural magnitude;
    private bool IsDefault { get { return sign == null; } }
    private Integer(Sign sign, Natural magnitude)
    {
        this.sign = sign;
        this.magnitude = magnitude;
    }
    private static Integer Create(Sign sign, Natural magnitude)
    {
        if (magnitude == Natural.Zero)
            return Zero;
        else
            return new Integer(sign, magnitude);
    }
    public override string ToString()
    {
        Integer x = this;
        if (x.IsDefault)
            x = Zero;
        string str = x.magnitude.ToString();
        if (x.sign == Negative)
            str = "-" + str;
        return str;
    }
```

I also want to be able to convert integers to naturals and naturals to integers. Even though we could convert any non-null natural to an integer, I’ll make them both explicit conversions so that it becomes clear when we’re treating a natural as an integer.

```csharp
public static explicit operator Natural(Integer x)
{
    if (x.IsDefault)
        x = Zero;
    if (x.sign == Negative)
        throw new InvalidCastException();
    return x.magnitude;
}
public static explicit operator Integer(Natural x)
{
    if (ReferenceEquals(x, null))
        throw new ArgumentNullException("x");
    return Create(Positive, x);
}
```

We’re off to a good start here.

**Next time on FAIC**: we’ll do integer negation, addition, subtraction, increment, decrement, multiplication and power.

Isn’t your Sign class a poster child for Java enums?

Yes. I like the way that Java does enums and wish C# had a similar feature. The C# approach is really there for backwards compatibility with COM-style enums.

One particular reason why I like Java enums is that you can add your own methods to them. My line of thought as I’m writing my code that uses the enum goes like this:

– Oh, I need to do something different for each enum here.

– I can use a switch!

– No! Switches are bad OO because they break encapsulation.

– I’ll put an abstract method on my enum, and implement it polymorphically on each enum instance.

In Java, I am happy at this point. In C#, it comes down to whether the enum is one of mine, in which case I’m sweet because I wrote it as a Java-style enum by hand, or someone else’s, in which case I have to go with the switch anyway, grumbling.
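The "Java-style enum by hand" mentioned here can be sketched in C# as a sealed class with a fixed set of static instances, each supplying its own implementation of an abstract method — the same shape as the `Sign` class above, just with behavior attached. The `Operation` type and its members below are invented for illustration:

```csharp
using System;

// A hand-written "Java-style enum" in C#: a closed set of instances,
// each implementing the abstract method polymorphically, so callers
// never need a switch.
public abstract class Operation
{
    public static readonly Operation Plus = new PlusOp();
    public static readonly Operation Times = new TimesOp();

    private Operation() { } // no instances other than the ones above

    // The "abstract method on my enum":
    public abstract int Apply(int x, int y);

    private sealed class PlusOp : Operation
    {
        public override int Apply(int x, int y) { return x + y; }
    }

    private sealed class TimesOp : Operation
    {
        public override int Apply(int x, int y) { return x * y; }
    }
}
```

Because the constructor is private and the derived classes are nested, the set of values is closed, just like a Java enum: `Operation.Plus.Apply(2, 3)` dispatches to the right implementation without any switch at the call site.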

The case where the someone else happened to be Eric has sadly not happened yet.

If it’s someone else’s, you at least do have the ability to write your own extension methods here; it’s fairly useful, and you may yet end up with a switch in it, but at least it’s encapsulated somewhere still and can be used to ensure that that enum type is handled the same way everywhere.

And I know it’s not ideal, but if you wanted to seal up your switch into a base class that allows you to delegate things polymorphically, it wouldn’t be hard to just accept into the extension method said base type (or have it be a method on the base class and accept an enum value into it) and have it delegate to the class that way. It’s certainly ugly, but if you keep the switch contained to one class, you’re not really breaking encapsulation (that’s my opinion, but I’ll assert its truthiness at least).
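The extension-method idea above might look something like this sketch, where `DayKind` and `IsWeekend` are invented names standing in for "someone else's enum" and the behavior you want to attach to it:

```csharp
using System;

// Sketch: encapsulating the switch over an enum you don't own in a
// single extension method, so every call site handles the enum the
// same way. DayKind and IsWeekend are hypothetical.
public enum DayKind { Weekday, Saturday, Sunday }

public static class DayKindExtensions
{
    public static bool IsWeekend(this DayKind day)
    {
        switch (day)
        {
            case DayKind.Weekday:
                return false;
            case DayKind.Saturday:
            case DayKind.Sunday:
                return true;
            default:
                // If someone later adds a value to the enum, we fail
                // loudly here rather than silently misbehaving at many
                // scattered call sites.
                throw new ArgumentOutOfRangeException("day");
        }
    }
}
```

The switch still exists, but it exists exactly once, and the `default` case turns the "now-incomplete switch" hazard into an immediate, diagnosable exception.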

True, that can be better than just having a switch in completely unrelated code, but it still won’t be visible to anyone who doesn’t include your class. In a big codebase, like the one I happen to be working with, that’s the norm. If someone saw fit to add to the enumeration, they would not know that your now-incomplete switch is a bug waiting to happen.

But it does at least make it evident what you were thinking when people come to maintain your code, which is a definite plus. If the extension method lasts and continues to be useful, it could aid in the campaign to use Java-style enums as a matter of course.

A note on your note on three signs: positive, negative and zero. I am sure most readers of your blog are well aware of this, but floating point numbers actually have a signed zero; that is, a float can be +0 or -0.

God yes. IEEE-754 and its signed zeros are one of those really annoying things that serve no purpose but can make life unnecessarily complicated in some situations.

Good thing that Eric’s explicitly making sure that there’s only one valid Zero representation here.

Actually, if you ask Kahan (one of the key figures in the design of the IEEE float), it's a very important feature of floats that simplifies their usage. AFAIK it's there to make sure that 1/(1/-Inf) is still -Inf.

I have read quite a few of Kahan's whitepapers and he knows what he is talking about. Kahan even had the foresight, when recruited by Intel in the 1970s to help out with the 8087, to argue that floats should be decimal, not binary. Even these days there are "modern" programming languages and databases with no native support for decimals, forcing me to rely on strings to represent monetary amounts.

As EL doesn't model Inf as an Integer, there's no need for a signed zero.

I wouldn’t be surprised if the negative zero behavior ended up being a compromise between those who wanted more zero-ish things and those who wanted only one. Semantically, the value of 1/x should depend upon whether x was the result of a calculation which underflowed but was definitely positive, a calculation which underflowed but was definitely negative, or a calculation which simply yielded zero. While it’s in some ways nice to have 1/+underflow and 1/-underflow yield properly-signed infinities rather than NaN, the lack of an unsigned zero makes the numbering system non-symmetrical, and makes it impossible to uphold the identities [X+Y equivalent-to Y+X], [X-Y equivalent-to both X+(-Y) and -(Y-X)], and [X-0 equivalent-to X]. Personally, I think having either one, three, or four zeroes [perhaps distinguishing subtractions which yield zero from a constant “zero” value, with the latter one alone being guaranteed to act as an additive identity in all cases] would have been better than having two, but the IEEE design is what it is.
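The round-trip identity mentioned above is easy to demonstrate; the following snippet shows that in IEEE arithmetic, 1/-Inf yields a negative zero that compares equal to +0 but still divides back to negative infinity:

```csharp
using System;

// Demonstrating the signed-zero behavior under discussion:
// 1.0 / -Inf produces -0.0, which compares equal to +0.0,
// yet 1.0 / -0.0 recovers -Inf rather than +Inf.
class SignedZeroDemo
{
    static void Main()
    {
        double negZero = 1.0 / double.NegativeInfinity; // -0.0
        Console.WriteLine(negZero == 0.0);              // -0.0 equals +0.0
        Console.WriteLine(double.IsNegativeInfinity(1.0 / negZero));
    }
}
```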

An amusing alternative representation is “difference variables” where an integer x is represented as the difference of two naturals, p and q — that is, x = p – q. A “normalised difference variable” has at least one of p and q be zero.

Assuming p, q, r, s are naturals, then…

Addition based on summing naturals is trivial. If x = p – q and y = r – s then x + y = (p + r) – (q + s).

Subtraction is similar. If x = p – q and y = r – s then x – y = (p – q) – (r – s) = (p + s) – (r + q).

Normalisation is simple. If x = p – q and n = min(p, q), then (p – n) – (q – n) is normalised.

Multiplication is also straightforward now. If x = p – q and y = r – s then xy = (p – q)(r – s) = (pr + qs) – (ps + qr).

What’s appealing about this representation is you don’t need different cases for different signs.
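Those four rules can be sketched directly in code. Here `ulong` stands in for the series' `Natural` type (which is not self-contained enough to reproduce here), and the `DiffInt` struct and its member names are invented for illustration:

```csharp
using System;

// Sketch of the "difference variable" representation: an integer is a
// pair (p, q) of naturals denoting p - q. ulong stands in for Natural.
public struct DiffInt
{
    public readonly ulong P;
    public readonly ulong Q;
    public DiffInt(ulong p, ulong q) { P = p; Q = q; }

    // (p - q) + (r - s) = (p + r) - (q + s)
    public static DiffInt operator +(DiffInt x, DiffInt y)
    {
        return new DiffInt(x.P + y.P, x.Q + y.Q);
    }

    // (p - q) - (r - s) = (p + s) - (q + r)
    public static DiffInt operator -(DiffInt x, DiffInt y)
    {
        return new DiffInt(x.P + y.Q, x.Q + y.P);
    }

    // (p - q)(r - s) = (pr + qs) - (ps + qr)
    public static DiffInt operator *(DiffInt x, DiffInt y)
    {
        return new DiffInt(x.P * y.P + x.Q * y.Q,
                           x.P * y.Q + x.Q * y.P);
    }

    // Subtract min(p, q) from both components; at least one becomes zero.
    public DiffInt Normalize()
    {
        ulong n = Math.Min(P, Q);
        return new DiffInt(P - n, Q - n);
    }
}
```

Note that no operation ever branches on a sign: for example, (5, 2) represents 3 and (1, 4) represents -3, and adding them gives (6, 6), which normalizes to (0, 0), i.e. zero.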

That’s a pretty neat approach and I did consider it, but ultimately decided that it was a bit too odd. If I get as far as rationals then of course I will have to deal with pairs of naturals.

And the current implementation with the sign is basically exactly this, but using only (0, b) and (a, 0).

How far are you going to take this? For example, after Integer you could extend it to Rational.

I’m looking forward to the next article!

Derp, now I see your comment “If I get as far as rationals…”.

Personally I am anxious to see EL tackle the problem of representing irrational numbers.

I am definitely not going to get that far!

If the representation was expressive enough to handle in finite space the limit of a convergent infinite series, you could represent all the usual irrationals. That would be cool. And you could do all the basic operators easy enough. The ToString might be the hassle.

I think we could devise irrationals that could not be represented that way. In general, I think you’d need a math library that was intelligent and could understand natural language descriptions of numbers.

Come on Eric, be ambitious!

When your number class can represent Chaitin’s Omega constant, you can be done.


Just to minimize the number of hard-coded strings, I would consider replacing the "-" within Integer.ToString() with x.sign.ToString().

Integers… Boring.

How about representing numbers with continued fractions, for a change? They can be used to compute with arbitrary precision, even with irrational numbers.

Wouldn't it be easier to have properties for the sign and magnitude that replace nulls?

```csharp
private Sign SafeSign
{
    get { return sign ?? Positive; }
}

private Natural SafeMagnitude
{
    get { return magnitude ?? Natural.Zero; }
}
```

Pingback: Math from scratch, part seven: division and remainder | Fabulous adventures in coding