Now we come to the controversial one. As I pointed out back in 2011, the `%` operator is the *remainder* operator. When extending the division and remainder operators from naturals to integers, I require that the following conditions be true for dividend `x`, non-zero divisor `y`, quotient `q` and remainder `r`:

- `x == y * q + r`
- if `x` and `y` are non-negative then `0 <= r < y`
- `(-x) / y == x / (-y) == -(x / y)`

Let’s take a look at the consequences of these three facts.

First off, when the sign of an operand changes, the quotient changes sign but not magnitude: `11 / 7 == 1`, so `-11 / 7 == -1` and `11 / -7 == -1` and `-11 / -7 == 1`.

I hope this is reasonable; it would be very strange if `-11 / 7` and `-(11 / 7)` had different values. [1. Well, perhaps not *so* strange. Suppose unary minus means “multiply by -1”. It is already the case in integer arithmetic that `(n*x)/y` and `n*(x/y)` can be very different, so maybe it is reasonable that `((-1)*x)/y` not equal `(-1)*(x/y)`. Suppose unary minus means “subtract from zero”. Then we have `(-x)/y == (0-x)/y`. The distributive property doesn’t hold for integer arithmetic: `(n-x)/y` need not equal `(n/y)-(x/y)`, so there is no reason to suppose that `(0-x)/y == (0/y)-(x/y) == -(x/y)`.]

Now that we know the quotients, the remainders follow from the first identity: `11 % 7 == 4`, so `-11 % 7 == -4` and `11 % -7 == 4` and `-11 % -7 == -4`.
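These rules are easy to check mechanically. Here is a small sketch in Python; note that Python’s own `//` and `%` floor toward negative infinity, so the truncating behavior described in this article is emulated by hand, and `trunc_divmod` is an illustrative helper, not part of the library being built:

```python
def trunc_divmod(x, y):
    """Quotient truncated toward zero; remainder takes the sign of the dividend."""
    q = abs(x) // abs(y)        # do the computation in the naturals
    if (x < 0) != (y < 0):
        q = -q                  # quotient sign comes from the operand signs
    r = x - y * q               # remainder from the defining identity
    return q, r

for x, y in [(11, 7), (-11, 7), (11, -7), (-11, -7)]:
    q, r = trunc_divmod(x, y)
    assert x == y * q + r       # the first condition holds in every case
    print(f"{x} / {y} == {q}, {x} % {y} == {r}")
```

Running it reproduces exactly the quotients and remainders listed above.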

If you don’t like that — if you’d rather `-11 % 7` be `3` — then one of three things has to give. Either the remainder operator doesn’t actually return the remainder, or the remainder is defined by some equation other than “what’s left over when you multiply the quotient by the divisor and subtract from the dividend”, or there are scenarios where the quotient is computed such that `(-x) / y != -(x / y)`. All of those are unacceptable to me, so the rules we have are that quotients and remainders are both determined by doing the computation in the naturals, and then assigning the sign accordingly:

```csharp
public static Integer operator /(Integer x, Integer y)
{
    if (x.IsDefault) x = Zero;
    if (y.IsDefault) y = Zero;
    if (y == Zero)
        throw new DivideByZeroException();
    else
        return Create(
            x.sign == y.sign ? Positive : Negative,
            x.magnitude / y.magnitude);
}

public static Integer operator %(Integer x, Integer y)
{
    if (x.IsDefault) x = Zero;
    if (y.IsDefault) y = Zero;
    if (y == Zero)
        throw new DivideByZeroException();
    else
        return Create(x.sign, x.magnitude % y.magnitude);
}
```

And we’re done with the integers! **Next time on FAIC:** we’ll do some 2300-year-old mathematics and then solve an actual programming problem using our big integer library.

“If you don’t like that — if you’d rather -11 % 7 be 3” should be “If you don’t like that — if you’d rather -11 % 7 be 4”

No, it’s not a typo. The point Eric is making is that remainder and modulus are not the same: -11 is congruent to 3 modulo 7, but the remainder is -4. The way to think of modular arithmetic is like a cycle: the congruence class for 3 is 3, 3+7 = 10, 3+7+7 = 17, 24, 31, etc. So, if you go backwards, it is also congruent to 3-7 = -4 (the remainder), 3-7-7 = -11, -18, -25, etc. Some people think you can use the remainder operation to get a positive modulus (3), but that’s not how it works for negative numbers.
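The congruence-class point can be made concrete. In this Python sketch, the built-in `%` happens to be the floored “modulus” flavor, while `trunc_rem` (an illustrative helper) hand-rolls the truncating “remainder” flavor from the article:

```python
def trunc_rem(x, y):
    """C#-style remainder: the sign follows the dividend."""
    q = abs(x) // abs(y)
    if (x < 0) != (y < 0):
        q = -q
    return x - y * q

# Python's own % gives the floored modulus:
assert -11 % 7 == 3              # -11 is congruent to 3 modulo 7
assert trunc_rem(-11, 7) == -4   # the remainder under truncating division

# Both answers are in the same congruence class: they differ by a multiple of 7.
assert (3 - (-4)) % 7 == 0
```

So “modulus” and “remainder” agree for non-negative operands and diverge only when signs enter the picture, exactly as the reply says.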

“it would be very strange if -11 / 7 and -(11 / 7) had different values”

As you already mention in your footnote, this can happen anyhow, and no, I wouldn’t consider it strange.

For most practical purposes, what you want is the modulus (remainder? Lots of people call `%` in Java/C# “modulo”, and I’ve seen both terms used interchangeably in the literature), under the assumption that it’s always positive:

Array indexing, time computations (get a POSIX timestamp and compute time of day), applications in computer graphics, etc.
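In a language whose `%` truncates, those use cases are usually handled by normalizing the remainder. A hypothetical sketch in Python (`positive_mod` is an illustrative name, not a standard function):

```python
def positive_mod(x, n):
    # (x % n + n) % n is non-negative for n > 0 under either the
    # truncated or the floored convention of %, so this idiom is portable.
    return (x % n + n) % n

# Hypothetical hash-bucket example: a negative hash code still lands in range.
bucket_count = 8
for h in (-11, -1, 0, 5, 123):
    assert 0 <= positive_mod(h, bucket_count) < bucket_count
```

The double-`%` idiom costs one extra operation but reads the same in C#, Java, and C, which is why it shows up so often in indexing and clock-arithmetic code.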

Pretty much the only reason I can see for picking “remainder” instead of “mod” for `%` is that hardware generally implemented one but not the other (well, x86 does; AArch64 doesn’t have a mod; no idea what the rest of the flock does).

Well too late to complain about that now.

Agreed. Integer division has an implied rounding, so I wouldn’t expect the following to *necessarily* be true: round(-11 / 7) == -round(11 / 7).

Because there are many different ways to round numbers. http://en.wikipedia.org/wiki/Rounding

The IDIV instruction on x86 works by converting the operands to positive values and then setting the sign of the results, but the IDIV instruction is seldom the fastest way to divide a number by a constant. Other ways of dividing by a constant are faster using round-toward-negative-infinity semantics. Division by powers of two is an extreme example (a single-instruction shift will achieve round-toward-negative-infinity division), but the same is true of other constants as well. For example, given a signed value in EDX, if EAX is available as scratchpad, one may divide EDX by 15 via:

```
mov eax,11111111h
imul edx
add edx,08888888h
```

If one is trying to compute a rounded quotient, a natural way to write the expression using round-toward-negative-infinity semantics would be (n+7)/15; such rounding is sufficiently common that a compiler could fold the addition into the above sequence:

```
mov eax,11111111h
imul edx
add edx,7FFFFFFFh
```

For a compiler to produce truncate-toward-zero division, it would have to use a more complicated method of rounding by 15 (slowing things down even in cases where the operand was positive).

When is that third property actually useful?

I would have preferred the property that (x + y)/y = x/y + 1, which is the closest thing you can get to a distributive property. It’s nice from a practical perspective because the remainder equals the modulus, so `bucketIndex = item.GetHashCode() % bucketCount` works as expected. It’s also nice because every quotient has the same number of dividends map to it, so `tileIndex = xCoordinate / tileSize` works as expected instead of creating a group at zero with twice as many items.

I understand that it’s too late to change the C# definition of division, and I understand that it’s easier to implement the current definition in hardware, but you seem to be arguing that the current definition is actually better, so I’d like to understand the circumstances where you would take advantage of the (-x)/y = -(x/y) property.
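The contrast the commenter is drawing can be checked directly. In this Python sketch, `//` is the floored (uniformly periodic) quotient, and `trunc_div` (an illustrative helper) emulates C#’s truncating quotient:

```python
def floor_div(x, y):
    return x // y                      # Python's // already floors

def trunc_div(x, y):
    q = abs(x) // abs(y)               # truncate toward zero, as C# does
    return -q if (x < 0) != (y < 0) else q

x, y = -4, 7
# Uniformly periodic: (x + y)/y == x/y + 1 holds for every x under floor division...
assert floor_div(x + y, y) == floor_div(x, y) + 1
# ...but not under truncating division, which doubles up around zero:
assert trunc_div(x + y, y) != trunc_div(x, y) + 1
```

Under truncation, the thirteen dividends -6 through 6 all map to quotient 0 while every other quotient gets exactly seven dividends, which is the “group at zero with twice as many items” effect the comment describes.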

Hear hear! I loathe the definition of signed division which truncates toward zero, and am hard-pressed to think of any cases where it’s actually useful. Although from a hardware perspective it is often easiest to implement signed division by converting both operands to positive numbers, performing the division, and then correcting the sign of the results, and this often makes it slightly cheaper to compute the truncate-toward-zero remainder than the modulus, there are times when uniformly-periodic division [such that (a+b)/b==(a/b)+1] may be much faster than truncate-toward-zero. For example, if D is a compile-time constant that happens to be a power of two (e.g. 8), the periodic quotient floor(n/D) could be computed as (n >> 3). Fast and easy. The way the new C standard defines /, it becomes something like [assuming >> of a signed-value generates a sign extension] ((n + ((n >> 31) & 7)) >> 3). Ick. The remainder function is even ickier.
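The shift-versus-bias contrast is easy to simulate in Python. Python’s `>>` on negative integers is an arithmetic shift, and its ints aren’t 32 bits wide, but for values in 32-bit range `n >> 31` still yields -1 for negative `n`, so the sequences the comment describes can be modeled faithfully (helper names are illustrative):

```python
def floor_div8(n):
    return n >> 3                      # arithmetic shift: floors toward -infinity

def trunc_div8(n):
    # the biased sequence from the comment, assuming 32-bit sign extension:
    return (n + ((n >> 31) & 7)) >> 3

for n in (-20, -8, -1, 0, 1, 17):
    assert floor_div8(n) == n // 8                      # plain shift = floor
    expected_trunc = -(abs(n) // 8) if n < 0 else n // 8
    assert trunc_div8(n) == expected_trunc              # biased shift = truncate
```

The single-shift version handles all inputs; the truncating version pays for the bias computation even when `n` is positive, which is the comment’s complaint.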

I understand that, because code which does something like “roundedResult = ( value + ((value >” is okay if the programmer knows scaleFac will ALWAYS be a power of two, but a “div” operator could optimize if it is one now, even if it might change in future.

Personally, if I were designing a language, I would follow Pascal’s lead and have the “/” operator always perform a floating-point division [of type “double” unless the result was immediately used as a “float”]. For integer division, modulus, and remainder, I would have separate div (truncate toward negative infinity), zdiv (truncate toward zero, in case anyone wants it for some reason), qdiv (do whatever’s fastest), along with mod (true modulus), rem (remainder, in case anyone wants it), and multOf (equivalent to “a mod b == 0” or “a rem b == 0”, whichever is faster). Doing that would make it clear when programmers cared about the distinctions and when they didn’t.

There’s a slight typo when you explain the remainders. The article says:

11 % 7 == 4, so -11 % 7 == -4 and 11 / -7 == 4 and -11 / -7 == -4.

Instead of:

11 % 7 == 4, so -11 % 7 == -4 and 11 % -7 == 4 and -11 % -7 == -4.

Another great article as always!

It’s your footnote that forces me to be such a Platonist about mathematics.

Maths consists of Platonic forms, which we apprehend imperfectly, and computer hardware apprehends even more imperfectly.

If computer integer arithmetic is not an imperfect shadow of a deeper and cleaner reality, it would drive me nuts.

I mean, more so than computers do anyway.

If division were regarded not as a single-valued function, but as a mapping from a pair of numbers to an equivalence set, that would I think simplify things greatly. For a given denominator d, one could define an equivalence relation (q,r)==(q+i,r-di) for all integer values of q, r, and i. The result of dividing n by d would be the set of things equivalent to (0,n). Typically, one would want to select one of the possible pairs of values, though which one would be preferred would depend on the application. The key thing to note, however, is that unlike multiplication whose result is a single answer, the result of a division is fundamentally a field of answers, of which one is preferred.
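That equivalence-set view is easy to sketch. In this hypothetical Python helper, `division_class` enumerates a window of the (q, r) pairs that all satisfy the defining identity:

```python
def division_class(n, d, span=3):
    """A window of the (q, r) pairs with n == d*q + r, around the floored pair."""
    q0 = n // d
    return [(q0 + i, n - d * (q0 + i)) for i in range(-span, span + 1)]

pairs = division_class(-11, 7)
assert all(-11 == 7 * q + r for q, r in pairs)   # every member satisfies the identity
# Both conventions pick a member of the same set: the truncated-remainder
# pair (-1, -4) and the floored-modulus pair (-2, 3) are each in there.
assert (-1, -4) in pairs and (-2, 3) in pairs
```

Choosing a division convention then amounts to choosing which representative of the set is “preferred”, which is exactly the comment’s point.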

True indeed. Plus, a language where / was defined that way would signal clearly to the programmer, “here be dragons”. I guess I second the idea of / always doing double division, and integer division being supported by library methods.

I think having operators use a number type that is always closed under their operation is cleanest: `/` returns a real number, or the closest thing the computer supports, and sqrt returns a 2-tuple of complex numbers, similar to Mathematica.

But then, the real world intervenes – a popular language would have to be usable by people without university mathematics, who like their maths libraries to pay attention to performance.

There’s Plato’s forms and shadows again.

Interesting. I’ve always thought of math (numbers and operations on those numbers) as manmade shadows of a messy reality.

For example, if I say, “Diane has 2 times as much candy as Jack”, then that “2” is a shadow of a very messy reality.

What do I mean by that statement? Do I mean, “Whoa, she has way more candy than him”, or do I mean, “Nah, she doesn’t have that much more candy than him”? Are Jack and Diane adults buying bags of candy, or are they children getting pieces of candy. What kind of candy? Does “having candy” include the candy they already ate? Does it matter if they got their candy by trick-or-treating or by forcing their younger siblings to “share”?

Of course, there’s also the messy detail of counting candy. How many rolls of Smarties for one box of Nerds, or how many Laffy Taffy for a Snickers bar? Or even worse, which is more, Twix or 3 Musketeers?

In that sense, that manmade shadow called “2 times” helps me communicate and reason about the very messy reality of Halloween, one detail at a time. 🙂

I view integers as having precise meaning, and floating-point values as being approximations. There is only one integer value which behaves as an additive identity (zero), and only one value which behaves as a multiplicative identity (one). From there, there’s only one value which represents the sum of the multiplicative identity and itself (i.e. two), one value which represents the sum of “two” and the multiplicative identity (i.e. three), etc. When they don’t overflow, I view integers as representing those nice mathematical concepts.

As for operations being closed on their number types, that can’t quite work with any numeric type which is stored in a fixed amount of space (the behaviors of unsigned integer types are more like those of algebraic rings than those of numbers; e.g. x-y yields the value which, when added to y, will yield x; because 1u+0xFFFFFFFFu==0, 0u-1u==0xFFFFFFFFu). On the other hand, it shouldn’t be hard to define a language such that `longVar = intVar1 + intVar2` would yield a numerically-correct result even if the sum of the `intVar` variables would exceed the range of an `int`.
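The ring-like behavior described here is simple to model in Python by masking to 32 bits (`u32` is an illustrative helper standing in for C#’s `uint` arithmetic):

```python
MASK = 0xFFFFFFFF            # models a 32-bit unsigned int

def u32(n):
    return n & MASK

# 1u + 0xFFFFFFFFu == 0, and therefore 0u - 1u == 0xFFFFFFFF:
assert u32(1 + 0xFFFFFFFF) == 0
assert u32(0 - 1) == 0xFFFFFFFF

# x - y really is "the value which, when added to y, yields x":
x, y = 5, 0xFFFFFFF0
assert u32(u32(x - y) + y) == x
```

Every fixed-width operation is closed in this ring sense; it is closure over the *mathematical* integers that fixed-width types cannot provide, which is why the widening `longVar = intVar1 + intVar2` idea requires the language to pick the result type from context.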


Actually, now that I read the footnote again during the day, I see that Eric doesn’t address the situation where `(-x) / y == x / (-y) == -(x / y)` doesn’t hold even under the current system — which is where the whole “at least it’s consistent” argument falls down!

Trying the above with `x = int.MinValue` and `y` anything but 1 will result in `(-x) / y != -(x / y)`.

int.MinValue is of course a special case. The specification notes, for example, that dividing the minimum value by -1 is implementation-defined behavior.
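That special case is a consequence of two’s-complement asymmetry, which a Python simulation of unchecked 32-bit arithmetic makes visible (`wrap32` is an illustrative helper; the division below is exact, so Python’s flooring `//` agrees with the truncating quotient):

```python
def wrap32(n):
    """Reduce n to a signed 32-bit value, as unchecked C# arithmetic would."""
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

INT_MIN = -2**31
# Negating int.MinValue overflows right back to int.MinValue:
assert wrap32(-INT_MIN) == INT_MIN

# Hence the identity (-x)/y == -(x/y) fails for x = int.MinValue:
x, y = INT_MIN, 2
assert wrap32(-x) // y != -(x // y)
```

There is one more negative `int` than there are positive ones, so `-int.MinValue` has no representable answer and the wrapped result breaks the sign-symmetry identity, just as the comment observes.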