I was recently asked by a reader why in C#, int divided by long has a result of long, even though it is clear that when an int is divided by a (nonzero) long, the result always fits into an int.
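For concreteness, here is a minimal sketch of the behaviour in question. It follows from C#’s binary numeric promotions: when one operand of / is a long, the other operand is converted to long, and the quotient is a long.

int n = 10;
long d = 3;
var quotient = n / d;                  // n is promoted to long; the division is done in longs
Console.WriteLine(quotient.GetType()); // System.Int64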
I agree that this is a bit of a head-scratcher. After scratching my head for a while, two reasons not to have the proposed behaviour came to mind.
First, why is it even desirable to have the result fit into an int? You’d be saving merely four bytes of memory, and probably cheap stack memory at that. You’re already doing the math in longs anyway; it would probably be more expensive in time to truncate the result down to int. Let’s not have a false economy here. Bytes are cheap.
A second and more important reason is illustrated by this case:
long x = whatever;
x = 2000000000 + 2000000000 / x;
Suppose x is one. Should this be two integers, each equal to two billion, added together with unchecked integer arithmetic, resulting in a negative number which is then converted to a long? Or should this be the long 4000000000?
Once you start doing a calculation in longs, odds are good that it is because you want the range of a long in the result and in all the calculations that get you to the result.
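To make the difference concrete, here is a small sketch with x equal to one. The unchecked cast on the last line is my simulation of the proposed int-result rule, not anything the language actually does:

long x = 1;
long actual = 2000000000 + 2000000000 / x;                     // division yields a long, so the addition is done in longs: 4000000000
long proposed = unchecked(2000000000 + (int)(2000000000 / x)); // int addition wraps around to -294967296, then converts to long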
You missed a third reason (although I would put it as the first reason):
You said in the first paragraph “it is clear that when an int is divided by a (nonzero) long, the result always fits into an int.” That assertion is wrong. An int divided by an int doesn’t always fit into an int, so an int divided by a long doesn’t either (since a long can represent all the same numbers):
int dividend = 1 << 31;             // int.MinValue, that is, -2147483648
long longDivision = dividend / -1L; // 2147483648: fits in a long, but not in an int
int intDivision = dividend / -1;    // overflows; throws OverflowException at run time on .NET
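(The mathematically correct quotient on the last line is 2147483648, one more than int.MaxValue, so there is no int that can hold it; on .NET the division throws a System.OverflowException.)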