Long division, part two

This is a sequel to my 2009 post about division of long integers.

I am occasionally asked why this code produces a bizarre error message:

Console.WriteLine(Math.Round(i / 6000000000, 5));

Where i is an integer.

The error is:

The call is ambiguous between the following methods: 
'System.Math.Round(double, int)' and 'System.Math.Round(decimal, int)'

Um, what the heck?

What is going on here is: division of an int by a long converts the int to a long and produces a long as its result. That’s the fundamental problem here: the user wants a fraction rounded to five decimal places, but what they are getting is not even a fraction; integer division produces an integer quotient, simply discarding the fractional part. So this code is just wrong.

Unfortunately the compiler allows the bogus division to pass without warning, and then complains that the call to Round is ambiguous. There is no overload of Round that takes a long; there are applicable overloads that take double and decimal, and a long converts implicitly to either, but neither conversion is better than the other, so there is no basis upon which to deduce which one the developer wants. Hence the error.
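
To see what overload resolution is looking at, consider a quick sketch with an arbitrary value for i (the helper variable q is just for illustration):

int i = 123456789;
var q = i / 6000000000;          // i is converted to long, so q is a long
Console.WriteLine(q.GetType());  // System.Int64
Console.WriteLine(q);            // 0 (and it is 0 for any int i, since |i| < 6,000,000,000)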

The real danger here is that the developer will attempt to make the fix by attacking the stated error, rather than attacking the actual conceptual error. I have seen this “fix”:

Console.WriteLine(Math.Round((double)(i / 6000000000), 5));

Again, that performs the truncating integer division first, and only then converts the result to double; by that point the value is already a whole number, and rounding it to five decimal places accomplishes nothing.
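
A quick check with the same arbitrary value shows just how little the cast accomplishes:

int i = 123456789;
Console.WriteLine(Math.Round((double)(i / 6000000000), 5)); // prints 0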

The right way to do this is to not use integers:

Console.WriteLine(Math.Round(i / 6000000000.0, 5));  // double
Console.WriteLine(Math.Round(i / 6000000000.0m, 5)); // decimal
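
For instance, with i = 123456789 the true quotient is about 0.0205761, so both of these lines should print 0.02058.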

We see this sort of error often enough in real-world code that we’ve added a checker for it to the Coverity code quality analyzer; it turns up more frequently than you might think.

10 thoughts on “Long division, part two”

      • What would you think of the idea of regarding the value yielded by dividing one `Int32` by any integer type as being a special type which was implicitly convertible to `Int32` or `Int64`, but could not, without a warning, be implicitly *or explicitly* converted to any other type without being converted to `Int32` or `Int64` first? Such a design would allow “double d = (int32)(int1/int2);” or “int int3=int1/int2;”, but would squawk at “double d = int1/int2;”, implying that it should probably be converted to one of the other forms but the compiler can’t be 100% certain which one.

        • I am increasingly of the opinion that all implicit conversions on numeric types are more trouble than they are worth. There are types that follow our mathematical intuitions and there are types that are fast/small; the default should be the accurate ones, and we can go to the fast/small ones in physics engines or interop code as needed. The drive to make as much as possible implicit just seems to cause problems.

          Once again it comes back to my obsession with mechanisms vs policies. Code that works in the business domain should not have mechanistic things like “how many bits in this integer” intrude upon it; hide those details on the insides of mechanisms, or expose them only in the cases where the actual details of the mechanism have a crucial and compelling purpose in attaining a performance goal.

          • Integers, wrapping algebraic rings, strict-precision floats, and real values all represent distinct domains, and I think any good language should distinguish them. When computing hashcodes using repeated multiplications and additions, the fact that intermediate computations are performed on 32-bit integers isn’t a “speed” thing–it’s a semantic part of the algorithm. Likewise when the encoder-side of a CODEC performs single-precision floating-point calculations, it’s imperative that the decoder side perform those same calculations in bit-identical fashion.

            When dealing with integers or real values, however, I would favor an abstraction which would say that performing calculations with more precision is at least as good as performing them with less, in cases where it isn’t any slower, provided that a type exists which is guaranteed to hold intermediate values with their actual precision. I would also favor a means by which code could say “within this context, I want all calculations involving integers to be promoted to at least __ type”. In many cases, when evaluating an expression like `i1=i2*i3/i4;` on 32-bit variables, the cost of promoting the intermediate computations to `int64` may be no worse than performing overflow trapping on it (in some cases, as above, it might actually be cheaper).
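
            For concreteness, a minimal C# sketch of that kind of promotion (the values here are just placeholders):

            int i2 = 100000, i3 = 50000, i4 = 7;
            // i2 * i3 would overflow Int32; promoting the intermediate product to long keeps the exact value,
            // and the final quotient fits back into an int.
            int i1 = (int)((long)i2 * i3 / i4);
            Console.WriteLine(i1); // 714285714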

            I don’t know if you’ve been paying any attention to the evolution of C, but I think there’s a war there between compilers and programmers as to what various forms of “undefined behavior” [i.e. things which the standard authors said implementors could handle as they saw fit] should mean. Compiler writers want the freedom to use undefined behavior to enable optimizations, but I find it ironic that there’s no apparent effort to let programmers better specify what their code really needs. I would think a better type system could allow far more useful optimizations than are enabled by letting compilers turn:

            /* Note: x is promoted to int, so for x = 0xF000 the product
               overflows a signed int, which is undefined behavior. */
            uint16_t foo(uint16_t x)
            { return x*x; }

            void maybe_launch_missiles(void)
            { if (should_launch_missiles()) launch_missiles(); }

            uint16_t boz(void)
            { maybe_launch_missiles(); return foo(0xF000); }

            into

            uint16_t boz(void)
            { should_launch_missiles(); launch_missiles(); }

            While I would agree that the above implementation would comply with the standard, I would suggest that says more about the insufficiency of the standard than the goodness of the compiler.

            Whether one thinks integer overflow should be checked or unchecked, I hope you’ll agree that either option should be considered preferable to the hyper-modern C approach.

        • I think it’d be even easier to do what the VB family of languages (and I assume others?) did, and have one division operator for floating-point division, and a different operator for integer division. (Not that you could really do something like that in C# at this point.)

          • Pascal used `div` for integer division, and had `/` convert both operands to `real` (which for most Pascal compilers had sufficient precision to accommodate any integer losslessly). I think that’s the right approach for new languages to take, but C# of course needs to be able to run with existing code; I would think allowing an `Int32` quotient to be converted silently to `Int32` would allow most existing code to behave as intended without nuisance warnings, and one can reasonably argue that code that would convert an `Int32` quotient to any other type, implicitly or explicitly, should make clear that what’s being converted is in fact an `Int32` quotient.
