All right, we have an arbitrary-precision rational arithmetic type now, so we can do arithmetic on fractions with confidence. Remember, the problem I set out to explore here was this: a double is actually a fraction whose denominator is a large power of two, so fractions that do not have powers of two in their denominators cannot be represented with full fidelity by a double. Now that we have this machinery, we can see exactly what the representation error is:

Console.WriteLine(Rational.FromDouble(2.0 / 5.0));

This program fragment produces the output

3602879701896397/9007199254740992

Clearly that is not 2/5, but how close is it?

Console.WriteLine(Rational.FromDouble(2.0 / 5.0) - Rational.FromIntegers(2, 5));

The result is astonishing:

1/45035996273704960

People often point out that doubles introduce representation error but seldom do they quantify that error. In this case the error is less than one part in 45 quadrillion. That’s a tiny error! And yet of course we can all think of situations where that error has compounded to the point where we expect $0.15 to be output and instead we get $0.1499999999. As I’ve often said, use decimals for currency computations.
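You can check these numbers independently of our Rational type; for example, Python's `fractions.Fraction` performs the same exact conversion that `Rational.FromDouble` does, recovering the power-of-two fraction hiding inside the double:

```python
from fractions import Fraction

# A double is an exact fraction with a power-of-two denominator;
# Fraction(float) recovers that exact fraction.
d = 2.0 / 5.0
exact = Fraction(d)
print(exact)                  # 3602879701896397/9007199254740992

# The representation error relative to the true value 2/5:
error = exact - Fraction(2, 5)
print(error)                  # 1/45035996273704960
```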

Rounding problems aside, a lot of work has gone into making sure that doubles can be “round tripped” back to decimal strings when need be. But what if we have some more unusual fraction? Suppose we’ve done some computations in doubles and the result is the double closest to five sevenths:

Console.WriteLine(Rational.FromDouble(5.0 / 7.0));
Console.WriteLine(Rational.FromDouble(5.0 / 7.0) - Rational.FromIntegers(5, 7));

This produces

6433713753386423/9007199254740992
1/63050394783186944

The question then is: suppose we have that double in hand. How do we get back out the fact that this is probably intended to be five sevenths?

You’re all computer programmers. Give it some thought before you read on. To make the problem easier let us suppose without loss of generality that the double we’ve been given is between zero and one.

.

.

.

Probably a bunch of different thoughts occurred to you.

We could start by computing each fraction: 1.0/2.0, 1.0/3.0, 2.0/3.0, 1.0/4.0, … and so on. Compare each fraction to the target, keep track of the current closest fraction, and pick the fraction that is closest after a certain number of rationals go by. Unfortunately this gets inefficient as the denominators get large; there are too many fractions to check.
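That enumeration might be sketched like so; I'm using Python's `Fraction` as a stand-in for the Rational type, and the denominator cap is an arbitrary choice to keep the search small:

```python
from fractions import Fraction

def closest_fraction(target: float, max_denominator: int) -> Fraction:
    """Enumerate every n/d with 0 < n < d <= max_denominator and
    return the candidate closest to the target."""
    best = Fraction(0)
    best_error = abs(target - best)
    for d in range(2, max_denominator + 1):
        for n in range(1, d):
            candidate = Fraction(n, d)
            error = abs(target - candidate)
            if error < best_error:
                best, best_error = candidate, error
    return best

print(closest_fraction(5.0 / 7.0, 10))  # 5/7
```

Note the cost: the number of candidates grows as the square of the denominator cap, which is exactly the inefficiency complained about above.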

We could multiply the double by 2.0, 3.0, 4.0, 5.0, and so on, and see if any of the results are close to a whole number. If we multiply by 7.0 and get something close to 5.0 then it was 5 / 7. This seems more efficient, and in fact we could code up a solution that uses either of these techniques even without our rational number type; they can be done entirely in fast double arithmetic.
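This multiplication idea can be sketched entirely in double arithmetic; here's a version in Python, where the tolerance and the denominator cap are arbitrary choices of mine:

```python
def find_fraction(x: float, max_denominator: int = 1000, tolerance: float = 1e-9):
    """Multiply x by successive integers; if some multiple lands nearly
    on a whole number, we have a plausible numerator and denominator."""
    for d in range(1, max_denominator + 1):
        product = x * d
        n = round(product)
        if abs(product - n) < tolerance:
            return n, d
    return None  # no small-denominator fraction found within the cap

print(find_fraction(5.0 / 7.0))  # (5, 7)
```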

You probably also thought “can I use a divide and conquer strategy?” Binary search and related strategies are well known for their ability to rapidly converge on a correct solution.

Unfortunately a straight-up binary search of the rationals does not solve our problem. Imagine we have `6433713753386423/9007199254740992` in hand and we are searching for the small-denominator fraction that it “really” is. It does us no good to say:

- It’s between 0 and 1. Let’s try 1/2.
- It is bigger than 1/2.
- Let’s split the difference and try 3/4.
- It is smaller than 3/4.
- Let’s split the difference and try 5/8.
- …

Binary search in this manner does not actually hit all the fractions, and in fact only hits those that we are specifically trying to avoid: those with powers of two in the denominator!

However, there is a way to binary-search the space of rational numbers between zero and one that does hit all of them. **Next time on FAIC:** we’ll use it to solve our problem.

Oh. You stopped at the most interesting moment. Are you going to talk about something like https://en.wikipedia.org/wiki/Farey_sequence ?

Or rather https://en.wikipedia.org/wiki/Stern%E2%80%93Brocot_tree

Yep

It’s a little thing I invented called a “cliffhanger”.

I’m not going to talk about the Stern-Brocot tree directly but I’m going to do the math that it uses.
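For the impatient, that math can be sketched as a mediant-based binary search; this is a Python sketch with `Fraction` standing in for the Rational type, and the stopping tolerance is an arbitrary choice for illustration, not the approach the next episode necessarily takes:

```python
from fractions import Fraction

def mediant_search(target: float, tolerance: float = 1e-9) -> Fraction:
    """Binary-search the rationals in (0, 1) by taking mediants.
    The mediant of a/b and c/d is (a+c)/(b+d); unlike splitting the
    difference arithmetically, repeatedly taking mediants can reach
    every rational, not just those with power-of-two denominators."""
    a, b = 0, 1  # low bound 0/1
    c, d = 1, 1  # high bound 1/1
    while True:
        mediant = Fraction(a + c, b + d)
        if abs(target - mediant) < tolerance:
            return mediant
        if target < mediant:
            c, d = mediant.numerator, mediant.denominator
        else:
            a, b = mediant.numerator, mediant.denominator

print(mediant_search(5.0 / 7.0))  # 5/7
```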

Binary search? Why not solve it with a multiplication by (trying to not spoil)?

What’s wrong with the original power-of-two fraction? It’s exactly the number passed in and it’s a rational 🙂 To get a better answer, you need to define what better means, which I’m sure you’ll do before actually answering. Anyway, if you’re shooting for the smallest denominator (one way to define “better”), then a whole series of denominators can be identified by looking at the repetition of bits. In a double, you have 53 of them, so this only scales so far, but I’d argue it should be good enough to find most interesting ones, for a certain definition of interesting.

I’m going to guess it’s something similar to this?

```csharp
public static Rational FromDouble2(double value)
{
    int numerator = 0;
    int denominator = 2;
    while (true)
    {
        double currentValue = (double)numerator / denominator;
        if (value == currentValue)
        {
            return new Rational(numerator, denominator);
        }
        else if (value < currentValue)
        {
            denominator += 1;
        }
        else
        {
            numerator += 1;
        }
    }
}
```

Hey Eric!

I’m a CS student and I’m really enjoying following your blog. I find implementing these things and understanding data types / C# / algorithms in depth highly exciting and interesting!

I’ve thought about the problem and came up with this:

```csharp
public static Rational ApproximateFraction(Rational n)
{
    BigInteger x = n.Numerator;
    BigInteger y = n.Denominator;
    BigInteger r;
    BigInteger epsilon = BigInteger.One;
    while ((r = BigInteger.Abs(x % y)) > epsilon)
    {
        x = y;
        y = r;
    }
    return new Rational(n.Numerator / y, n.Denominator / y);
}
```

It basically uses the Euclidean algorithm, but instead of checking for division with no remainder, it tolerates a small residual: the loop continues only while the remainder exceeds epsilon, where epsilon is 1 here but could be chosen dynamically (larger for large numbers, zero for small ones). It returns 2/5 and 5/7 for your two examples; one would have to test it on other fractions, but I’m fairly confident that by adjusting epsilon you can recover the original fraction.
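The claim is easy to check; here is a translation of that tolerance-based Euclidean algorithm into Python, with `as_integer_ratio` and `Fraction` standing in for the Rational type:

```python
from fractions import Fraction

def approximate_fraction(numerator: int, denominator: int, epsilon: int = 1) -> Fraction:
    """Run the Euclidean algorithm, but stop once the remainder is no
    larger than epsilon; then divide both terms by the final divisor."""
    x, y = numerator, denominator
    while (r := abs(x % y)) > epsilon:
        x, y = y, r
    return Fraction(numerator // y, denominator // y)

# Recover 5/7 from the double nearest to five sevenths.
n, d = (5.0 / 7.0).as_integer_ratio()
print(approximate_fraction(n, d))  # 5/7
```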


This is a bit old, but did you look at BigRational?

https://bcl.codeplex.com/wikipage?title=BigRational&referringTitle=Home

Not sure how well this is implemented, but it sounds similar to what you’re doing.

I did, and it was sufficiently buggy that I decided to not use it. I may discuss how vexing it is to have an open source project marked as “v1.0” — indicating that it is ready to ship — and marked as coming from Microsoft — indicating that it is trustworthy — where some of the methods are so broken that plainly the code must never have even been run by its authors.

Then again, maybe I just discussed it sufficiently there. Suffice to say: I was vexed.

My first thought would be to do something like:

- perform the division (giving something like 0.40000000012345)
- truncate/round (giving 0.4)
- convert the floating point number to a fraction (getting 4 and 10)
- reduce (getting 2 and 5)

I think this probably doesn’t quite work though. Trying it out, you don’t necessarily know how much you should round, and converting to a fraction is not necessarily so easy either. It’s easy for 2/5 but probably wouldn’t work in the general case, and would probably still have the problem you mentioned of being stuck in powers of 2. You probably are looking for a solution that sticks with integers.

Somewhat related, I would be very curious to know what you think of Douglas Crockford’s suggested future data type DEC64.

I think DEC64 combines the worst parts of existing decimal and binary formats. Compared to IEEE754 decimals, it looks simple and elegant, but it suffers from similar problems in addition to new ones.

Thanks, I should have looked up criticisms of it before mentioning it. I like Crockford, but it doesn’t sound like DEC64 is very practical.

I see a lot of uses for decimal fixed-point types; I see far fewer good uses for decimal floating-point types. One can sensibly define semantics such that all operations other than division will either yield exact results or trap, and after performing division and rounding the quotient as desired, code can then compute a residual. I suppose one could do likewise with decimal floating-point types, but in most cases simply using a slightly-longer fixed-point type would be more efficient.

Even IEEE 754 binary64 is a good enough type for most financial calculations, as long as your library doesn’t forget to round the values after each significant calculation step to keep the values consistent and equatable. Yes, it can’t measure the world’s GDP down to the last American cent, but the US national debt can easily be stored in a double.
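Whether one agrees or not, that “round after every step” discipline looks something like this Python sketch, rounding to whole cents after each operation:

```python
def to_cents(x: float) -> float:
    """Round a dollar amount to the nearest cent."""
    return round(x * 100) / 100

# The classic failure: 0.1 + 0.2 is not exactly 0.3 in binary floating point,
print(0.1 + 0.2 == 0.3)            # False
# but rounding the sum back to whole cents restores consistency.
print(to_cents(0.1 + 0.2) == 0.3)  # True
```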

It seems like the solution to this is pretty obvious: just pre-generate a lookup table. A Dictionary should work just fine for this task. Because all 2^63 values will be populated ahead of time, it should be trivially easy to look up the correct fraction for any double value you might need.

NB: For those of you who might object to the amount of storage media necessary to implement this, I have done the math, and I believe sufficient storage for this lookup table already exists, provided we’re allowed to commandeer every hard drive on Earth.

I like it!

Actually, the storage may be rather easily whittled down to 2^52 by simply storing the values for mantissas in the range 0.5 to 1. For any smaller value simply multiply by a power of two fraction, and multiply the denominator of the computed rational number likewise. Given today’s storage capacities, 2^52 wouldn’t be cheap, but wouldn’t be unobtainable.

Pingback: The dedoublifier, part four | Fabulous adventures in coding
