4
votes

I just read a book about JavaScript. The author mentioned a floating-point rounding error in the IEEE 754 standard.

For example, adding 0.1 and 0.2 yields 0.30000000000000004 instead of 0.3,

so (0.1 + 0.2) == 0.3 returns false.

I also reproduced this error in C#.

So these are my questions:

How often does this error occur? What is the best-practice workaround in C# and JavaScript? Which other languages exhibit the same behavior?

4
There's a book that calls this a 'rounding error'?? – AakashM
It's not an "error" since it is by design, but it's a nuisance. I'm sure there are some cases where this would be useful behaviour, but in practice (at least in JavaScript) I think it would be much more useful if the default representation was a "proper" (exact base-10) decimal. I've literally never wanted a binary floating point number. – nnnnnn
The book is "JavaScript for Web Developers", 2nd Edition, by Nicholas C. Zakas. The "error" is described on page 33. – Henk

4 Answers

10
votes

It's not an error in the language. It's not an error in IEEE 754. It's an error in the expectation and usage of binary floating point numbers. Once you understand what binary floating point numbers really are, it makes perfect sense.

The best practice in C# is to use System.Decimal (aka decimal) which is a decimal floating point type, whenever you're dealing with quantities which are naturally expressed in decimal - typically currency values.

See my articles on .NET binary floating point and decimal floating point for more information.

4
votes

The error is NOT a rounding error; it's simply that some values cannot be exactly represented in the IEEE 754 binary format. See Jon Skeet's article on binary floating point in .NET for further reading.

For dealing with numbers like those in your example (base-10), you should use the decimal datatype in C#, as it can represent these numbers exactly, so you get the values you'd expect.

A typical way is to define some epsilon value and check whether the result lies within targetValue ± epsilon:

const double epsilon = 0.000001; // pick a tolerance that fits your problem

if (valueA >= valueB - epsilon && valueA <= valueB + epsilon)
{
    // treat valueA as equal to valueB
}
4
votes

The closest representations of those three numbers in double precision floating point are:

  • 0.1 --> 0.10000000000000001 = D(3FB99999 9999999A)
  • 0.2 --> 0.20000000000000001 = D(3FC99999 9999999A)
  • 0.3 --> 0.29999999999999999 = D(3FD33333 33333333)

The next larger representable number beyond 0.29999999999999999 is:

  • 0.30000000000000004 = D(3FD33333 33333334)

The closest representation of

  • 0.10000000000000001 + 0.20000000000000001 is 0.30000000000000004

So you are comparing 0.29999999999999999 and 0.30000000000000004. Does this give you more insight as to what is happening?
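You can see these exact doubles from JavaScript by printing 17 significant digits, which is enough to uniquely identify any IEEE 754 double:

```javascript
// 17 significant digits reveal the actual doubles being compared.
console.log((0.1).toPrecision(17));       // "0.10000000000000001"
console.log((0.2).toPrecision(17));       // "0.20000000000000001"
console.log((0.3).toPrecision(17));       // "0.29999999999999999"
console.log((0.1 + 0.2).toPrecision(17)); // "0.30000000000000004"
```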

As for using decimal instead of binary representations: that doesn't always work either. Take one third, for example:

  • 1/3 = 0.3333333333333333333333333333333...

which has no exact representation even using decimal digits. Any computations should always take representation error into account.

3
votes

Well, now that you know about the issue, the workaround is to keep it in mind whenever you compare floating-point numbers.

Your example is not exactly something you would use in a real program, but there are ways to evaluate such things if really needed. One example (in C#) could be:

if ((0.1f + 0.2f).ToString("0.0") == "0.3")

This will be true; there are probably many other ways. The point is: if you ever run into these situations, remember the potential issues. It is this kind of experience that makes for a better developer/programmer.
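The same round-for-display trick works in JavaScript via toFixed, which formats a number to a fixed number of decimal places before the string comparison:

```javascript
// Round to one decimal place before comparing, the JavaScript
// analogue of the C# ToString("0.0") trick above.
console.log((0.1 + 0.2).toFixed(1) === "0.3"); // true
```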