2 votes

I'm developing a time-critical algorithm in Java and therefore am not using BigDecimal. To handle rounding errors, I instead set an upper error bound, below which different floating-point numbers are considered to be exactly the same. Now the problem is: what should that bound be? Or in other words, what's the biggest possible rounding error that can occur when performing computational operations with floating-point numbers (addition, subtraction, multiplication and division)?

From an experiment I've done, it seems that a bound of 1e-11 is enough.

PS: This problem is language independent.

EDIT: I'm using double data type. The numbers are generated with Random's nextDouble() method.

EDIT 2: It seems I need to calculate the error based on how the floating-point numbers I'm using are generated. The nextDouble() method looks like this:

public double nextDouble() {
    return (((long)(next(26)) << 27) + next(27))
        / (double)(1L << 53);
}

Based on the constants in this method, I should be able to calculate the biggest possible error that can occur for a floating-point number generated with this method specifically (its machine epsilon?). I'd be glad if someone could post the calculation.
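From the constants alone, every possible result of nextDouble() is a 53-bit integer divided by 2^53, so adjacent possible results are spaced 2^-53 apart. A quick sketch to check that spacing (Spacing is just a throwaway class name):

```java
public class Spacing {
  public static void main(String[] args) {
    // nextDouble() builds a 53-bit integer and divides by 2^53,
    // so every possible result is an integer multiple of 2^-53.
    double step = 1.0 / (double) (1L << 53);
    System.out.println(step); // 2^-53, roughly 1.11e-16
  }
}
```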

What is the range of magnitudes of your numbers? – Patricia Shanahan
Does it matter? Isn't the only thing that matters the decimal part, which is independent of how big the numbers are? But to answer you: it can differ based on the input. The range might be [0-100] or [0-10000]. – user2340939
en.wikipedia.org/wiki/Machine_epsilon Machine epsilon is the technical term you are looking for; the Wikipedia page also discusses some ways of calculating it. Not sure if this is what you're looking for. – lmcphers
Yes, it does matter, since they are floating-point numbers. If you have numbers around 1e90, you won't see anything changing near 10e-11, or even 10e30. – Sami Kuhmonen
Why do you think a single error bound will work for all situations? Numerical analysis is a bit more complicated than that. :-) – Mark Dickinson

2 Answers

2 votes

The worst case rounding error on a single simple operation is half the gap between the pair of doubles that bracket the real number result of the operation. Results from Random's nextDouble method are "from the range 0.0d (inclusive) to 1.0d (exclusive)". For those numbers, the largest gap is about 1e-16 and the worst case rounding error is about 5e-17.

Here is a program that prints the gap for some sample numbers, including the largest result of Random's nextDouble:

public class Test {
  public static void main(String[] args) {
    System.out.println("Max random result gap: "
        + Math.ulp(Math.nextAfter(1.0, Double.NEGATIVE_INFINITY)));
    System.out.println("1e6 gap: "
        + Math.ulp(1e6));
    System.out.println("1e30 gap: "
        + Math.ulp(1e30));
  }
}

Output:

Max random result gap: 1.1102230246251565E-16
1e6 gap: 1.1641532182693481E-10
1e30 gap: 1.40737488355328E14

Depending on the calculation you are doing, errors can accumulate across multiple operations, giving bigger total rounding error than you would predict from this simplistic single-operation approach. As Mark Dickinson said in a comment, "Numerical analysis is a bit more complicated than that."
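As a rough illustration of accumulation (a made-up example, not taken from the question's algorithm): 0.1 is not exactly representable in binary, and each addition can round, so repeatedly adding it drifts away from the mathematically exact sum.

```java
public class Accumulation {
  public static void main(String[] args) {
    // Sum 0.1 ten thousand times. The exact answer is 1000, but the
    // representation error in 0.1 plus per-addition rounding accumulates.
    double sum = 0.0;
    for (int i = 0; i < 10_000; i++) {
      sum += 0.1;
    }
    System.out.println(sum);          // close to, but not exactly, 1000.0
    System.out.println(sum - 1000.0); // the total accumulated error
  }
}
```

The final error here is orders of magnitude larger than the half-ulp bound on any single addition, which is exactly the point.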

0 votes

This depends on:

  1. Your algorithm
  2. the magnitude of involved numbers

For example, consider the function f = a * (b - (c + d)). No big deal, or is it?

It turns out it is, when d << c, b = c, and a can be anything, but let's just say it's big.

Let's say:

a = 1e200
b = c = 5
d = 1e-90

This is totally made up, but you get the point. The point is, the difference in magnitude between c and d means that

c + d = c (d is swallowed completely, because d << c)
b - (c + d) = 0 (should be -1e-90)
a * (b - (c + d)) = 0 (should be -1e110)
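You can watch this happen directly (a sketch with slightly cleaned-up values, 1e200 and 1e-90, so the exponent arithmetic is easy to check):

```java
public class Cancellation {
  public static void main(String[] args) {
    double a = 1e200, b = 5, c = 5, d = 1e-90;
    // d is far below half an ulp of c, so it vanishes in the addition.
    System.out.println(c + d);             // prints 5.0
    // b == c, so the subtraction cancels to exactly zero.
    System.out.println(b - (c + d));       // prints 0.0, should be -1e-90
    // Multiplying by a huge a cannot bring the lost information back.
    System.out.println(a * (b - (c + d))); // prints 0.0, should be -1e110
  }
}
```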

Long story short, some operations (notably subtraction of nearly equal values) can kill you. Also, it is not so much the generating function you need to look at as the operations your algorithm performs on the numbers.