7 votes

Consider a polynomial such as:

p = [1 -9 27 -27];

obviously the (triple) real root is 3:

polyval(p,3)

0

However, using the roots function

q = roots([1 -9 27 -27]);

with format short:

q =

   3.0000 + 0.0000i
   3.0000 + 0.0000i
   3.0000 - 0.0000i

and to check whether the roots are real:

bsxfun(@eq,ones(size(q)),isreal(q))

0
0
0

And, even worse, with format long I get:

roots([1 -9 27 -27])

ans =

  3.000019414068325 + 0.000000000000000i
  2.999990292965843 + 0.000016813349886i
  2.999990292965843 - 0.000016813349886i

How can I calculate roots of a polynomial correctly?

Minor note: your check to see if the roots are real is not correct. isreal(q) gives false if the array q has complex storage, but some entries may still have zero imaginary part. In fact, isreal(q) gives false, whereas for x = q(:).', isreal(x), end gives true, false, false. The first entry of q is real, the others are not, and q as a whole is not real. - Luis Mendo
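Following up on that comment, a more robust way to test for (numerically) real roots is to compare the imaginary parts against a tolerance. This is only a sketch; the tolerance 1e-10 is an arbitrary choice you should adapt to your problem:

```matlab
q = roots([1 -9 27 -27]);
tol = 1e-10;                      % arbitrary tolerance, adjust to taste
isRealRoot = abs(imag(q)) < tol;  % elementwise logical vector
realRoots = real(q(isRealRoot));  % keep only the numerically real roots
```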

3 Answers

6 votes

You may have to work symbolically. You need the Symbolic Math Toolbox for that.

  1. Define the polynomial as a symbolic function. You can (a) use poly2sym to generate the symbolic polynomial from its coefficients. Or (b) better yet, define the symbolic function directly using a string. That way you avoid the loss of accuracy that may result from representing the coefficients as double.

  2. Use solve, which symbolically solves algebraic equations.

Code with option (a):

p = [1 -9 27 -27];
ps = poly2sym(p);
rs = solve(ps);

Code with option (b):

ps = sym('x^3-9*x^2+27*x-27');
rs = solve(ps);

In either case, the result is symbolic:

>> rs
rs =
 3
 3
 3

You may want to convert to numeric values using

r = double(rs);

In your example, this gives

>> format long
>> r
r =
     3
     3
     3
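Note: on recent MATLAB releases (R2017b and later), passing a character string to sym is deprecated; if option (b) warns or errors for you, the equivalent form uses str2sym:

```matlab
% Option (b) on newer releases: parse the polynomial from a string
ps = str2sym('x^3 - 9*x^2 + 27*x - 27');
rs = solve(ps);     % symbolic roots: 3, 3, 3
r  = double(rs);    % convert to numeric if needed
```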
5 votes

This is due to floating point inaccuracies. Have a look at this post for details: Is floating point math broken?

One thing you can do is round off the answers to some number of decimal places, like this:

q = round(roots([1 -9 27 -27]), 4) % rounding off to 4 decimal places
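As a sanity check (a sketch; the two-argument form of round requires R2014b or newer, and round acts on the real and imaginary parts separately), you can substitute the rounded values back into the polynomial:

```matlab
p = [1 -9 27 -27];
q = round(roots(p), 4);   % round to 4 decimal places
polyval(p, q)             % residuals should be numerically zero
```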
0 votes

This is not specific to your polynomial. In general you have to expect that a root of multiplicity m has a relative floating point error of magnitude mu^(1/m), where mu ≈ 1e-15 is the machine precision. In this case the multiplicity is m = 3, so the error is in the range of 10^(-5), which is exactly the scale of the errors in your results.

That this happens even with clean integer coefficients is a result of the method MATLAB uses: it computes the eigenvalues of the companion matrix, and the eigenvalue algorithm transforms the integer matrix into a general floating point matrix, with the corresponding rounding errors, in the first step of the algorithm.

Other algorithms have empirical tests for multiplicities and the associated clusters of approximate roots, and are thus able to correct this error. In this case you could achieve the same by replacing each root with the average of the 3 roots.
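A minimal sketch of that cluster-averaging correction for this particular polynomial (it assumes you already know all three computed roots belong to one cluster):

```matlab
p = [1 -9 27 -27];
r = roots(p);         % three roots scattered around 3 with error ~1e-5
a = real(mean(r));    % average of the cluster; conjugate imaginary parts cancel
% a is far closer to the true root 3 than any individual computed root
```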


Mathematically, you have some polynomial

p(x)=(x-a)^m*q(x)

with a root at x=a of multiplicity m. Due to floating point operations, the solver effectively "sees" a polynomial

p(x)+e(x)

where the coefficients of e(x) are on the order of the coefficients of p times mu. Close to the root a, this perturbed polynomial can be effectively replaced by

(x-a)^m*q(a)+e(a) = 0  <==>  (x-a)^m = -e(a)/q(a)

so that the solutions form an m-pointed regular polygon or star centered at a with radius |e(a)/q(a)|^(1/m), which should be in the region of |a|*mu^(1/m).
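You can observe this m-th-root error scaling numerically. A sketch (the exact digits vary by MATLAB release and platform):

```matlab
p = [1 -9 27 -27];
r = roots(p);
radius = max(abs(r - 3))   % observed cluster radius, about 2e-5
3 * eps^(1/3)              % predicted scale |a|*mu^(1/m) with a = 3, m = 3
```

Both numbers come out on the order of 1e-5, matching the estimate above.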