4
votes

I am using the C++ Eigen 3 library in my program. In particular, I need to multiply two Eigen sparse matrices and store the result in another Eigen sparse matrix. However, I noticed that if some entries of an Eigen sparse matrix are smaller than 1e-13, the corresponding entries of the result are 0 instead of a small number. Say I multiply a sparse identity matrix a with another sparse matrix b. If the top-left entry of b, i.e., b(0,0), is smaller than 1e-13, such as 9e-14, then the top-left entry of the result c = a*b, i.e., c(0,0), is 0 instead of 9e-14.

Here is the code I tested:

#include <iostream>
#include <Eigen/Sparse>
#include <Eigen/Dense>

using namespace std;
using namespace Eigen;

int main() {
    // Sparse case: a is the 2x2 identity, b has a tiny (9e-14) top-left entry.
    DynamicSparseMatrix<double, RowMajor> a(2,2);
    DynamicSparseMatrix<double, ColMajor> b(2,2);
    DynamicSparseMatrix<double, RowMajor> c(2,2);
    a.coeffRef(0,0) = 1;
    a.coeffRef(1,1) = 1;
    b.coeffRef(0,0) = 9e-14;
    b.coeffRef(1,1) = 1;
    cout << "a" << endl;
    cout << a << endl;
    cout << "b" << endl;
    cout << b << endl;
    c = a * b;
    cout << "c" << endl;
    cout << c << endl;

    // Dense case for comparison (explicitly zero-initialized, since dynamic
    // dense matrices are not initialized by default).
    Matrix<double, Dynamic, Dynamic> a2 = MatrixXd::Zero(2,2);
    Matrix<double, Dynamic, Dynamic> b2 = MatrixXd::Zero(2,2);
    Matrix<double, Dynamic, Dynamic> c2(2,2);
    a2(0,0) = 1;
    a2(1,1) = 1;
    b2(0,0) = 9e-14;
    b2(1,1) = 1;
    cout << "a2" << endl;
    cout << a2 << endl;
    cout << "b2" << endl;
    cout << b2 << endl;
    c2 = a2 * b2;
    cout << "c2" << endl;
    cout << c2 << endl;

    return 0;
}

Here is the strange output:

a
1 0
0 1
b
Nonzero entries:
(9e-14,0) (1,1)
Column pointers:
0 1 $
9e-14 0
0 1
c
0 0
0 1
a2
1 0
0 1
b2
9e-14 0
0 1
c2
9e-14 0
0 1

You can see that the dense multiplication is fine, but the result of the sparse multiplication is wrong in the top-left entry, and b is printed in a strange format.

I stepped through the Eigen source code in a debugger, but couldn't find where the two numbers are actually multiplied. Any ideas?

Have you checked that the a and b matrices look like you think? – Martin Kristiansen
I just did, and edited the question. b has a very strange output format. – Xatan

3 Answers

5
votes

There are several reasons to truncate small values to zero in a linear algebra library.

When values of very different magnitude are combined, the floating-point format cannot represent the result well. For example, 9e-14 + 1.0 equals exactly 1.0 in single precision, because the spacing of representable numbers near 1.0 (the machine epsilon, about 1.2e-7 for float) is far larger than 9e-14; in double precision the sum survives, but a larger operand such as 1000.0 would swallow the 9e-14 just the same.
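To make this concrete, here is a small standalone illustration (plain C++, nothing Eigen-specific):

#include <iostream>
#include <limits>

int main() {
    // float: the spacing of representable numbers near 1.0 is ~1.19e-7,
    // so adding 9e-14 changes nothing.
    std::cout << (1.0f + 9e-14f == 1.0f) << std::endl;      // prints 1 (true)

    // double: the spacing near 1.0 is ~2.22e-16, so the sum survives...
    std::cout << (1.0 + 9e-14 == 1.0) << std::endl;         // prints 0 (false)

    // ...but a larger operand swallows the 9e-14 just the same.
    std::cout << (1000.0 + 9e-14 == 1000.0) << std::endl;   // prints 1 (true)

    std::cout << std::numeric_limits<float>::epsilon() << std::endl;   // ~1.19e-07
    std::cout << std::numeric_limits<double>::epsilon() << std::endl;  // ~2.22e-16
    return 0;
}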

Getting into matrix-specific stuff, small values can make your matrix ill-conditioned and cause unreliable results. Rounding errors increase as you get close to zero, so a tiny rounding error could produce wildly varying results.

Finally, it helps maintain sparsity of the matrices. If a calculation creates very small nonzeros, you can truncate them and keep the matrix sparse. In a lot of physical applications like finite element analysis, the small values aren't physically significant and they can be removed safely.

I am not familiar with Eigen, but I glanced at their documentation and couldn't find a way to change this rounding threshold. They probably don't want users to make the wrong choice and compromise the reliability of their results. They allow you to do "pruning" manually, but not to disable the automatic pruning.
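If you do decide to prune by hand, a minimal sketch using SparseMatrix::prune(reference, epsilon) from Eigen's sparse API could look like the following (the 1.0 and 1e-12 values below are just example choices, picked to mirror the behaviour observed in the question):

#include <iostream>
#include <Eigen/Sparse>

int main() {
    Eigen::SparseMatrix<double> m(2,2);
    m.coeffRef(0,0) = 9e-14;
    m.coeffRef(1,1) = 1.0;
    std::cout << m.nonZeros() << std::endl;  // 2
    m.prune(1.0, 1e-12);                     // drop entries no larger than about 1.0 * 1e-12
    std::cout << m.nonZeros() << std::endl;  // 1: the 9e-14 entry is gone
    return 0;
}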

The big picture is: if your calculation relies on floating-point values very close to zero, you should choose a different numerical method, because the output will never be reliable. For example, 3D matrix rotations can be replaced with quaternion rotations. Such methods are called "numerically stable", and they are a main focus of numerical analysis.

2
votes

I am not sure exactly where and how the truncation is done in Eigen, but the epsilon value used for it is defined by NumTraits<Scalar>::dummy_precision() in Eigen/src/Core/NumTraits.h.
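For reference, you can print that threshold yourself; the values in the comments are what the Eigen 3 headers I checked define:

#include <iostream>
#include <Eigen/Core>

int main() {
    std::cout << Eigen::NumTraits<float>::dummy_precision() << std::endl;   // 1e-05
    std::cout << Eigen::NumTraits<double>::dummy_precision() << std::endl;  // 1e-12, so the 9e-14 in the question falls below it
    return 0;
}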

0
votes

I'm aware this question is very old now. But for future reference: DynamicSparseMatrix is deprecated; instead, one should use the regular SparseMatrix (in uncompressed mode, if necessary).

Also, sparse matrix products are no longer pruned automatically [1]. If one actually wants to prune the result of a sparse matrix product, one now needs to write it explicitly:

C = A*B;                     // don't suppress numerical zeros
C = (A*B).pruned();          // suppress numerical zeros (exact)
C = (A*B).pruned(ref, eps);  // suppress anything smaller than ref*eps

where, in the latter call, eps is optional and defaults to NumTraits<Scalar>::dummy_precision() for the scalar type in use.
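Applied to the example in the question, a minimal sketch with the non-deprecated SparseMatrix (filled via triplets; the explicit pruned(1.0, 1e-12) call is just one way to reproduce the old truncation on demand) might look like:

#include <iostream>
#include <vector>
#include <Eigen/Sparse>

int main() {
    typedef Eigen::Triplet<double> T;

    // a = 2x2 identity, b has a tiny (9e-14) top-left entry, as in the question.
    std::vector<T> ta, tb;
    ta.push_back(T(0, 0, 1.0));
    ta.push_back(T(1, 1, 1.0));
    tb.push_back(T(0, 0, 9e-14));
    tb.push_back(T(1, 1, 1.0));

    Eigen::SparseMatrix<double> a(2,2), b(2,2), c(2,2);
    a.setFromTriplets(ta.begin(), ta.end());
    b.setFromTriplets(tb.begin(), tb.end());

    c = a * b;                       // no automatic pruning: the 9e-14 entry is kept
    std::cout << c << std::endl;

    c = (a * b).pruned(1.0, 1e-12);  // explicitly drop entries no larger than about 1.0 * 1e-12
    std::cout << c << std::endl;

    return 0;
}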