3 votes

So far, my understanding was that Eigen does not provide any specially optimized operations for triangular or symmetric matrices, and that it does not use any packed storage for them: both triangular and symmetric matrices are treated as normal matrices. Eigen does, however, have the concept of a view, and in the documentation they state that optimized operations are performed for both symmetric and triangular matrices. I also don't understand what they mean by "The opposite triangular part is never referenced and can be used to store other information":

TriangularView gives a view on a triangular part of a dense matrix and allows to perform optimized operations on it. The opposite triangular part is never referenced and can be used to store other information.
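As far as I understand, a view is used like this (a rough sketch of my understanding, assuming Eigen 3.x):

#include <iostream>
#include <Eigen/Dense>

int main() {
  using namespace Eigen;

  Matrix3d A = Matrix3d::Random();
  Vector3d b = Vector3d::Random();

  // View the lower triangular part of A; the upper part is never read.
  // Optimized triangular operations: product and forward substitution.
  Vector3d y = A.triangularView<Lower>() * b;
  Vector3d x = A.triangularView<Lower>().solve(y);  // x should equal b

  std::cout << "x - b =\n" << x - b << std::endl;  // ~0 up to round-off
  return 0;
}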

They mention the same thing for symmetric matrices:

Just as for triangular matrix, you can reference any triangular part of a square matrix to see it as a selfadjoint matrix and perform special and optimized operations. Again the opposite triangular part is never referenced and can be used to store other information.

So my question is:

  1. Does Eigen consider symmetric and triangular matrices as special matrices, or does it treat them as normal matrices like any other Eigen matrix?

  2. Does Eigen use packed storage or any other special compact storage for them?

  3. What does this line mean: "The opposite triangular part is never referenced and can be used to store other information"?

  4. Does Eigen perform any optimized operations for triangular and symmetric matrices?

Although it seems there are 4 questions here, they are all closely related. A yes/no answer would be fine for everything except question 3.


1 Answer

4 votes

Generally speaking, Eigen is mainly optimized for speed, not for storage space.

  1. As you have noticed, the view concept is the special arrangement for symmetric and triangular matrices;
  2. No; saving 50% of the memory for triangular matrices with a special storage scheme does not seem very attractive to Eigen;
  3. When a triangular matrix is stored in a general dense matrix, 50% of the space is unused, and you can use that unused part to store something else (see the sketch just after this list);
  4. Yes; you can use a symmetric/selfadjoint view to save about 50% of the running time for operations like A * A^T.
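To make points 3 and 4 concrete: a selfadjoint view reads only one triangle, so whatever you store in the opposite triangle does not affect the result. A minimal sketch (assuming Eigen 3.x):

#include <iostream>
#include <Eigen/Dense>

int main() {
  using namespace Eigen;

  Matrix3d s;
  s << 2, 0, 0,   // only the lower triangle is meaningful here
       1, 3, 0,
       4, 5, 6;
  // Fill the strictly upper part with "other information".
  s.triangularView<StrictlyUpper>().setConstant(999);

  Vector3d x(1, 1, 1);

  // The selfadjoint product reads only the lower triangle,
  // so the 999s are ignored.
  Vector3d y = s.selfadjointView<Lower>() * x;
  std::cout << "y =\n" << y << std::endl;  // prints 7, 9, 15
  return 0;
}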

The following code computes A * A^T in different ways, with a demonstration of question 3. You can compare the results and measure the running time with a reasonably large dimension.

#include <iostream>
#include "Eigen/Eigen"

int main() {
  using namespace Eigen;

  const int n = 5;
  MatrixXd a(n, n), b(n, n), c(n, n), d(n, n);
  a.setRandom();
  b.setZero();
  c.setZero();
  d.setZero();
  // Store "other information" in part of the strictly lower triangle of d;
  // the selfadjoint rank update below never touches it.
  d.bottomLeftCorner(n / 2, n / 2).setConstant(100);

  std::cout << "original d =\n" << d << std::endl;

  // Plain dense product: computes and writes all n*n entries.
  b = a * a.transpose();
  std::cout << "b=\n" << b << std::endl;

  // Rank update on a selfadjoint view: c += a * a^T, but only the
  // upper triangle is computed and written (roughly half the work).
  c.selfadjointView<Upper>().rankUpdate(a);
  std::cout << "c=\n" << c << std::endl;

  // Same update on d: the 100s in the bottom-left corner survive,
  // showing that the opposite triangular part is never referenced.
  d.selfadjointView<Upper>().rankUpdate(a);
  std::cout << "d=\n" << d << std::endl;

  return 0;
}

Output:

original d =
  0   0   0   0   0
  0   0   0   0   0
  0   0   0   0   0
100 100   0   0   0
100 100   0   0   0
b=
  2.45959  0.767369   1.13659 -0.511436   1.29631
 0.767369  0.557756  0.124955 -0.480089  0.434794
  1.13659  0.124955   1.39678 -0.660623   0.87062
-0.511436 -0.480089 -0.660623   1.43841 -0.103395
  1.29631  0.434794   0.87062 -0.103395   2.02476
c=
  2.45959  0.767369   1.13659 -0.511436   1.29631
        0  0.557756  0.124955 -0.480089  0.434794
        0         0   1.39678 -0.660623   0.87062
        0         0         0   1.43841 -0.103395
        0         0         0         0   2.02476
d=
  2.45959  0.767369   1.13659 -0.511436   1.29631
        0  0.557756  0.124955 -0.480089  0.434794
        0         0   1.39678 -0.660623   0.87062
      100       100         0   1.43841 -0.103395
      100       100         0         0   2.02476
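To actually see the time saving from point 4, you can time both variants at a larger size. A rough sketch of such a measurement (my own harness, not from the Eigen docs; compile with optimizations such as -O2, and expect the numbers to vary by machine):

#include <chrono>
#include <iostream>
#include <Eigen/Dense>

int main() {
  using namespace Eigen;
  using Clock = std::chrono::steady_clock;

  const int n = 2000;  // reasonably large dimension
  MatrixXd a = MatrixXd::Random(n, n);
  MatrixXd b(n, n);
  MatrixXd c = MatrixXd::Zero(n, n);

  auto t0 = Clock::now();
  b = a * a.transpose();                     // full dense product
  auto t1 = Clock::now();
  c.selfadjointView<Upper>().rankUpdate(a);  // fills only the upper triangle
  auto t2 = Clock::now();

  std::cout << "dense product: "
            << std::chrono::duration<double>(t1 - t0).count() << " s\n"
            << "rankUpdate:    "
            << std::chrono::duration<double>(t2 - t1).count() << " s\n";
  return 0;
}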