
Let's say I have a matrix that is sparse, except for blocks along the diagonal (of a fixed size).

Eigen::SparseMatrix<float> lhs;

lhs is approximately 2% nonzero, but may be very large. Then, let's say I have a vector:

Eigen::MatrixXf rhs = Eigen::MatrixXf::Random(SomeSz, 1);

For the moment, let's assume it's dense.

I want to efficiently compute:

result.noalias() = lhs * rhs;

If I were to compile with -O3 -march=native -mtune=native (with Clang), would this produce optimal code?

Also, what if rhs was sparse:

Eigen::SparseMatrix<float> rhs; rhs.resize(SomeSz, 1); rhs.reserve(SomeSz/SomeFactor);

Is:

result = lhs * rhs;

still optimal, or is it suboptimal?

I guess what I'm asking is whether Eigen will take advantage of the block-sparse structure and perform only the necessary computations.


1 Answer


First of all, in the dense case, if rhs is a vector, then tell Eigen so by using VectorXf. Then, with Eigen 3.3, you can take advantage of multi-threading by compiling with -fopenmp and using row-major storage for the lhs.

In the sparse case, yes, Eigen will take the sparsity of both the lhs and the rhs into account. The complexity will really be rhs.nonZeros()*average_nnz_per_col_of_lhs, in contrast to rhs.size()*average_nnz_per_col_of_lhs with a dense rhs. So if rhs is really sparse, it might be worth a try. Only the useful columns of lhs will be visited. In this case, it is better to keep the lhs column-major.