I am trying to solve the classical least-squares problem argmin_x ||Ax - b||^2 using Eigen. I know that the system is overdetermined (i.e., m >= n, where A is m x n) and that A has full column rank n. I have opted to use the sparse QR module of Eigen, since A is itself sparse. I got as far as this:
#include <cassert>
#include <iostream>
#include <Eigen/Sparse>
#include <Eigen/SparseQR>

Eigen::VectorXd solve_least_squares(const Eigen::SparseMatrix<double>& matrix,
                                    const Eigen::VectorXd& rhs)
{
    int m = matrix.rows();
    int n = matrix.cols();
    assert(m >= n);
    assert(matrix.isCompressed());  // SparseQR requires compressed storage

    Eigen::SparseQR<Eigen::SparseMatrix<double>, Eigen::COLAMDOrdering<int>> decomposition(matrix);

    // Apply Q^T to the right-hand side, then keep only the first n entries.
    Eigen::VectorXd qtb = decomposition.matrixQ().transpose() * rhs;
    Eigen::VectorXd t = qtb.topRows(n);
    std::cout << t << std::endl;

    // Back-substitution with the upper-triangular n x n block of R.
    Eigen::VectorXd s = decomposition.matrixR().topRows(n).triangularView<Eigen::Upper>().solve(t);

    // Undo the column permutation introduced by the COLAMD ordering.
    return decomposition.colsPermutation().inverse() * s;
}
But I am wondering whether this is the most efficient implementation. Firstly, Eigen seems to compute the full QR decomposition rather than a reduced (thin) one: Q is m x m rather than m x n. This seems wasteful, since I only need the top n rows of Q^T.
In the dense case, there is an example in the documentation of the HouseholderQR class:
MatrixXf A(MatrixXf::Random(5,3)), thinQ(MatrixXf::Identity(5,3)), Q;
A.setRandom();
HouseholderQR<MatrixXf> qr(A);
Q = qr.householderQ();
thinQ = qr.householderQ() * thinQ; // <- here
std::cout << "The complete unitary matrix Q is:\n" << Q << "\n\n";
std::cout << "The thin matrix Q is:\n" << thinQ << "\n\n";
Here, a matrix multiplication is used to obtain the reduced Q, which seems even more inefficient than slicing the full matrix.
Since the (dense) SVD decomposition of Eigen provides an option to compute a thin SVD, is there a similar option for QR decompositions?