I'm doing calculations on large stochastic matrices (at least 1000x1000) in C++ using the Eigen library. My code consists of the following functions:
Eigen::VectorXd grid(...); initializes, element by element, a sorted vector of log-normally distributed values, generated with the ran1 algorithm and sorted with quicksort. Call its size N; all matrices are then of size NxN.
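For reference, here is a minimal sketch of what grid does, with my hand-rolled ran1 and quicksort swapped for std::mt19937 and std::sort; the parameters mu, sigma, and seed are placeholders, not the actual ones:

```cpp
#include <Eigen/Dense>
#include <algorithm>
#include <random>

// Sketch of the grid construction: fill element by element with
// log-normal draws, then sort ascending. ran1/quicksort are replaced
// here by the standard library equivalents.
Eigen::VectorXd grid(int N, double mu, double sigma, unsigned seed)
{
    std::mt19937 gen(seed);
    std::lognormal_distribution<double> dist(mu, sigma);

    Eigen::VectorXd g(N);
    for (int i = 0; i < N; ++i)
        g(i) = dist(gen);                       // element-by-element fill

    std::sort(g.data(), g.data() + g.size());   // ascending order
    return g;
}
```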
Eigen::VectorXd conditionGrid(...); a loop that returns a vector containing the grid vector minus a time-dependent value.
Eigen::MatrixXd xjkMatrix(...); a matrix constructed from the two vectors through a loop.
Eigen::MatrixXd qzMatrix(...); uses the xjk matrix to construct a new one, applying a probability density function element by element.
Eigen::MatrixXd q1zMatrix(...); also uses the xjk matrix to construct a new one, applying a probability density function element by element.
Eigen::MatrixXd qjkMatrix(...); a combination of qz and the grid, built in an element-by-element loop.
Eigen::MatrixXd q1jkMatrix(...); a combination of qz, q1z, and the grid, built in an element-by-element loop.
Eigen::MatrixXd mjkMatrix(...); sums the elements of qjk and q1jk in an element-by-element loop (a condensed sketch of these element-wise steps is given below).
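To make the structure concrete, here is a simplified sketch of a few of these element-by-element steps; the subtraction rule in xjkMatrix, the standard-normal density in qzMatrix, and the plain sum in mjkMatrix are stand-ins for the real formulas, which I haven't reproduced here:

```cpp
#include <Eigen/Dense>
#include <cmath>

// Grid minus a time-dependent value, as in conditionGrid.
Eigen::VectorXd conditionGrid(const Eigen::VectorXd& grid, double timeValue)
{
    return (grid.array() - timeValue).matrix();
}

// NxN matrix built from the two vectors; the combination rule is a placeholder.
Eigen::MatrixXd xjkMatrix(const Eigen::VectorXd& grid, const Eigen::VectorXd& cond)
{
    const Eigen::Index N = grid.size();
    Eigen::MatrixXd x(N, N);
    for (Eigen::Index j = 0; j < N; ++j)
        for (Eigen::Index k = 0; k < N; ++k)
            x(j, k) = grid(j) - cond(k);
    return x;
}

// Element-wise density applied to xjk; a standard normal is used as a stand-in.
Eigen::MatrixXd qzMatrix(const Eigen::MatrixXd& xjk)
{
    const double pi = 3.14159265358979323846;
    return ((-0.5 * xjk.array().square()).exp() / std::sqrt(2.0 * pi)).matrix();
}

// Element-wise sum of the two pieces, as in mjkMatrix.
Eigen::MatrixXd mjkMatrix(const Eigen::MatrixXd& qjk, const Eigen::MatrixXd& q1jk)
{
    return qjk + q1jk;
}
```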
Eigen::MatrixXd globalMatrix(...); loops over all the above functions (generally 120 times), except grid, which is fixed, and multiplies the 120 mjk matrices together to obtain a global one.
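The outer loop looks roughly like this, where buildMjk is a placeholder for the chain conditionGrid -> xjk -> qz/q1z -> qjk/q1jk -> mjk at a given time step:

```cpp
#include <Eigen/Dense>

// Hypothetical helper standing in for the per-step pipeline above.
Eigen::MatrixXd buildMjk(const Eigen::VectorXd& grid, int t);

// The grid stays fixed; each step rebuilds the time-dependent mjk matrix
// and multiplies it into the running product.
Eigen::MatrixXd globalMatrix(const Eigen::VectorXd& grid, int steps /* typically 120 */)
{
    const Eigen::Index N = grid.size();
    Eigen::MatrixXd global = Eigen::MatrixXd::Identity(N, N);
    for (int t = 0; t < steps; ++t)
        global = global * buildMjk(grid, t);   // one dense NxN product per step
    return global;
}
```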
Computing a 200x200 global matrix takes about 20 seconds, and 500x500 about 200 seconds. How can I make my code run faster and optimize my matrix multiplications? (I have tried sparse matrices, but that was even slower.)
I'm using MinGW64.