Consider the following weighted solution to the normal equation for a least squares inverse problem:
m = inv(G'*W'*W*G)*G'*W'*W*d
I would like to set up the weighting matrix W, which is a square diagonal matrix with weights on the diagonal.
As I have a large number of data points in d (10⁷), my system matrix G is also large, but only in one dimension (as I have far more data points than model parameters). In the case of 6 model parameters, G is of size (10⁷ × 6). Hence, W has to be of size (10⁷ × 10⁷). However, it is sparse, with only 10⁷ non-zero entries (the weights).
To reduce memory, I use sparse on W.
To assign weights, I do the following:
d = [d1;d2];
W = sparse(length(d),length(d));                  % preallocate empty sparse matrix
w1 = length(d2)/length(d);                        % weight for the d1 block
w2 = length(d1)/length(d);                        % weight for the d2 block
W(1:length(d)+1:length(d)*length(d1)) = w1;       % linear indices of the diagonal, first block
W(length(d)*length(d1)+1:length(d)+1:end) = w2;   % remaining diagonal entries
d1 and d2 are column vectors with observations.
This will assign weights to the diagonal, but it is awfully slow.
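For context, a diagonal sparse matrix can be built in a single call rather than preallocated and filled entry by entry, which avoids the slow indexed assignment above. A minimal sketch, reusing `d1`, `d2`, `w1`, and `w2` as defined above:

```matlab
% Sketch: construct the sparse diagonal weight matrix in one call.
% d1, d2, w1, w2 are assumed to be defined as in the question.
n = length(d);
w = [repmat(w1, length(d1), 1); repmat(w2, length(d2), 1)];  % per-observation weights
W = spdiags(w, 0, n, n);            % sparse diagonal matrix, O(n) storage
% equivalently: W = sparse(1:n, 1:n, w, n, n);
```

Both `spdiags` and the triplet form of `sparse` take the full weight vector at once, so they also handle the general case where every diagonal entry differs.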
My question:
Can I either
- Somehow speed up the assignments of weights to the diagonal, or
- Re-write
`m = inv(G'*W'*W*G)*G'*W'*W*d` so that I do not have to set up `W` at all?
Note 1: In the example shown, the weights are two different constants, but in practice they will vary along the diagonal!
Note 2: The bottleneck of the code is indeed setting up W, not the inversion itself, as the inverted matrix is only of size (6 × 6).
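Since `W` is diagonal, `W'*W` is also diagonal with entries `w.^2`, so the products in the normal equations reduce to element-wise row scaling and `W` never needs to exist. A sketch, assuming `w` is the `length(d) × 1` vector of per-observation weights:

```matlab
% Sketch: weighted normal equations without forming W.
% G (n x 6), d (n x 1), w (n x 1) are assumed given.
w2 = w.^2;                          % diagonal of W'*W
A  = G' * bsxfun(@times, w2, G);    % equals G'*W'*W*G, only 6 x 6
b  = G' * (w2 .* d);                % equals G'*W'*W*d, 6 x 1
m  = A \ b;                         % backslash is preferable to inv() here
```

`bsxfun(@times, w2, G)` scales row i of `G` by `w2(i)` without any large intermediate matrix (in recent MATLAB versions, `w2 .* G` broadcasts directly), so the memory cost stays at the size of `G` itself.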
Comments:

- What are `d1` and `d2`? Please post runnable code. Also, `d = [d1;d2]` implies `w1` and `w2` are just ones and `W` is `eye`? Why two diagonal assignments (last two lines)? - Luis Mendo
- How about `W = sparse([1:numel(d1) 1:numel(d2)], 1:numel(d), [w1; w2], numel(d), numel(d));`, where `w1` and `w2` are column vectors? This replaces the initialization `W = sparse(length(d),length(d));` and the two assignment lines - Luis Mendo
- If `W` is truly diagonal, then you can better do `bsxfun(@times, G, [length(d2) length(d1)]/length(d))` to compute `G.' * W.'`. What are `WG` and `Wd`? - Rody Oldenhuis