11
votes

I have a matrix such as this example (my actual matrices can be much larger)

A = [-1 -2   -0.5;
      0  0.5  0;
      0  0   -1];

that has only two linearly independent eigenvectors (the eigenvalue -1 is repeated). I would like to obtain a complete basis with generalized eigenvectors. One way I know how to do this is with Matlab's jordan function in the Symbolic Math toolbox, but I'd prefer something designed for numeric inputs (indeed, with two outputs, jordan fails for large matrices: "Error in MuPAD command: Similarity matrix too large."). I don't need the Jordan canonical form, which is notoriously unstable in numeric contexts, just a matrix of generalized eigenvectors. Is there a function or combination of functions that automates this in a numerically stable way, or must one use the generic manual method (and how stable is such a procedure)?

NOTE: By "generalized eigenvector," I mean a non-zero vector that can be used to augment the incomplete basis of a so-called defective matrix. I do not mean the eigenvectors that correspond to the eigenvalues obtained from solving the generalized eigenvalue problem using eig or qz (though this latter usage is quite common, I'd say that it's best avoided). Unless someone can correct me, I don't believe the two are the same.
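
For concreteness, here is what I mean for the example A above (a minimal sketch; the two vectors below are specific to this 3-by-3 case):

A = [-1 -2 -0.5; 0 0.5 0; 0 0 -1];
rank(A + eye(3))       % = 2, so lambda = -1 contributes only one ordinary eigenvector
rank((A + eye(3))^2)   % = 1, so ker((A+I)^2) is 2-D and leaves room for one more vector
v = [1; 0; 0];         % ordinary eigenvector: (A + eye(3))*v = 0
w = [0; 0; -2];        % generalized eigenvector: (A + eye(3))*w = v, (A + eye(3))^2*w = 0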


UPDATE 1 – Five months later:

See my answer here for how to obtain generalized eigenvectors symbolically for matrices larger than 82-by-82 (the limit for my test matrix in this question).

I'm still interested in numeric schemes (or in how such schemes might be unstable if they're all related to calculating the Jordan form). I don't wish to blindly implement the linear algebra 101 method that has been marked as a duplicate of this question, as it's not a numerical algorithm, but rather a pencil-and-paper method used to evaluate students (I suppose that it could be implemented symbolically, however). If anyone can point me to either an implementation of that scheme or a numerical analysis of it, I'd be interested in that.
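
For reference, the scheme in question essentially takes the nullspace of powers of (A - lambda*I). A naive numeric transcription for the 3-by-3 example above might look like the sketch below; this is illustration only, not a claim about stability (the matrix powers make the rank decisions even more delicate):

lam = -1;                   % the defective eigenvalue of the example A
m = 2;                      % its algebraic multiplicity
B = A - lam*eye(size(A));
N = null(B^m)               % orthonormal basis of ker((A - lam*I)^m), i.e. the
                            % full generalized eigenspace belonging to lam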

UPDATE 2 – Feb. 2015: All of the above is still true as tested in R2014b.

1
@natan: Then I suggest you mark this question as a duplicate. – Jonas
@natan: Yes, I saw that question/answer. It's not a duplicate. It's basically a manual procedure that replicates what is shown in many textbooks and on Wikipedia. I know how to obtain the generalized eigenvectors myself – I'm looking for something a bit more self-contained that uses builtin routines in order to be careful about numeric issues. Do you know of anything or any algorithms? Also, FYI, eigensys doesn't exist any more in R2013a; eig(sym(A)) must be used. – horchler
@DavidHeffernan: The error crops up at exactly 83-by-83 for a particular test matrix. – horchler
@DavidHeffernan: In terms of the ultimate size I'm interested in, these matrices represent the behavior of the nodes of a dynamical neural network, so the system could easily have 100+ dimensions. These matrices (Jacobians) are very sparse, so it'd be nice to have a method that could take advantage of that. – horchler
@reverse_engineer: Yes, as far as I know, there is no numerically stable algorithm for the Jordan form. But I'm not interested in the Jordan form itself, just a stable means of obtaining the generalized eigenvectors directly, such that I can form a complete basis – or are the two inextricably linked? – horchler

1 Answer

1
vote

As mentioned in my comments, if your matrix is defective, but you know which eigenvector/eigenvalue pairs you want to consider as identical given your tolerance, you can proceed as in the example below:

% example matrix A:
A = [ 1  0  0  0  0;
      3  1  0  0  0;
      6  3  2  0  0;
     10  6  3  2  0;
     15 10  6  3  2]
% Produce eigenvalues and eigenvectors (not generalized ones)
[vecs,vals] = eig(A)

This should output:

vecs =

         0         0         0         0    0.0000
         0         0         0    0.2236   -0.2236
         0         0    0.0000   -0.6708    0.6708
         0    0.0000   -0.0000    0.6708   -0.6708
    1.0000   -1.0000    1.0000   -0.2236    0.2236


vals =

     2     0     0     0     0
     0     2     0     0     0
     0     0     2     0     0
     0     0     0     1     0
     0     0     0     0     1

Here we see that the first three eigenvectors are almost identical to working precision, as are the last two. You must know the structure of your problem and identify the identical eigenvectors of identical eigenvalues. Here the eigenvalues are exactly identical, so we know which ones to consider, and we will assume that the corresponding vectors 1-2-3 belong together, as do vectors 4-5. (In practice you would likely check the norm of the differences of eigenvectors and compare it to your tolerance; one possible automation is sketched below.)
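
For instance, a minimal sketch of such a grouping, where tol is a hypothetical problem-dependent threshold (this assumes real eigenvalues, as in this example):

tol = 1e-8;                               % hypothetical tolerance; pick one suited to your problem
d = diag(vals);
[ds, idx] = sort(d);                      % bring numerically equal eigenvalues together
grp = cumsum([1; abs(diff(ds)) > tol]);   % start a new group wherever the gap exceeds tol
% grp labels the clusters, and vecs(:,idx) reorders the eigenvectors to match;
% within each cluster, eigenvectors can then be compared by norm (up to sign)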

Now we proceed to compute the generalized eigenvectors, but this is ill-conditioned to solve simply with MATLAB's \, because (A - lambda*I) is by construction not full rank. So we use pseudoinverses:

% Jordan chain for the repeated eigenvalue 2 (columns 1-3 of vecs):
genvec21 = pinv(A - vals(1,1)*eye(size(A)))*vecs(:,1); % minimum-norm solve of (A - 2I)x = vecs(:,1)
genvec22 = pinv(A - vals(1,1)*eye(size(A)))*genvec21;  % next chain vector: (A - 2I)x = genvec21
% Jordan chain for the repeated eigenvalue 1 (columns 4-5 of vecs):
genvec1 = pinv(A - vals(4,4)*eye(size(A)))*vecs(:,4);  % (A - 1I)x = vecs(:,4)

Which should give:

genvec21 =

   -0.0000
    0.0000
   -0.0000
    0.3333
         0

genvec22 =

    0.0000
   -0.0000
    0.1111
   -0.2222
         0

genvec1 =

    0.0745
   -0.8832
    1.5317
    0.6298
   -3.5889

Which are our other generalized eigenvectors. If we now use these to assemble the similarity transform and check that it recovers the Jordan normal form:

jordanJ = [vecs(:,1) genvec21 genvec22 vecs(:,4) genvec1];
jordanJ\A*jordanJ   % i.e. inv(jordanJ)*A*jordanJ, without forming the inverse explicitly

We obtain:

ans =

    2.0000    1.0000    0.0000   -0.0000   -0.0000
         0    2.0000    1.0000   -0.0000   -0.0000
         0    0.0000    2.0000    0.0000   -0.0000
         0    0.0000    0.0000    1.0000    1.0000
         0    0.0000    0.0000   -0.0000    1.0000

Which is our Jordan normal form (with working precision errors).
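
As a final sanity check, one can measure how far this result is from an exact Jordan form and how well-conditioned the basis is (a sketch reusing jordanJ from above; rounding works here only because the exact Jordan form has integer entries):

J = jordanJ\(A*jordanJ);                 % the similarity transform computed above
offJordan = norm(J - round(J), 'fro')    % deviation from the exact (integer) Jordan form
condBasis = cond(jordanJ)                % a large value signals an ill-conditioned basis

If condBasis is large, small perturbations of A can change the chain structure entirely, which is precisely the numerical fragility the question asks about.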