I'll first state my question and situation succinctly, then give a few details on the progress I've made so far in addressing the issue.

The function eig() behaves differently on different combinations of computer architecture and MATLAB version, in two ways I've noticed. First, the order of the eigenvalues and eigenvectors in the output differs for the exact same code. I've read in several places that MATLAB does NOT guarantee any particular order in the output, so one should always sort afterwards to be safe; even though I suspect this is part of a larger problem, I'll let that slide. Second, the eigenvalues are wrong in the cases where their order is not ascending. Here is how I know they're wrong: I am dealing with complex Hermitian matrices of modest size, and the eigenvalues I get back are complex, even though the eigenvalues of a Hermitian matrix should be real. On the architecture/version combinations where the order is ascending, the eigenvalues are real, as they should be. When they come back complex, the imaginary parts are on the order of machine epsilon, so it's clear they're not supposed to be there at all. The hacky way around this is to take the real part (just in case the result is complex when it shouldn't be) and then sort. A practical problem remains: when eig() uses the routine meant for non-Hermitian matrices, it takes twice as long and uses twice as much memory, presumably because it is working with complex rather than real numbers. So the questions are: why is this happening, and is there a way around it?
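For reference, this is roughly the workaround I'm using at the moment (mat stands in for one of my Hermitian matrices; the variable names are just placeholders):

% hacky workaround: strip the spurious imaginary parts, then impose an order
[vecs, D] = eig(mat);
[vals, idx] = sort(real(diag(D)));   % eigenvalues of a Hermitian matrix are real
vecs = vecs(:, idx);                 % reorder the eigenvectors to match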
Sorry for the long introduction. One thing I noticed is that, on the architecture/version combinations that give correct eigenvalues, passing a complex Hermitian matrix directly to ishermitian() also consistently gives the correct answer; and where the eigenvalues are incorrect, ishermitian() is incorrect too. Some simple examples follow. My guess is that eig() decides whether the matrix is Hermitian the same way ishermitian() does.
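If that guess is right, what should matter is how far the stored matrix is from being exactly Hermitian. A quick diagnostic along those lines (just a sketch, not anything eig() is documented to do internally):

% how far is mat from being Hermitian at the bit level?
asym = max(max(abs(mat - mat')));    % exactly zero only if mat equals its conjugate transpose
fprintf('max |mat - mat''| = %g\n', asym)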
On the architecture/version combinations with incorrect eigenvalues, ishermitian() is still correct for something simple:
>> test=[1 2 3]+[4 5 6]*1i;
>> mat=test'*test;
>> ishermitian(mat)
ans =
1
The outer product of a complex vector with its own conjugate transpose is Hermitian by construction. So let's try this again with a matrix about the same size as the ones I'm dealing with. On the same computer:
>> test=randn(1,64)+randn(1,64)*1i;
>> mat=test'*test;
>> ishermitian(mat)
ans =
0
And we can see the corresponding problem with eig():
>> vals=eig(mat);
>> whos vals
  Name      Size            Bytes  Class     Attributes
  vals      64x1             1024  double    complex
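The imaginary parts really are negligible; a quick check like this comes out on the order of eps in my runs:

max(abs(imag(vals)))   % largest spurious imaginary part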
And now, identical code on a different computer:
>> test=randn(1,64)+randn(1,64)*1i;
>> mat=test'*test;
>> ishermitian(mat)
ans =
1
>> vals=eig(mat);
>> whos vals
  Name      Size            Bytes  Class     Attributes
  vals      64x1              512  double
I've tested this on a variety of systems, and I haven't noticed any pattern in which architectures or versions are responsible, so I won't bore you with all those details. But I am curious how common one result is versus the other; in my experience so far it seems about half and half. Strangely, on that same computer:
>> test2=test';
>> mat=test2*test;
>> ishermitian(mat)
ans =
0
>> vals=eig(mat);
>> whos vals
  Name      Size            Bytes  Class     Attributes
  vals      64x1             1024  double    complex
So if you assign the transpose to another variable first, there is some precision problem like on the other computer, but if you write test'*test directly, it works. Am I missing something about why this is happening, and/or is there a way around it?
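In case it helps to reproduce, here is a quick diagnostic sketch to check whether the two ways of forming the product actually differ at the bit level (mat1/mat2 are just placeholder names):

% compare forming the conjugate transpose inline vs. through a variable
mat1 = test'*test;              % transpose written inline
test2 = test';
mat2 = test2*test;              % transpose assigned to a variable first
isequal(mat1, mat2)             % bit-identical?
max(max(abs(mat1 - mat2)))      % largest elementwise difference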
ishermitian is an exact test. It is not a good choice for matrices that are constructed via floating point math, as in your example. You'll need to create your own test with some form of "tolerance" for cases like this. Or consider putting in a feature request to The MathWorks. – horchler
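A tolerance-based check along those lines might look something like this (just a sketch; the threshold is an arbitrary choice, not a standard value):

% approximate Hermitian test with a round-off tolerance
tol = 10*eps(max(abs(mat(:))));
isHermApprox = max(max(abs(mat - mat'))) <= tol;

% or force exact symmetry before calling eig(), so that eig() sees a truly Hermitian input
matSym = (mat + mat')/2;
vals = eig(matSym);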