1
votes

I'll first state my question and situation succinctly, then give a few details on the progress I've made so far in addressing the issues.

The function eig() behaves differently on different combinations of computer architecture and MATLAB version, in two ways I've noticed. First, the order of the eigenvalues and eigenvectors in the output differs for the exact same code. Now, I've read in several places that MATLAB does NOT guarantee any specific order in the output, so one should always sort afterwards just to be safe; even though I suspect this is part of a larger problem, I'll let it slide.

The other difference is that the eigenvalues are wrong in the cases where their order is not ascending. Here is how I know they're wrong: I am dealing with complex Hermitian matrices of modest size, and the eigenvalues I get are complex, but the eigenvalues of a Hermitian matrix should be real. On the architecture/version combinations where the order is ascending, the eigenvalues are also real, as they should be. When they're complex, the imaginary parts are on the order of machine epsilon, so it's clear they're not supposed to be there at all. The hacky way around this is to take the real part, just in case the result is complex when it shouldn't be, and then sort.

A practical problem with this is that when eig() falls back to the routine that thinks the matrix is non-Hermitian, it takes twice as long and uses twice as much memory, presumably because it works with complex rather than real numbers. So, the questions are: why is this happening, and is there a way around it?
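To be concrete, the hack currently looks something like this (with mat being one of my Hermitian matrices):

>> vals = eig(mat);          % may come back complex on the problematic systems
>> vals = sort(real(vals));  % drop the epsilon-sized imaginary parts, then sort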

Sorry for the long introduction. One thing I noticed is that, on the architecture/version combinations with the correct eigenvalues, ishermitian() also consistently returns the correct answer when I give it a complex Hermitian matrix directly. And when the eigenvalues are incorrect, ishermitian() is also incorrect. Some simple examples follow. So, my guess is that eig() tests whether the matrix is Hermitian the same way ishermitian() does.

On the architecture/version combinations with incorrect eigenvalues, ishermitian() is still correct for something simple:

>> test=[1 2 3]+[4 5 6]*1i; 
>> mat=test'*test;
>> ishermitian(mat)

ans =

     1

The outer product of a complex vector with its own conjugate transpose is a Hermitian matrix by construction. So we can try this again with a matrix around the same size as the ones I am dealing with. From the same computer:

>> test=randn(1,64)+randn(1,64)*1i;
>> mat=test'*test;
>> ishermitian(mat)

ans =

     0

And we can see the corresponding problem with eig():

>> vals=eig(mat);       
>> whos vals
  Name       Size            Bytes  Class     Attributes

  vals      64x1              1024  double    complex   

And now, identical code on a different computer:

>> test=randn(1,64)+randn(1,64)*1i;
>> mat=test'*test;
>> ishermitian(mat)

ans =

     1

>> vals=eig(mat);
>> whos vals
  Name       Size            Bytes  Class     Attributes

  vals      64x1               512  double              

I've tested this on a variety of systems, and I'm not seeing any pattern in which architectures or versions are responsible, so I won't bore you with all those details. But I'm curious how common one result is versus the other; in my experience so far, it seems about half and half. Strangely, on the same computer:

>> test2=test';
>> mat=test2*test;
>> ishermitian(mat)

ans =

     0

>> vals=eig(mat);
>> whos vals
  Name       Size            Bytes  Class     Attributes

  vals      64x1              1024  double    complex   

So if you assign the transpose to another variable first, there is some precision problem like on the other computer, but if you compute test'*test directly, it works. Am I missing something about why this is happening, and/or is there a way around it?
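For what it's worth, the asymmetry itself can be measured directly; on the problematic setups this should come out tiny but nonzero, on the order of machine epsilon, which would explain an exact comparison failing:

>> max(max(abs(mat - mat')))   % largest deviation from exact Hermitian symmetry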

It might have something to do with extended precision and the kind of SSE that is available on the machine and version of MATLAB. – yar
Which versions, specifically? What differences are there between the computers (OS and CPU)? In general, you should never rely on floating-point calculations being exactly equal across CPU architectures, OSes, and compilers. – horchler
Also, ishermitian is an exact test. It is not a good choice for matrices that are constructed via floating-point math, as in your example. You'll need to create your own test with some form of "tolerance" for cases like this. Or consider putting in a feature request to The MathWorks. – horchler
Thanks. Yes, I'm going to need to do something with a tolerance. One example of systems differing in this regard is a pair of identical computers, MacBook Pros with an Intel Core i7. One has R2015b installed and works as expected; the other has R2016a installed and behaves the problematic way. – Karl
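A tolerance-based check along the lines horchler suggests might look something like this (the tolerance value here is just a placeholder to tune for your data):

>> tol = 100*eps;   % placeholder tolerance
>> nearlyHermitian = max(max(abs(mat - mat'))) <= tol*norm(mat,'fro')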

2 Answers

1
votes

To overcome this problem, either round the entries to some (small) precision or just explicitly make the matrix Hermitian by copying entries (with no numeric operations involved).
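As a sketch, the copy-based version could mirror one triangle into the other; conjugation and taking the real part copy entries exactly, with no rounding:

>> mat = triu(mat,1) + triu(mat,1)' + diag(real(diag(mat)));  % mirror upper triangle, force real diagonal

After this, mat == mat' holds exactly by construction.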

1
votes

I was about to write something to round the numbers by some small amount, but I came across this trick, which actually seems to solve the problem everywhere:

>> test=randn(1,64)+randn(1,64)*1i;
>> mat=test'*test;             
>> ishermitian(mat)

ans =

     0

>> vals=eig(mat);
>> whos vals
  Name       Size            Bytes  Class     Attributes

  vals      64x1              1024  double    complex   

>> mat=(mat+mat')/2;
>> ishermitian(mat)

ans =

     1

>> vals=eig(mat);
>> whos vals
  Name       Size            Bytes  Class     Attributes

  vals      64x1               512  double              

I'm now getting all the expected results everywhere, and it's taking half the time, as expected. Clearly mat=(mat+mat')/2; is somehow fixing a rounding/precision problem, but it's not intuitive to me why it works. In theory, this operation should give back exactly what I already had, because mat==mat' if the matrix is Hermitian, yet we know mat~=mat' in this case for whatever reason. Any ideas as to why this operation acts like a small round-off?
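My best guess so far, for anyone who lands here: element (i,j) of (mat+mat')/2 is (mat(i,j)+conj(mat(j,i)))/2, while element (j,i) is (mat(j,i)+conj(mat(i,j)))/2. In IEEE arithmetic, conjugation and division by 2 introduce no rounding, and rounding of sums is sign-symmetric, so those two elements should come out as exact complex conjugates of each other. The result is then Hermitian to the bit, even though it differs slightly from the original mat. That can be checked directly:

>> symm = (mat + mat')/2;
>> isequal(symm, symm')   % exact elementwise equality, so the exact test passes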