2 votes

I construct a 100*100 matrix k and want to use numpy.linalg.eig to diagonalize it.

import numpy as np

k = np.zeros((100, 100))
np.fill_diagonal(k, -2)
np.fill_diagonal(k[1:, :-1], 1.5)
np.fill_diagonal(k[:-1, 1:], 0.5)

When I try a smaller matrix, such as

w,v=np.linalg.eig(k[:10,:10])

the eigenvalues w and eigenvectors v are real. But when I try a bigger matrix, or the whole matrix,

w,v=np.linalg.eig(k)

w and v turn out to be complex, and the imaginary parts are non-negligible.

I also tried scipy.linalg.eig; it has a similar issue.

I want to take the natural logarithm of the eigenvalues and eigenvectors, and complex numbers have no physical meaning in my model.

How can I get only independent, real eigenvalues and eigenvectors? If that isn't possible, how can I convert the complex eigenvalues and eigenvectors to real ones in Python?

Can you prove any square tri-diagonal matrix must have real eigenvalues? If you can't, there isn't much you can do. According to Wikipedia, a real symmetric tridiagonal matrix has real eigenvalues, and all the eigenvalues are distinct (simple) if all off-diagonal elements are nonzero. Your matrix doesn't seem to fall into this category, so these are likely the eigenvalues of that matrix. You haven't done anything wrong, and can't change that. – Demetri Pananos
Interestingly, all eigenvalues are real up to size 64 by 64. At size 65 by 65, imaginary parts appear, and they are not very small: about 0.01. Quite a sudden change, which makes me suspicious. If the eigenvalues are correct and you don't want complex ones, then the only way is to filter out those with an imaginary part over some threshold (1e-16 or so). – user6655984
The imaginary part is large enough that it's non-negligible. I just wonder if Python can change the pairs of complex eigenvalues/vectors to real ones. – kinder chen
According to Wikipedia, a Toeplitz tridiagonal matrix (which this appears to be) should have real eigenvalues if the product of the off-diagonal values is positive (in this case 0.5*1.5 = 0.75). – Daniel F
@DanielF Good find! The eigenvectors can also be found in closed form, so the OP doesn't need NumPy's eig (which is evidently struggling here) at all. – user6655984

2 Answers

2 votes

@Daniel F and @FTP were quicker than me, see their comments, but since I have the code sitting here I may as well share it:

import numpy as np
from scipy import sparse

def tri_toep_eig(a, b, c, n):
    # Closed-form eigenpairs of the n x n tridiagonal Toeplitz matrix with
    # a on the diagonal, b on the sub-diagonal and c on the super-diagonal.
    evals = a + 2*np.sqrt(b*c) * np.cos(np.pi * np.arange(1, n+1) / (n+1))
    evecs = np.sin(np.outer(np.arange(1, n+1) * np.pi / (n+1),
                            np.arange(1, n+1))) \
        * np.sqrt(b/c)**np.arange(n)[:, None]
    return evals, evecs

def tri_toep(a, b, c, n):
    # Build that same matrix in sparse diagonal format.
    return sparse.dia_matrix((np.outer((b, a, c), np.ones((n,))),
                              (-1, 0, 1)), (n, n))

def check(a, b, c, n):
    # Verify that every closed-form eigenpair satisfies A v = lambda v.
    evals, evecs = tri_toep_eig(a, b, c, n)
    tt = tri_toep(a, b, c, n)
    for eva, eve in zip(evals, evecs.T):
        assert np.allclose(tt @ eve, eva * eve)

check(-2, 0.5, 1.5, 100)
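Note that tri_toep(a, b, c, n) puts b on the sub-diagonal and c on the super-diagonal, so the call above checks the transpose of the matrix from the question (which has the same eigenvalues). For the orientation in the question (1.5 below the diagonal, 0.5 above), just swap the two arguments; a minimal sketch:

# Eigenpairs for the matrix exactly as built in the question:
# -2 on the diagonal, 1.5 below it, 0.5 above it.
evals, evecs = tri_toep_eig(-2, 1.5, 0.5, 100)
evecs /= np.linalg.norm(evecs, axis=0)  # optional: the columns grow quickly with b > c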
0 votes

Transpose fixes eigenvalues

Apparently, LAPACK hates nonsymmetric tridiagonal matrices where the larger off-diagonal elements are below the diagonal. Using the transposed matrix, in which the larger elements are above the diagonal, results in real eigenvalues. (Theoretically, a matrix and its transpose have the same eigenvalues.) In addition to being real, they agree with the theoretical values up to sorting and reasonable errors.

import numpy as np

a, b, c, n = -2, 0.5, 1.5, 100
k = np.zeros((n, n))
np.fill_diagonal(k, a)
np.fill_diagonal(k[:-1, 1:], b)  # super-diagonal: 0.5
np.fill_diagonal(k[1:, :-1], c)  # sub-diagonal: 1.5

theory = a - 2*np.sqrt(b*c) * np.cos(np.pi * np.arange(1, n+1) / (n+1))
computed = np.sort(np.linalg.eig(k.T)[0])
print(np.max(np.abs(theory - computed)))

This prints 6.183001044490766e-08, the largest discrepancy between the computed and theoretical eigenvalues. Without the transpose, this error goes up to 0.26.

Eigenvectors

You also wanted eigenvectors. The eigenvectors returned by np.linalg.eig for the transposed matrix are the left eigenvectors of the original matrix: that is, they satisfy vec.dot(k) = lam*vec instead of k.dot(vec) = lam*vec. If you want the right eigenvectors of the original matrix, use SciPy's eig:

from scipy import linalg as la
evals, right_evects, left_evects = (
    np.real(_) for _ in la.eig(k.T, left=True, right=True))

SciPy's eigensolver differs from NumPy's in that it returns the eigenvalues and eigenvectors with +0j attached; it considers them complex but correctly evaluates the imaginary parts to 0. I truncated that imaginary part with np.real above. Note that SciPy's return order is "evals, left, right", but since k was transposed, I switched left and right around.
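If you prefer a guard that only drops the imaginary part when it really is negligible, np.real_if_close does that; a small alternative sketch to the bare np.real above:

w = la.eigvals(k.T)      # complex dtype, but the imaginary parts are all 0 here
w = np.real_if_close(w)  # converts to a real array only if the imaginary parts are tiny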

Let's check these eigenvectors:

np.max([np.linalg.norm(k.dot(vec) - ev*vec) for ev, vec in zip(evals, right_evects.T)])

returns 1.845213984555825e-14, not bad. The transpose on the eigenvector array is there because zip iterates over the rows of a 2-D array, and we need its columns.
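The left eigenvectors can be sanity-checked the same way; as noted above, they should satisfy vec.dot(k) = lam*vec (row times matrix rather than matrix times column):

np.max([np.linalg.norm(vec.dot(k) - ev*vec) for ev, vec in zip(evals, left_evects.T)])

which should likewise come out near machine precision.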

Bonus content

So... problem solved? Well, I didn't say I could diagonalize your matrix. Trying to invert the matrix formed by either the left or right eigenvectors looks like a losing proposition; the inverse is horrible.
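A quick way to quantify this is the condition number of the eigenvector matrix (a sketch using the right_evects computed above):

# A very large condition number means the eigenvector basis is close to singular,
# so reconstructing k as V @ np.diag(evals) @ np.linalg.inv(V) is numerically hopeless.
print(np.linalg.cond(right_evects))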

Also, we should not trust the above test for eigenvectors too much. It gave a tiny error for the right eigenvectors... let's try it on the wrong eigenvectors, those with nontrivial imaginary parts.

wrong_evals, wrong_evects = np.linalg.eig(k)
np.max([np.linalg.norm(k.dot(vec) - ev*vec) for ev, vec in zip(wrong_evals, wrong_evects.T)])

This returns 1.7136373586499598e-14. Wrong eigenvectors are even better than the real thing!