Transpose fixes eigenvalues
Apparently, LAPACK hates nonsymmetric tridiagonal matrices where the larger off-diagonal elements are below the diagonal. Using the transposed matrix, in which the larger elements are above the diagonal, results in real eigenvalues. (Theoretically, a matrix and its transpose have the same eigenvalues.) In addition to being real, they agree with the theoretical values up to sorting and reasonable errors.
import numpy as np

a, b, c, n = -2, 0.5, 1.5, 100
# tridiagonal matrix: a on the diagonal, b above it, c below it
k = np.zeros((n, n))
np.fill_diagonal(k, a)
np.fill_diagonal(k[:-1, 1:], b)
np.fill_diagonal(k[1:, :-1], c)
# known spectrum of such a matrix: a - 2*sqrt(b*c)*cos(j*pi/(n+1)), j = 1..n
theory = a - 2*np.sqrt(b*c) * np.cos(np.pi * np.arange(1, n+1) / (n+1))
computed = np.sort(np.linalg.eig(k.T)[0])
print(np.max(np.abs(theory - computed)))
This prints 6.183001044490766e-08, the largest discrepancy between the computed and theoretical eigenvalues. Without transposing k, this error goes up to 0.26.
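For comparison, the untransposed run can be reproduced along these lines (a sketch; the exact figure may depend on how np.sort orders the complex eigenvalues):

# without the transpose the eigenvalues come back complex, and the
# discrepancy from theory blows up by several orders of magnitude
bad = np.sort(np.linalg.eig(k)[0])
print(np.max(np.abs(theory - bad)))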
Eigenvectors
You also wanted eigenvectors. The eigenvectors returned by np.linalg.eig for the transposed matrix are the left eigenvectors of the original matrix: that is, they satisfy vec.dot(k) = lam*vec instead of k.dot(vec) = lam*vec. If you want the right eigenvectors of the original matrix, use SciPy's eig:
from scipy import linalg as la

# SciPy returns (eigenvalues, left eigenvectors, right eigenvectors);
# passing k.T swaps the roles of left and right relative to k
evals, right_evects, left_evects = (np.real(_) for _ in la.eig(k.T, left=True, right=True))
SciPy's eigensolver differs from NumPy's in that it returns the eigenvalues and eigenvectors with +0j attached; it considers them complex but correctly evaluates the imaginary part to 0. The np.real above strips that imaginary part. Note that SciPy's return order is "evals, left, right", but since k was transposed, I swapped left and right in the assignment.
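As a sanity check on the left-eigenvector claim above, the same residual test can be run on the left eigenvectors (a quick sketch using the arrays just computed):

# left eigenvectors should satisfy vec.dot(k) = lam*vec
np.max([np.linalg.norm(vec.dot(k) - ev*vec) for ev, vec in zip(evals, left_evects.T)])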
Let's check these eigenvectors:
# largest residual ||k.dot(vec) - ev*vec|| over all eigenpairs
np.max([np.linalg.norm(k.dot(vec) - ev*vec) for ev, vec in zip(evals, right_evects.T)])
This returns 1.845213984555825e-14, not bad. The transpose on the eigenvector array is there because zip picks rows from a matrix, and we need columns.
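If the loop bothers you, the same residual can be computed in one vectorized expression (a sketch equivalent to the check above):

# column j of k @ right_evects should equal evals[j] times column j
np.linalg.norm(k @ right_evects - right_evects * evals, axis=0).max()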
Bonus content
So... problem solved? Well, I didn't say I can diagonalize your matrix. Trying to invert the matrix formed by either the left or the right eigenvectors looks like a losing proposition: the eigenvector matrix is so ill-conditioned that its inverse is mostly rounding noise.
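One way to put a number on "horrible" (a sketch, reusing right_evects from above; I did not quote a specific figure) is the condition number of the eigenvector matrix:

# a huge condition number means inv(right_evects), and with it the
# diagonalization right_evects @ np.diag(evals) @ inv(right_evects), is hopeless
print(np.linalg.cond(right_evects))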
Also, we should not trust the above test for eigenvectors too much. It gave a tiny error for the right eigenvectors... let's try it on the wrong eigenvectors, those with nontrivial imaginary parts.
wrong_evals, wrong_evects = np.linalg.eig(k)
# same residual test, now on the eigenpairs of the untransposed matrix
np.max([np.linalg.norm(k.dot(vec) - ev*vec) for ev, vec in zip(wrong_evals, wrong_evects.T)])
This returns 1.7136373586499598e-14. The wrong eigenvectors are even better than the real thing! The catch is that this residual test only measures how well each computed pair satisfies k.dot(vec) = ev*vec, which LAPACK keeps near machine precision no matter what; it says nothing about whether the eigenvalues are anywhere near the true spectrum.
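To see that these eigenpairs are nonetheless off, a quick sketch checking the size of the imaginary parts mentioned above:

# tiny residuals notwithstanding, the eigenvalues carry nontrivial imaginary
# parts, while the true spectrum is purely real
print(np.max(np.abs(wrong_evals.imag)))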