I have seen this question, and it is relevant to my attempt to compute the dominant eigenvector in Python with NumPy.
I am trying to compute the dominant eigenvector of an n x n matrix without getting too deep into heavy linear algebra. I did some cursory research on determinants, eigenvalues, eigenvectors, and characteristic polynomials, but I would prefer to rely on NumPy's implementation for finding eigenvalues, since I believe it is more efficient than anything I would write myself.
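In case it helps, this is roughly the plan I had (dominant_eigenvector is just a name I'm using here; I'm assuming, per the docs, that column i of the returned eigenvector array pairs with eigenvalue i):

import numpy as np

def dominant_eigenvector(A):
    # eig returns (eigenvalues, eigenvectors); column i of the
    # second array is the eigenvector paired with eigenvalue i
    vals, vecs = np.linalg.eig(A)
    # pick the column for the largest-magnitude eigenvalue
    return vecs[:, np.argmax(np.abs(vals))]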
The problem I encountered was that I used this code:
from numpy import array
from numpy.linalg import eig
markov = array([[0.8, 0.2], [0.1, 0.9]])
print(eig(markov))
...as a test, and got this output:
(array([ 0.7,  1. ]), array([[-0.89442719, -0.70710678],
       [ 0.4472136 , -0.70710678]]))
What concerns me is that, by the Perron-Frobenius theorem, all of the components of the second eigenvector (the one paired with the dominant eigenvalue 1.0) should be positive, since, according to Wikipedia, "a real square matrix with positive entries has a unique largest real eigenvalue and that the corresponding eigenvector has strictly positive components".
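To make sure I'm not misreading the output, I did a quick sanity check that the second column really is an eigenvector for the eigenvalue 1.0 (given the output above, vals[1] is 1.0):

import numpy as np
markov = np.array([[0.8, 0.2], [0.1, 0.9]])
vals, vecs = np.linalg.eig(markov)
v = vecs[:, 1]  # the column paired with eigenvalue 1.0
print(np.allclose(markov.dot(v), vals[1] * v))  # prints True

So it does seem to be a genuine eigenvector, just with negative components.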
Anyone know what's going on here? Is NumPy wrong? Have I found an inconsistency in ZFC? Or is it just me being a noob at linear algebra, Python, NumPy, or some combination of the three?
Thanks for any help you can provide. Also, this is my first SO question (I used to be active on cstheory.se, though), so any advice on improving the clarity of my question would be appreciated, too.