3 votes

When we posted a homework assignment about PCA, we told the course participants to pick whatever way of calculating the eigenvectors they liked. They found multiple ways: eig, eigh (our favorite was svd). In a later task we told them to use the PCA classes from scikit-learn, and were surprised that the results differed far more than we expected.

I toyed around a bit and posted an explanation to the participants that either solution was correct and probably just suffered from numerical instabilities in the algorithms. However, I recently picked that file up again during a discussion with a co-worker, and we quickly figured out that there is an interesting, subtle change that makes all results almost equivalent: transpose the eigenvectors obtained from the SVD (and thus from the PCAs).

A bit of code to show this:

import numpy as np

def pca_eig(data):
    """Uses numpy.linalg.eig to calculate the PCA."""
    cov = data.T @ data            # (unscaled) covariance / Gram matrix
    val, vec = np.linalg.eig(cov)
    return val, vec

versus

def pca_svd(data):
    """Uses numpy.linalg.svd to calculate the PCA."""
    u, s, v = np.linalg.svd(data)
    return s ** 2, v               # squared singular values play the role of the eigenvalues

The two do not yield the same result. Changing the return of pca_svd to s ** 2, v.T, however, works! It makes perfect sense following the definition on Wikipedia: the SVD of X is X = UΣW^T, where

the right singular vectors W of X are equivalent to the eigenvectors of X^T X

So to get the eigenvectors we need to transpose the output v of np.linalg.svd(...).
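
A quick way to convince yourself (a minimal sketch of my own, assuming the two functions above and mean-centred random data; note that np.linalg.eig returns eigenvalues in no particular order while svd returns singular values in descending order, and that both results are only determined up to a sign flip):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
X = X - X.mean(axis=0)                 # centre the data, as PCA expects

val, vec = pca_eig(X)
order = np.argsort(val)[::-1]          # sort eigenpairs by decreasing eigenvalue
val, vec = val[order], vec[:, order]   # eig stores eigenvectors in the *columns* of vec

s2, v = pca_svd(X)

print(np.allclose(val, s2))                   # True: eigenvalues of X.T @ X == s**2
print(np.allclose(np.abs(vec), np.abs(v)))    # False
print(np.allclose(np.abs(vec), np.abs(v.T)))  # True: transposing v lines the columns up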

Unless there is something else going on? Anyway, PCA and IncrementalPCA both show wrong results (or eig is wrong? After all, transposing either one yields the same equality), and looking at the source of PCA reveals that it does it the way I did initially:

U, S, V = linalg.svd(X, full_matrices=False)
# flip eigenvectors' sign to enforce deterministic output
U, V = svd_flip(U, V)

components_ = V
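
For what it's worth, here is a small check of my own (not part of the gist, using randomly generated data) that components_ really is the row-wise V from the SVD of the centred data, up to the sign convention enforced by svd_flip:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4))
Xc = X - X.mean(axis=0)              # PCA centres the data internally

_, _, v = np.linalg.svd(Xc, full_matrices=False)
pca = PCA(n_components=4).fit(X)

# True: one eigenvector per row of components_, no transposition involved
print(np.allclose(np.abs(pca.components_), np.abs(v)))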

I created a little gist demonstrating the differences (nbviewer): the first comparison uses PCA and IncrementalPCA as they are (and no transposition of the SVD), the second uses transposed eigenvectors:

Comparison without transposition of SVD/PCAs (normalized data)

Comparison with transposition of SVD/PCAs (normalized data)

As one can clearly see, in the upper image the results are not really close, while in the lower image they differ only in some signs, i.e. the results are mirrored here and there.

Is this really wrong, and a bug in scikit-learn? More likely I am using the math wrong, but what is right? Can you please help me?


1 Answer

1 vote

If you look at the documentation, it is pretty clear from the documented shape of components_ that the eigenvectors are in the rows, not the columns. The point of the sklearn PCA is that you can use the transform method to do the correct transformation for you.
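
To illustrate the point (my own sketch, not from the original answer): components_ has shape (n_components, n_features), i.e. one eigenvector per row, and with the default whiten=False the transform method is just centring followed by a dot product with components_.T:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))

pca = PCA(n_components=3).fit(X)
print(pca.components_.shape)                     # (3, 5): eigenvectors in the rows

projected = (X - pca.mean_) @ pca.components_.T  # what transform() does (whiten=False)
print(np.allclose(projected, pca.transform(X)))  # True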