I have a question about classifying unseen samples (face or non-face). With the ordinary Eigenface method (that is, plain PCA, without a reproducing kernel substituting for the inner product), evaluation works by projecting the sample onto the eigenvectors obtained from PCA on the training-set matrix and finally testing the distance between the sample and its projection against a threshold.
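To make the ordinary case concrete, here is a minimal sketch of that test step as I understand it. All names (`eigenface_score`, the component count, the random data) are illustrative, not from any particular paper; the distance computed is the reconstruction error, i.e. the distance of the sample to the face subspace:

```python
import numpy as np

def eigenface_score(train, x, n_components=5):
    """Distance of a test vector x to the Eigenface subspace.

    train: (n_samples, n_features) training matrix; x: (n_features,) sample.
    """
    mean = train.mean(axis=0)
    centered = train - mean
    # PCA via SVD of the centered training matrix; rows of vt are eigenvectors.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigvecs = vt[:n_components]                 # principal axes
    coeffs = eigvecs @ (x - mean)               # projection coefficients
    reconstruction = mean + eigvecs.T @ coeffs  # back-projection into input space
    return np.linalg.norm(x - reconstruction)   # distance to "face space"

# Toy data standing in for face vectors (purely illustrative).
rng = np.random.default_rng(0)
train = rng.normal(size=(20, 50))
score = eigenface_score(train, train[0])
# Compare `score` against a chosen threshold to decide face / non-face.
```

Note the mean subtraction on the test vector (`x - mean`) before projection; this is exactly the step my question below is about.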
I combed through several publications discussing the KPCA approach, but when it comes to the final step of testing unseen samples, I ran into a small, seemingly unanswered problem:
With ordinary PCA, the mean of the training set is subtracted from the test vector before it is projected onto the eigenvectors. Not so for KPCA. I suspect the reason is that we have no access to points in the feature space, only to inner products via the kernel. Hence, we have no explicit "mean". Still, isn't this at least worth discussing?
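For what it's worth, here is my own sketch (not taken from the papers I read) of how I would expect the centering to carry over by analogy: the feature-space mean can be removed implicitly by double-centering the Gram matrix, and the test sample's kernel row can be centered with the same algebra. The RBF kernel and all names here are assumptions for illustration:

```python
import numpy as np

def rbf(a, b, gamma=0.5):
    """RBF kernel matrix between rows of a and rows of b (illustrative choice)."""
    d = a[:, None, :] - b[None, :, :]
    return np.exp(-gamma * np.einsum('ijk,ijk->ij', d, d))

rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))   # training set (toy data)
x = rng.normal(size=(1, 3))    # unseen test sample

n = X.shape[0]
K = rbf(X, X)                  # training Gram matrix
one = np.ones((n, n)) / n

# Training-side double centering: removes the feature-space mean implicitly.
Kc = K - one @ K - K @ one + one @ K @ one

# Test-side centering by the same algebra, applied to the row k(x, x_i).
k = rbf(x, X)                  # shape (1, n)
one_t = np.ones((1, n)) / n
kc = k - one_t @ K - k @ one + one_t @ K @ one

# Projection onto kernel principal components (eigenvectors of Kc,
# scaled by 1/sqrt(lambda) so the feature-space directions have unit norm).
vals, vecs = np.linalg.eigh(Kc)
vals, vecs = vals[::-1], vecs[:, ::-1]     # sort descending
alphas = vecs[:, :3] / np.sqrt(vals[:3])
projection = kc @ alphas                   # coefficients of the test point
```

Whether this test-side centering is what implementations actually do, or whether it is silently dropped, is exactly what I could not find discussed.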
Thanks for any opinions and suggestions; this strikes me as an inaccuracy I haven't seen addressed so far.