
For this application, I would like to use an algorithm for dimensionality reduction such that a given number of components all explain about the same amount of variance in the data.

Principal Component Analysis is therefore unsuitable, because the explained variance drops sharply from the first principal component to each subsequent one.

What algorithms can I use?
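To illustrate the behavior described above, here is a minimal NumPy sketch (the dataset is made up for illustration, not from this post) showing how the per-component explained variance of PCA is obtained from the singular values and typically falls off from the first component onward:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative correlated data: white noise mixed through a random matrix.
X = rng.normal(size=(300, 4)) @ rng.normal(size=(4, 4))
Xc = X - X.mean(axis=0)  # center before PCA

# Explained variance ratio per principal component from the singular values.
_, s, _ = np.linalg.svd(Xc, full_matrices=False)
explained = s**2 / (s**2).sum()
print(explained)  # ratios in decreasing order, usually dropping off sharply
```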

Why would you want more components if the data can be represented in fewer? That's the entire purpose of PCA and dimensionality reduction in general. – David Robinson
Incidentally, did you remember to center your data before performing PCA? That's a common reason for the first component being much more significant than all the others. – David Robinson
For example, an autoencoder has the property that the components produced by the central bottleneck each explain about the same amount of variance. This has the advantage that additional feature selection can still be performed sensibly. I want the dimensionality reduction only for feature extraction, because it feeds a genetic programming engine that is brilliant at feature selection. I can't answer the other question; I haven't implemented anything yet. By the way, I don't want to use an autoencoder, as it would require too much neural network expertise. – Robert Schulz

1 Answer


If you simply don't like the variance ordering among the PCs, you can pick a number of PCs and then apply a random rotation to them. It would still be interesting to know how the extra ordering information negatively impacts your application.
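A minimal sketch of this idea, assuming illustrative synthetic data (none of the names below come from the original post): take the top-k PC scores, then mix them with a random orthogonal rotation. The rotation preserves the total variance, and the rotated components cannot be more spread out in variance than the original PCs, so their variances tend to be much more similar.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data with strongly unequal variance along its axes (illustrative).
X = rng.normal(size=(500, 5)) * np.array([10.0, 5.0, 2.0, 1.0, 0.5])
Xc = X - X.mean(axis=0)

# PCA via SVD: rows of Vt are the principal directions.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
k = 3
scores = Xc @ Vt[:k].T  # top-k PC scores, variance sharply decreasing

# Random orthogonal rotation (QR of a Gaussian matrix) mixes the PCs,
# so each rotated component carries a more similar share of the variance.
Q, _ = np.linalg.qr(rng.normal(size=(k, k)))
rotated = scores @ Q

print(scores.var(axis=0))   # strongly unequal
print(rotated.var(axis=0))  # total variance unchanged, spread reduced
```

Note that a random rotation only evens the variances out on average; it does not make them exactly equal. If you need them exactly equal, you would have to whiten the scores (divide each PC by its standard deviation) instead, which also discards the ordering information.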