in Machine Learning by (7.3k points)

I am trying to run a PCA on a matrix of dimensions m x n where m is the number of features and n the number of samples.

Suppose I want to keep the nf features with the maximum variance. With scikit-learn I am able to do it in this way:

from sklearn.decomposition import PCA

nf = 100
pca = PCA(n_components=nf)

# X is the matrix transposed (n samples on the rows, m features on the columns)
pca.fit(X)
X_new = pca.transform(X)

Now I get a new matrix X_new with shape n x nf. Is it possible to know which features have been discarded and which have been retained?

1 Answer

by (33.1k points)

Principal Component Analysis (PCA) is a dimensionality reduction technique. Rather than dropping individual features, it projects the data onto a smaller set of orthogonal directions that capture most of the variance. It is especially useful in unsupervised machine learning, where we work on unlabelled data.

The directions that the PCA object has determined during fitting are stored in pca.components_. The vector space orthogonal to the one spanned by pca.components_ is discarded.

PCA does not "discard" or "retain" any of your pre-defined features (encoded by the columns you specify). It mixes all of them (by weighted sums) to find orthogonal directions of maximum variance.
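You can still see how much each original feature contributes to the retained directions by inspecting the weights in pca.components_ (each row is one component, each column one original feature). A minimal sketch with placeholder random data, assuming a fit like the one in the question:

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.random((50, 5))  # placeholder data: 50 samples, 5 features

pca = PCA(n_components=2).fit(X)

# each row of components_ is one retained direction;
# column j holds the weight of original feature j
weights = np.abs(pca.components_)
print(weights)

# index of the original feature with the largest weight in each component
print(weights.argmax(axis=1))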

If this is not the behavior you are looking for, then PCA dimensionality reduction is not the way to go. For some simple general feature selection methods, you can take a look at sklearn.feature_selection.
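For instance, if the goal is literally to keep the nf original columns with the highest variance, a minimal sketch using plain NumPy could look like this (X here is placeholder random data standing in for the question's data matrix; sklearn.feature_selection.VarianceThreshold offers a threshold-based variant of the same idea):

import numpy as np

rng = np.random.default_rng(0)
X = rng.random((200, 500))  # placeholder: n=200 samples, m=500 features

nf = 100
variances = X.var(axis=0)            # variance of each original feature
keep = np.argsort(variances)[-nf:]   # indices of the nf highest-variance features
X_new = X[:, keep]
print(X_new.shape)  # (200, 100)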

For example:

# Principal Component Analysis
from numpy import array
from sklearn.decomposition import PCA

# define a matrix
A = array([[1, 2], [3, 4], [5, 6]])
print(A)

# create the PCA instance
pca = PCA(2)

# fit on data
pca.fit(A)

# access values and vectors
print(pca.components_)
print(pca.explained_variance_)

# transform data
B = pca.transform(A)
print(B)
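The fitted object also exposes pca.explained_variance_ratio_, the fraction of the total variance captured by each component; printing it is a quick way to judge how many components are worth keeping:

print(pca.explained_variance_ratio_)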

Hope this answer helps.
