
in Machine Learning by (33.1k points)

I am doing a text classification task with R, and I have obtained a document-term matrix of size 22,490 by 120,000 (only 4 million non-zero entries, i.e. less than 1% of the entries). Now I want to reduce the dimensionality with PCA (Principal Component Analysis). Unfortunately, R cannot handle this huge matrix, so I have stored the sparse matrix in a file in the Matrix Market format, hoping to use some other technique to do the PCA.

So could anyone give me some hints for useful libraries (in any programming language) that can do PCA on a large-scale matrix like this with ease, or for doing a longhand PCA myself, i.e. calculating the covariance matrix first and then computing its eigenvalues and eigenvectors?

What I want is to calculate all PCs (120,000) and keep only the top N PCs that account for 90% of the variance. Obviously, in this case I would have to set a threshold a priori to force very tiny values in the covariance matrix to 0, otherwise the covariance matrix will not be sparse and its 120,000 by 120,000 size would be impossible to handle on a single machine. Also, the loadings (eigenvectors) will be extremely large and should be stored in a sparse format.

Thanks very much for any help!

1 Answer

by (33.1k points)

The scikit-learn library in Python has several PCA variants. Its randomized SVD implementation (formerly RandomizedPCA, now TruncatedSVD) can handle sparse matrices in any of the formats supported by scipy.sparse, and scipy.io.mmread can parse the Matrix Market format. For more details, see the scikit-learn and SciPy documentation.
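Here is a minimal sketch of that workflow. It assumes the matrix was exported to a file named dtm.mtx (a hypothetical name) and uses TruncatedSVD, scikit-learn's current randomized-SVD estimator for sparse input; the n_components value is just an arbitrary starting point, not a recommendation:

import numpy as np
from scipy.io import mmread
from scipy.sparse import csr_matrix
from sklearn.decomposition import TruncatedSVD

# Load the Matrix Market file; mmread returns a COO sparse matrix,
# which we convert to CSR for efficient row operations.
X = csr_matrix(mmread("dtm.mtx"))   # shape: (22490, 120000)

# Randomized truncated SVD works directly on scipy.sparse input,
# so the dense 120,000 x 120,000 covariance matrix is never formed.
# Note: TruncatedSVD does not mean-center the data (centering would
# destroy sparsity), so this is the usual LSA-style approximation of
# PCA for large document-term matrices.
svd = TruncatedSVD(n_components=300, algorithm="randomized", random_state=0)
X_reduced = svd.fit_transform(X)

# Keep only the leading components that together explain 90% of the
# variance; increase n_components above if 90% is never reached.
cum_var = np.cumsum(svd.explained_variance_ratio_)
n_keep = int(np.searchsorted(cum_var, 0.90)) + 1
print(f"{n_keep} components explain {cum_var[n_keep - 1]:.1%} of the variance")
X_90 = X_reduced[:, :n_keep]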

Hope this answer helps you!
