6 votes

I'm using scikit-learn (v0.15.2) for non-negative matrix factorization on a large sparse matrix (less than 1% of values > 0). I want to find factors by minimizing errors only on the non-zero values of the matrix (i.e., not computing errors for entries that are zero), while also favoring sparsity. I'm not sure whether something is wrong with what I'm trying. The scikit-learn package's NMF and ProjectedGradientNMF have worked well for me before, but as the matrix size increases the factorization becomes terribly slow.

I'm talking about matrices with > 10^10 cells. For a matrix with ~10^7 cells, I find the execution time to be fine.

The parameters I've used are as follows:

nmf_model = NMF(n_components=100, init='nndsvd', random_state=0, tol=0.01, sparseness='data')

When I tried slightly different parameters (changing to init='random'), I got the following warning, after which the execution of the script halts.

/lib/python2.7/site-packages/sklearn/decomposition/nmf.py:252: UserWarning: Iteration limit reached in nls subproblem.
  warnings.warn("Iteration limit reached in nls subproblem.")

Is there a way to make this faster and solve the above problem? I've tried using scipy sparse matrices (column- and row-compressed formats), but surprisingly it's slower on the test I did with a smaller matrix (~10^7 cells).
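For reference, here is a minimal sketch of the setup described above, with a randomly generated sparse matrix standing in for the real data (the sparseness argument matches the 0.15.x API I'm on and was removed in later scikit-learn releases):

from scipy.sparse import rand as sparse_rand
from sklearn.decomposition import NMF

# ~1% non-zero entries, stored in CSR format (toy size, for illustration only)
A = sparse_rand(10000, 1000, density=0.01, format='csr', random_state=0)

nmf_model = NMF(n_components=100, init='nndsvd', random_state=0,
                tol=0.01, sparseness='data')
W = nmf_model.fit_transform(A)   # row factors
H = nmf_model.components_        # column factors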

Considering that one would have to run multiple iterations of such a factorization (to choose an ideal number of factors and for k-fold cross-validation), a faster way to solve this problem is highly desirable.

I'm also open to suggestions of packages/tools that are not based on sklearn or Python. I understand that questions about package/tool choices are not encouraged, but for such a specific use case, knowing what techniques others in the field use would be very helpful.


3 Answers

2 votes

A few words on what the underlying problem is about would help us give better answers.

Matrix factorization on a very large matrix is always going to be slow due to the nature of the problem.

Suggestions: Reducing n_components to < 20 will speed it up somewhat. However, the only real improvement in speed will come from limiting the size of the matrix. With a matrix like the one you describe, my guess is that you are trying to factorize a term-frequency matrix. If so, you could use the vectorization functions in scikit-learn to limit the size of the matrix; most of them have a max_features parameter. Example:

from sklearn.feature_extraction.text import TfidfVectorizer

# cap the vocabulary to keep the resulting matrix narrow
vectorizer = TfidfVectorizer(max_features=10000, ngram_range=(1, 2))
tfidf = vectorizer.fit_transform(data)

This will speed up the factorization significantly.

If I am completely wrong and this is not a term-frequency problem, I would still look into ways of limiting the size of the initial matrix you are trying to factorize.

2 votes

You might want to take a look at this article, which discusses more recent techniques for NMF: http://www.cc.gatech.edu/~hpark/papers/nmf_blockpivot.pdf

The idea is to work only on the non-zero entries during factorization, which reduces computation time, especially when the matrices involved are very sparse.

Also, one of the authors of that article has published NMF implementations on GitHub, including the ones described in the article. Here's the link: https://github.com/kimjingu/nonnegfac-python
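If it's useful, here is a rough usage sketch of that repository. The import path, class name, and run() signature below are my assumptions based on the project's README, so check the repository for the actual API:

import numpy as np
from nonnegfac.nmf import NMF   # assumed import path; see the repo's README

A = np.random.rand(300, 200)    # toy dense matrix, just for illustration
W, H, info = NMF().run(A, 10)   # assumed interface: factorize A with k = 10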

Hope that helps.

0 votes

Old question, new answer.

The OP asks for "zero-masked" NMF, where zeros are treated as missing values. This will never be faster than regular NMF. Consider NMF by alternating least squares: there, the left-hand side of the systems of equations is generally constant (it is simply the tcrossprod of W or H), but in zero-masked NMF it needs to be re-calculated for every single sample or feature.
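To make that concrete, here is a rough numpy sketch of the two W-updates (plain least squares only; nonnegativity constraints and convergence checks are omitted, and the function names are just for illustration):

import numpy as np

def update_W_plain(A, H):
    # ordinary ALS: one k x k Gram matrix shared by every row of A
    G = H @ H.T
    return np.linalg.solve(G, H @ A.T).T

def update_W_masked(A, H):
    # zero-masked ALS: zeros are missing, so each row needs its own Gram matrix
    W = np.zeros((A.shape[0], H.shape[0]))
    for i in range(A.shape[0]):
        nz = A[i] != 0                     # observed entries of this row
        Hi = H[:, nz]
        # assumes each row has enough non-zero entries for a well-posed solve
        W[i] = np.linalg.solve(Hi @ Hi.T, Hi @ A[i, nz])
    return W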

I've implemented zero-masked NMF in the RcppML R package. You can install it from CRAN and use the nmf function with the mask_zeros argument set to TRUE:

install.packages("RcppML")
library(Matrix)                                     # provides rsparsematrix()
A <- rsparsematrix(1000, 1000, 0.1)                 # simulate a random sparse matrix
model <- RcppML::nmf(A, k = 10, mask_zeros = TRUE)

My NMF implementation is faster than scikit-learn's without masking zeros, and it shouldn't be impossibly slow for 99% sparse matrices.