0
votes

I am working on a recommendation engine, and one problem I am facing right now is that the similarity matrix of items is huge.

I calculated the similarity matrix for 20,000 items and stored it as a binary file, which turned out to be nearly 1 GB. I think it is too big.

What is the best way to deal with a similarity matrix when you have that many items?

Any advice?

1
Can you provide some more details? What are the contents of the file? What does your matrix look like? - Anand Undavia

1 Answer

1
votes

A similarity matrix describes how similar each object is to every other object. Each row holds the neighbors of one object (the row id), but you don't need to store all of its neighbors; store, for example, only the top 20. Use lil_matrix: from scipy.sparse import lil_matrix
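A minimal sketch of this idea, assuming a dense cosine-similarity matrix computed from random item vectors (the item data and `k=20` are illustrative): keep only the `k` largest similarities per row in a `lil_matrix`, then convert to CSR for compact storage.

```python
import numpy as np
from scipy.sparse import lil_matrix

def top_k_similarity(sim, k):
    """Keep only the k largest similarities per row, as a sparse matrix."""
    n = sim.shape[0]
    sparse_sim = lil_matrix((n, n))
    for i in range(n):
        row = sim[i].copy()
        row[i] = -np.inf                      # exclude self-similarity
        top = np.argpartition(row, -k)[-k:]   # indices of the k largest values
        sparse_sim[i, top] = row[top]
    return sparse_sim.tocsr()                 # CSR: compact and fast row lookups

# Illustrative data: 100 items with 8 features each
rng = np.random.default_rng(0)
items = rng.random((100, 8))
normed = items / np.linalg.norm(items, axis=1, keepdims=True)
sim = normed @ normed.T                       # dense cosine-similarity matrix

sparse_sim = top_k_similarity(sim, k=20)
print(sparse_sim.nnz)                         # 100 * 20 = 2000 stored entries
```

With 20,000 items and 20 neighbors per row, this keeps 400,000 entries instead of 400 million, so the file shrinks by roughly a factor of 1,000. You can persist the result with `scipy.sparse.save_npz`.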