I am trying to train a Doc2Vec model with gensim on 114M unique documents/labels and a vocabulary of around 3M unique words, on a Linux machine on Azure with 115 GB of RAM. When I run build_vocab, the iterator parses all the files and then throws the memory error listed below.
Traceback (most recent call last):
File "doc_2_vec.py", line 63, in <module>
model.build_vocab(sentences.to_array())
File "/home/meghana/.local/lib/python2.7/site-packages/gensim/models/word2vec.py", line 579, in build_vocab
self.finalize_vocab(update=update) # build tables & arrays
File "/home/meghana/.local/lib/python2.7/site-packages/gensim/models/word2vec.py", line 752, in finalize_vocab
self.reset_weights()
File "/home/meghana/.local/lib/python2.7/site-packages/gensim/models/doc2vec.py", line 662, in reset_weights
self.docvecs.reset_weights(self)
File "/home/meghana/.local/lib/python2.7/site-packages/gensim/models/doc2vec.py", line 390, in reset_weights
self.doctag_syn0 = empty((length, model.vector_size), dtype=REAL)
MemoryError
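The failing line allocates one float32 vector per document tag in a single array (`doctag_syn0`). As a rough sanity check (a back-of-envelope sketch using the numbers from the question, not an exact gensim accounting), that allocation alone is larger than the machine's RAM:

```python
# Approximate size of the doctag_syn0 array that reset_weights() tries
# to allocate: one float32 vector per unique document tag.
num_doctags = 114 * 10**6   # 114M unique documents/labels
vector_size = 300
bytes_per_float32 = 4

doctag_bytes = num_doctags * vector_size * bytes_per_float32
print(doctag_bytes)           # 136800000000 bytes, i.e. ~136.8 GB
print(doctag_bytes / 2**30)   # ~127.4 GiB -- already more than 115 GB of RAM
```

So the MemoryError is expected with these dimensions: the doctag matrix by itself cannot fit, before any word-vector arrays are counted.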
My code-
import glob
import parquet
import json
import collections
import multiprocessing

# gensim modules
from gensim import utils
from gensim.models.doc2vec import LabeledSentence
from gensim.models import Doc2Vec

class LabeledLineSentence(object):
    def __init__(self, sources):
        self.sources = sources

    def __iter__(self):
        for src in self.sources:
            with open(src) as fo:
                for row in parquet.DictReader(fo, columns=['Id', 'tokens']):
                    yield LabeledSentence(utils.to_unicode(row['tokens']).split('\x01'), [row['Id']])

## list of files to be opened ##
sources = glob.glob("/data/meghana_home/data/*")
sentences = LabeledLineSentence(sources)

#pre = Doc2Vec(min_count=0)
#pre.scan_vocab(sentences)
"""
for num in range(0, 20):
    print('min_count: {}, size of vocab: '.format(num), pre.scale_vocab(min_count=num, dry_run=True)['memory']['vocab']/700)
print("done")
"""

NUM_WORKERS = multiprocessing.cpu_count()
NUM_VECTORS = 300
model = Doc2Vec(alpha=0.025, min_alpha=0.0001, min_count=15, window=3,
                size=NUM_VECTORS, sample=1e-4, negative=10, workers=NUM_WORKERS)
model.build_vocab(sentences)
print("built vocab.......")
model.train(sentences, total_examples=model.corpus_count, epochs=10)
Memory usage as per top is-
Can someone please tell me how much memory is expected? Which is the better option: adding swap space (and slowing the process down) or adding more memory, so that the cost of the cluster might eventually be equivalent? Which vectors does gensim keep in memory? Is there any flag I am missing for memory-efficient usage?
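For the expected-memory question, a rough estimate (an approximation based on gensim's main float32 arrays and the sizes in the question, not an exact measurement) covers three big allocations: the input word vectors (`syn0`), the output weights for negative sampling (`syn1neg`), and the per-document vectors (`doctag_syn0`):

```python
# Back-of-envelope estimate of the model's main float32 arrays.
vocab_size = 3 * 10**6        # ~3M words (before min_count trimming)
num_doctags = 114 * 10**6     # 114M document tags
vector_size = 300
f32 = 4                       # bytes per float32

syn0 = vocab_size * vector_size * f32       # input word vectors
syn1neg = vocab_size * vector_size * f32    # output weights (negative sampling)
doctags = num_doctags * vector_size * f32   # doctag_syn0

for name, size in [("syn0", syn0), ("syn1neg", syn1neg), ("doctag_syn0", doctags)]:
    print("%-12s %6.1f GB" % (name, size / 1e9))
print("total       %6.1f GB" % ((syn0 + syn1neg + doctags) / 1e9))
# syn0          3.6 GB, syn1neg  3.6 GB, doctag_syn0 136.8 GB -> total ~144 GB
```

The doctag array dominates, so swap or the vocabulary `min_count` will not help much by themselves. If your gensim version supports it, the `Doc2Vec` constructor's `docvecs_mapfile` argument memory-maps the doctag array to a file on disk instead of holding it all in RAM, trading training speed for memory; that may be a cheaper route than either swap or a larger machine.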
