I was training a word2vec (skip-gram) model with a vocabulary of 100,000 words. At test time, however, I encountered a few words that weren't in the vocabulary. To find embeddings for them I tried two approaches:
1. Find the in-vocabulary word with the minimum edit distance to the unknown word and use its embedding.
2. Construct character n-grams from the unknown word and look them up in the vocabulary (a rough sketch of both fallbacks is shown below).
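Here is a minimal sketch of what I mean by the two fallbacks, assuming a hypothetical dict `vocab_vectors` that maps in-vocabulary words to their trained embeddings (e.g. built from the model's word vectors):

```python
import numpy as np

def edit_distance(a: str, b: str) -> int:
    """Plain Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def nearest_by_edit_distance(word, vocab_vectors):
    """Approach 1: embedding of the closest in-vocab word by edit distance."""
    closest = min(vocab_vectors, key=lambda w: edit_distance(word, w))
    return vocab_vectors[closest]

def ngram_fallback(word, vocab_vectors, n_min=3, n_max=6):
    """Approach 2: average the embeddings of those character n-grams of the
    word that happen to exist as vocabulary entries (None if none do)."""
    grams = [word[i:i + n] for n in range(n_min, n_max + 1)
             for i in range(len(word) - n + 1)]
    hits = [vocab_vectors[g] for g in grams if g in vocab_vectors]
    return np.mean(hits, axis=0) if hits else None
```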
Despite applying these methods, I still can't get rid of the out-of-vocabulary problem completely.
Does word2vec take all n-grams of a word into account during training, the way fastText does?
Note - in fastText, if the input word is quora, the model considers all of that word's possible character n-grams, not just the full word.
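To illustrate, here is a rough sketch of the subword scheme fastText uses (word wrapped in `<` and `>` boundary markers, n-gram lengths 3 to 6 by default); it is meant as an illustration of the idea, not fastText's actual implementation:

```python
def fasttext_style_ngrams(word, n_min=3, n_max=6):
    """Enumerate character n-grams of the boundary-wrapped word."""
    wrapped = f"<{word}>"
    return [wrapped[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(wrapped) - n + 1)]

print(fasttext_style_ngrams("quora"))
# 3-grams alone: '<qu', 'quo', 'uor', 'ora', 'ra>', plus the longer n-grams
```

Because of this, a fastText model can compose a vector for an unseen word from its subword n-grams at query time, which is exactly the behaviour I'm asking about for word2vec.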