I am relatively new to topic modeling, so I hope this isn't a stupid question.
I have a text corpus of 7k documents, most of which are quite short (just a few words each). Since standard LDA produces only moderately good results on such short texts, I would like to incorporate word vectors pre-trained on a large external corpus (like these: https://nlp.stanford.edu/projects/glove/). However, I haven't found anything that clearly explains how to proceed (I found some information about Python implementations, but I need a solution in R). After downloading the pre-trained word vectors, how do I integrate them into the LDA modeling process for my own corpus?
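For reference, here is roughly where I am. This is only a sketch: the document-term matrix `dtm` is built elsewhere, the GloVe file name is one of the files from the download page, and `k = 10` is just a placeholder.

```r
library(topicmodels)   # standard LDA
library(data.table)    # fast reading of the large GloVe text file

# Baseline: plain LDA on an existing document-term matrix `dtm`
lda_fit <- LDA(dtm, k = 10, method = "Gibbs")

# Load pre-trained GloVe vectors: each line is a word followed by its dimensions
# (quote = "" because the file contains unescaped quote characters)
glove <- fread("glove.6B.100d.txt", header = FALSE, quote = "")
embeddings <- as.matrix(glove[, -1])
rownames(embeddings) <- glove$V1

# What I don't see is how to get from this embedding matrix
# to an embedding-aware topic model of my corpus.
```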
Thanks a lot in advance!