I am trying to understand how an LSTM is used to classify text sentences (word sequences) consisting of pre-trained word embeddings. I have been reading through some posts about LSTMs, and I am confused about the detailed procedure:
IMDB classification using an LSTM in Keras: https://machinelearningmastery.com/sequence-classification-lstm-recurrent-neural-networks-python-keras/
Colah's explanation of LSTMs: http://colah.github.io/posts/2015-08-Understanding-LSTMs/
Say, for example, I want to use an LSTM to classify movie reviews, where each review has a fixed length of 500 words, and I am using pre-trained word embeddings (from fastText) that give a 100-dimensional vector for each word. What will be the dimensions of Xt fed into the LSTM, and how is the LSTM trained? If each Xt is a 100-dimensional vector representing one word in a review, do I feed the words of a review into the LSTM one at a time? What does the LSTM do in each epoch? I am really confused...
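For concreteness, here is how I currently picture it, as a minimal sketch (the embedding_matrix below is a hypothetical stand-in for the real fastText vectors, and I am assuming Keras's Embedding layer can take pre-trained weights via the weights argument):

import numpy as np
from keras.models import Sequential
from keras.layers import Dense, LSTM, Embedding

vocab_size = 5000        # hypothetical vocabulary size
embedding_dim = 100      # fastText vectors are 100-dimensional here
max_review_length = 500  # each review padded/truncated to 500 words

# hypothetical stand-in: row i holds the fastText vector for word index i
embedding_matrix = np.random.rand(vocab_size, embedding_dim)

model = Sequential()
model.add(Embedding(vocab_size, embedding_dim,
                    weights=[embedding_matrix],  # load the pre-trained vectors
                    input_length=max_review_length,
                    trainable=False))            # keep the embeddings frozen
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
# after the Embedding layer, each review is a (500, 100) matrix,
# so each timestep's Xt would be one 100-dimensional word vector

Is that right? For reference, here is the full example from Jason Brownlee's blog post: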
# LSTM for sequence classification in the IMDB dataset
import numpy
from keras.datasets import imdb
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.layers import Embedding
from keras.preprocessing import sequence
# fix random seed for reproducibility
numpy.random.seed(7)
# load the dataset but only keep the top n words, zero the rest
top_words = 5000
(X_train, y_train), (X_test, y_test) = imdb.load_data(num_words=top_words)
# truncate and pad input sequences
max_review_length = 500
X_train = sequence.pad_sequences(X_train, maxlen=max_review_length)
X_test = sequence.pad_sequences(X_test, maxlen=max_review_length)
# create the model
embedding_vector_length = 32
model = Sequential()
model.add(Embedding(top_words, embedding_vector_length, input_length=max_review_length))
model.add(LSTM(100))
model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
model.fit(X_train, y_train, epochs=3, batch_size=64)
# Final evaluation of the model
scores = model.evaluate(X_test, y_test, verbose=0)
print("Accuracy: %.2f%%" % (scores[1]*100))
In the above code example (taken from Jason Brownlee's blog, https://machinelearningmastery.com/sequence-classification-lstm-recurrent-neural-networks-python-keras/), an LSTM layer of 100 cells/neurons is used. How are the 100 neurons interconnected? Why can't I just use a single cell for classification, since the layer is recurrent and feeds its output back to itself at the next timestep? Any visualization graphs would be welcome.
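To check my understanding of the 100 units, here is a small sketch of the parameter count I would expect for that LSTM layer (assuming Keras's standard LSTM with bias terms and its 4 gates: input, forget, cell, output):

# LSTM(100) on the 32-dimensional embedding inputs from the code above
units, input_dim = 100, 32
# each of the 4 gates has an input kernel, a recurrent kernel, and a bias
params = 4 * (units * input_dim + units * units + units)
print(params)  # 53200, which should match model.summary() for the LSTM layer

If I read this correctly, the 100 cells are not chained one after another; they sit side by side in one layer, and the layer's 100-dimensional hidden state is fed back to all of them at the next timestep (hence the units * units recurrent term). Is that the right picture?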
Thanks!!