
I am exploring deep learning methods, especially LSTMs, to predict the next word. Suppose my dataset looks like this: each data point consists of 7 features (7 different words, A-G here) of different length.

 Group1   Group2   ...   Group38
   A        B              F
   E        C              A
   B        E              G
   C        D              G
   C        F              F
   D        G              G
   .        .              .
   .        .              .

I used one-hot encoding for the input layer. Here is the model:

from keras.layers import Input, LSTM, Dropout, Dense, Activation
from keras.models import Model

# action_count = number of distinct words (the size of each one-hot vector)
main_input = Input(shape=(None, action_count), name='main_input')
lstm_out = LSTM(units=64, activation='tanh')(main_input)
lstm_out = Dropout(0.2)(lstm_out)
lstm_out = Dense(action_count)(lstm_out)
main_output = Activation('softmax')(lstm_out)
model = Model(inputs=[main_input], outputs=main_output)
print(model.summary())

Using this model, I got an accuracy of about 60%. My question is: how can I use an embedding layer for my problem? I do not know much about embeddings (why, when, and how they work); I only know that a one-hot vector does not carry much information. I am wondering whether an embedding can improve accuracy. Any guidance would be greatly appreciated, at least on whether using an embedding makes sense in my case.


1 Answer


What are Embedding layers?

They are layers that convert positive integers (word indices) into fixed-size dense vectors. They learn the so-called embeddings for a particular text dataset (in NLP tasks).

Why are they useful?

Embedding layers gradually learn the relationships between words. Hence, if you have a large enough corpus (one that probably contains all common English words), then the vectors for words like "king" and "queen" will show some similarity in the multidimensional space of the embedding.
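For intuition, that "similarity" is usually measured as the cosine similarity between the learned vectors. Here is a minimal sketch with made-up 2-D vectors; in practice you would read the real vectors out of a trained Embedding layer:

import numpy as np

def cosine_similarity(a, b):
    # 1.0 means the vectors point in the same direction, lower means less related.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical learned vectors: related words end up close together.
king = np.array([0.9, 0.8])
queen = np.array([0.85, 0.9])
banana = np.array([-0.7, 0.1])

print(cosine_similarity(king, queen))   # close to 1.0
print(cosine_similarity(king, banana))  # much lower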

How are they used in Keras?

The keras.layers.Embedding layer has the following signature:

keras.layers.Embedding(input_dim, output_dim, embeddings_initializer='uniform', embeddings_regularizer=None, activity_regularizer=None, embeddings_constraint=None, mask_zero=False, input_length=None) 

It turns positive integers (indexes) into dense vectors of fixed size, e.g. [[4], [20]] -> [[0.25, 0.1], [0.6, -0.2]]. This layer can only be used as the first layer in a model.

Here, input_dim is the vocabulary size + 1. The vocabulary is the set of all words used in the dataset. input_length is the length of the input sequences, whereas output_dim is the dimensionality of the output vectors (the number of dimensions in the vector for a particular word).
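As a minimal, self-contained sketch of what the layer does (the numbers here are toy values, not taken from the question):

import numpy as np
from keras.models import Sequential
from keras.layers import Embedding

# Toy setup: vocabulary of 10 words (+1 for the reserved index 0),
# each word mapped to a 2-dimensional vector, sequences of length 5.
model = Sequential()
model.add(Embedding(input_dim=10 + 1, output_dim=2, input_length=5))
model.compile('rmsprop', 'mse')

# One sequence of 5 word indices in, one 5 x 2 matrix of dense vectors out.
word_indices = np.array([[4, 7, 1, 0, 0]])
print(model.predict(word_indices).shape)  # (1, 5, 2)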

The layer can also be used with pretrained word embeddings like Word2Vec or GloVe.
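A rough sketch of how that is usually wired up. Note that word_index, glove_vectors, and the tiny vectors below are hypothetical stand-ins: in practice word_index would come from your tokenizer and glove_vectors would be loaded from a GloVe file.

import numpy as np
from keras.layers import Embedding

# Hypothetical stand-ins for illustration only.
word_index = {'king': 1, 'queen': 2, 'banana': 3}
glove_vectors = {'king': np.array([0.9, 0.8]), 'queen': np.array([0.85, 0.9])}
embedding_dim = 2  # must match the dimensionality of the pretrained vectors

# Row i holds the pretrained vector of the word with index i;
# words missing from the pretrained set (here 'banana') keep a zero row.
embedding_matrix = np.zeros((len(word_index) + 1, embedding_dim))
for word, i in word_index.items():
    vector = glove_vectors.get(word)
    if vector is not None:
        embedding_matrix[i] = vector

embedding_layer = Embedding(input_dim=len(word_index) + 1,
                            output_dim=embedding_dim,
                            weights=[embedding_matrix],
                            input_length=5,
                            trainable=False)  # keep the pretrained vectors fixed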

Are they suitable for my use case?

Absolutely, yes. For a sequence task like yours, if we can learn a context (an embedding) for each word, we can definitely improve the model's accuracy.

How can I use them in my use case?

Follow these steps:

  1. Tokenize the sequences, e.g. with keras.preprocessing.text.Tokenizer.
  2. Pad the sequences to a fixed length using keras.preprocessing.sequence.pad_sequences. This length becomes the input_length parameter of the Embedding layer.
  3. Initialize the model with an Embedding layer as the first layer, as in the sketch below.
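
Putting it all together for your next-word setup, here is a rough sketch. The toy corpus, the embedding size of 32, and max_len = 6 are placeholders, not values from your data:

from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.layers import Input, Embedding, LSTM, Dropout, Dense, Activation
from keras.models import Model

# Toy stand-in for your data: each string is one sequence of words/actions.
sequences_as_text = ['A E B C C D', 'B C E D F G']

# 1. Tokenize: map every word to a positive integer index.
tokenizer = Tokenizer()
tokenizer.fit_on_texts(sequences_as_text)
encoded = tokenizer.texts_to_sequences(sequences_as_text)
vocab_size = len(tokenizer.word_index) + 1  # +1 because index 0 is reserved for padding

# 2. Pad to a fixed length; this becomes input_length of the Embedding layer.
max_len = 6
padded = pad_sequences(encoded, maxlen=max_len, padding='post')

# 3. Same model shape as before, but it now takes integer indices and learns
#    an embedding instead of consuming one-hot vectors directly.
main_input = Input(shape=(max_len,), dtype='int32', name='main_input')
x = Embedding(input_dim=vocab_size, output_dim=32, input_length=max_len)(main_input)
x = LSTM(units=64, activation='tanh')(x)
x = Dropout(0.2)(x)
x = Dense(vocab_size)(x)
main_output = Activation('softmax')(x)

model = Model(inputs=[main_input], outputs=main_output)
model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
print(model.summary())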

Hope this helps.