0 votes

For my NLP project I used CountVectorizer to extract features from a dataset with vectorizer = CountVectorizer(stop_words='english') and all_features = vectorizer.fit_transform(data.Text). I also wrote a simple RNN model in Keras, but I am not sure how to do the tokenizing and padding steps and then train the model on the data.

My RNN code is:

from tensorflow import keras

def build_model():
    model = keras.Sequential()
    model.add(keras.layers.SimpleRNN(units=1000, activation='relu', use_bias=True))
    model.add(keras.layers.Dense(units=1000, activation='sigmoid'))
    model.add(keras.layers.Dense(units=500, activation='relu'))
    model.add(keras.layers.Dense(units=2, activation='softmax'))
    model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
    return model

Can someone please give me some advice on this?

Thank you

Will you show your RNN code? Did you create a pipeline? – Golden Lion
I didn't create a pipeline. I created a simple RNN with some dense layers. – tamilini
Please show the code. – Golden Lion

1 Answer

0 votes

Use an embedding instead of count vectorizing: one-hot encode the text into integer indices and feed it through an Embedding layer.

https://github.com/dnishimoto/python-deep-learning/blob/master/UFO%20.ipynb

 from tensorflow.keras.preprocessing.text import one_hot
 from tensorflow.keras.preprocessing.sequence import pad_sequences
 from tensorflow.keras.models import Sequential
 from tensorflow.keras.layers import Embedding, Flatten, Dense

 # vocab_size and max_length are defined earlier in the notebook
 docs = ufo_df["summary"]  # the text column
 LABELS = ['Egg', 'Cross', 'Sphere', 'Triangle', 'Disk', 'Oval', 'Rectangle', 'Teardrop']
 target = ufo_df[LABELS]   # multi-label targets, one column per shape

 # Encode each document as a list of integer word indices
 encoded_docs = [one_hot(d, vocab_size) for d in docs]

 # Pad every sequence to the same length
 padded_docs = pad_sequences(encoded_docs, maxlen=max_length, padding='post')

 model = Sequential()
 model.add(Embedding(vocab_size, 8, input_length=max_length))
 model.add(Flatten())
 model.add(Dense(8, activation='sigmoid'))  # sigmoid, one independent output per label
 model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])

 model.fit(padded_docs, target, epochs=50, verbose=0)
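To connect this back to the question's SimpleRNN setup, here is a minimal sketch of the tokenize/pad/train flow using Keras's Tokenizer instead of one_hot. The texts, labels, and hyperparameters below are illustrative assumptions, not values from the question's dataset:

```python
import numpy as np
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN, Dense

# Toy data standing in for data.Text and its binary labels
texts = ["great movie", "terrible plot", "loved it", "waste of time"]
labels = np.array([1, 0, 1, 0])

vocab_size = 1000   # assumed vocabulary cap
max_length = 5      # assumed maximum sequence length

# Tokenize: map each word to an integer index
tokenizer = Tokenizer(num_words=vocab_size)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)

# Pad: make every sequence the same length so it can be batched
padded = pad_sequences(sequences, maxlen=max_length, padding='post')

# Embedding feeds dense word vectors into the recurrent layer
model = Sequential([
    Embedding(vocab_size, 16),
    SimpleRNN(32),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(padded, labels, epochs=5, verbose=0)
```

The key point is that the RNN consumes integer sequences of a fixed length (via the Embedding layer), not the sparse matrix produced by CountVectorizer.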