I'm following a tutorial on recurrent neural networks, and I am training an RNN to learn how to predict the next letter of the alphabet, given a sequence of letters. The problem is that my RAM usage slowly goes up with every epoch I train the network for. I cannot finish training this network because I have "only" 8192 MB of RAM, and it is exhausted after roughly 100 epochs. Why is this? I think it has something to do with the way LSTMs work, since they do keep some information in memory, but it would be nice if someone could explain this in more detail.

The code I'm using is relatively simple and completely self-contained (you can copy/paste and run it; there is no need for an external dataset, since the dataset is just the alphabet). Therefore I have included it in full, so the problem is easily reproducible.

The TensorFlow version I am using is 1.14.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import LSTM
from keras.utils import np_utils
from keras_preprocessing.sequence import pad_sequences
np.random.seed(7)

# define the raw dataset
alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

# create mapping of characters to integers (0-25) and the reverse
char_to_int = dict((c, i) for i, c in enumerate(alphabet))
int_to_char = dict((i, c) for i, c in enumerate(alphabet))

num_inputs = 1000
max_len = 5
dataX = []
dataY = []
for i in range(num_inputs):
    start = np.random.randint(len(alphabet)-2)
    end = np.random.randint(start, min(start+max_len,len(alphabet)-1))
    sequence_in = alphabet[start:end+1]
    sequence_out = alphabet[end + 1]
    dataX.append([char_to_int[char] for char in sequence_in])
    dataY.append(char_to_int[sequence_out])
    print(sequence_in, "->", sequence_out)

#Pad sequences with 0's, reshape X, then normalize data
X = pad_sequences(dataX, maxlen=max_len, dtype="float32")
X = np.reshape(X, (X.shape[0], max_len, 1))
X = X / float(len(alphabet))
print(X.shape)

#OHE the output variable.
y = np_utils.to_categorical(dataY)

#Create & fit the model
batch_size=1
model = Sequential()
model.add(LSTM(32, input_shape=(X.shape[1], 1)))
model.add(Dense(y.shape[1], activation="softmax"))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X, y, epochs=500, batch_size=batch_size, verbose=2)
Which TF version is this? – thushv89

The version is 1.14.0 – Psychotechnopath

I would also ask for the version. This code doesn't look bad at all. – Daniel Möller

I had the exact same problem in TF 1.10, using the Estimator API. After I opened an issue and couldn't get it solved, I just gave up, and never had this problem again... I would try the same again in another environment - TensorFlow's response did mention checking whether the TF version is running on the right CUDA. 1.14 is CUDA 10, are you using it? tensorflow.org/install/source#gpu – bluesummers

nicholas-leonard suggests in this issue (github.com/Element-Research/rnn/issues/5) that "the memory limit could possibly be located in the LSTM code as it maintains a state (which is a table stored in a table) for each step". Take a look also at Marcin Możejko's answer (stackoverflow.com/questions/41731743/…) with two suggestions you could use to decrease the amount of memory. – kevin

1 Answer

The problem is that your training runs are rather long: 1000 samples fed one at a time (batch_size=1) for 500 epochs, which is a lot. As LSTM units do maintain some kind of state across epochs, your RAM gets flooded over time, especially when you're training on a CPU. I suggest you try training on a GPU, which has dedicated memory of its own. Also check out this issue: https://github.com/Element-Research/rnn/issues/5
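One workaround that is sometimes suggested for slow RAM growth during long Keras training runs (an assumption on my part, not something verified in this answer) is forcing Python's garbage collector to run at the end of every epoch. In Keras you would put the call inside `on_epoch_end` of a `keras.callbacks.Callback` subclass; the sketch below keeps the hook stdlib-only so it runs anywhere, and the class name is illustrative:

```python
import gc

class GarbageCollectEachEpoch:
    """Sketch of a per-epoch cleanup hook.

    In Keras this would subclass keras.callbacks.Callback and be passed
    to model.fit(..., callbacks=[GarbageCollectEachEpoch()]). The class
    name and the stdlib-only shape here are illustrative assumptions.
    """

    def on_epoch_end(self, epoch, logs=None):
        # Reclaim any cyclic garbage left behind by the finished epoch.
        collected = gc.collect()
        return collected
```

With a real Keras model you would pass an instance through the `callbacks` argument of `model.fit`. If that alone is not enough, calling `keras.backend.clear_session()` between separate training runs releases the accumulated graph state.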