0
votes

I am working on character recognition using convolutional neural networks. I have a 9-layer model, 19,990 training samples, and 4,470 test samples, and I am using Keras with the TensorFlow backend. When I try to train the model, it runs extremely slowly, around 100-200 samples per minute. I tried adding a batch normalization layer after flattening, using regularization, adding dropout layers, using fit_generator to load data from disk batch-wise so that RAM stays free (that performed the worst), and using different batch sizes, but nothing worked. So I reduced the network to 4 layers and added more channels to the initial layers to increase parallel computation, but now I get memory allocation errors: a warning says some allocation exceeds 10% of system memory, and then my entire system freezes, so I have to restart my laptop every time. I tried going back to the earlier 9-layer version, but that now gives the same error too, even though it worked earlier (not really worked, but at least it started training). So, what is the solution to this problem? Is my hardware not capable enough, or is it something else? I have 8 GB RAM and a 2 GB GPU, but I don't use the GPU for training. My CPU is an Intel i5 7th gen.

My model code:

model = Sequential()

#First conv layer
model.add(Conv2D(512,(3,3),padding="same",kernel_initializer="glorot_normal",data_format="channels_last",input_shape=(278,278,1),kernel_regularizer=l1(0.04),activity_regularizer=l2(0.05)))
model.add(LeakyReLU())
model.add(MaxPool2D(pool_size=(2,2),padding="same",data_format="channels_last"))
model.add(Dropout(0.2))

#Second conv layer
model.add(Conv2D(256,(4,4),padding="same",kernel_initializer="glorot_normal",data_format="channels_last",kernel_regularizer=l1(0.02),activity_regularizer=l1(0.04)))
model.add(LeakyReLU())
model.add(MaxPool2D(pool_size=(2,2),strides=2,padding="same",data_format="channels_last"))
model.add(Dropout(0.2))


#Third conv layer
model.add(Conv2D(64,(3,3),padding="same",kernel_initializer="glorot_normal",data_format="channels_last",bias_regularizer=l1_l2(l1=0.02,l2=0.02),activity_regularizer=l2(0.04)))
model.add(LeakyReLU())
model.add(MaxPool2D(pool_size=(2,2),padding="same",data_format="channels_last"))


#Fourth conv layer
model.add(Conv2D(512,(3,3),padding="same",kernel_initializer="glorot_normal",data_format="channels_last",kernel_regularizer=l2(0.04),bias_regularizer=l1(0.02),activity_regularizer=l1_l2(l1=0.04,l2=0.04)))
model.add(LeakyReLU())
model.add(MaxPool2D(pool_size=(2,2),padding="same",data_format="channels_last"))
model.add(Dropout(0.1))


#Fifth conv layer
#model.add(Conv2D(64,(3,3),padding="same",kernel_initializer="glorot_normal",data_format="channels_last"))
# model.add(LeakyReLU())
# model.add(MaxPool2D(pool_size=(2,2),strides=2,padding="same",data_format="channels_last"))

#Sixth conv layer
#model.add(Conv2D(256,(3,3),padding="same",kernel_initializer="glorot_normal",data_format="channels_last"))
#model.add(LeakyReLU())
#model.add(MaxPool2D(pool_size=(2,2),strides=2,padding="same",data_format="channels_last"))
#model.add(Dropout(0.2))


#Seventh conv layer
#model.add(Conv2D(64,(1,1),padding="same",kernel_initializer="glorot_normal",data_format="channels_last"))
#model.add(LeakyReLU())
#model.add(Dropout(0.1))


#Eighth conv layer
#model.add(Conv2D(1024,(3,3),padding="same",kernel_initializer="glorot_normal",data_format="channels_last"))
#model.add(LeakyReLU())
#model.add(MaxPool2D(pool_size=(2,2),strides=2,padding="same",data_format="channels_last"))

#Ninth conv layer
#model.add(Conv2D(425,(1,1),padding="same",kernel_initializer="glorot_normal",data_format="channels_last"))
#model.add(LeakyReLU())
# model.add(MaxPool2D(pool_size=(2,2),strides=2,padding="same",data_format="channels_last"))


#Flatten
model.add(Flatten())

#Batch normalization
model.add(BatchNormalization(axis=1))

#Fullyconnected
model.add(Dense(27,activation="softmax"))



#Compile model
adm = Adam(lr=0.2,decay=0.0002)
model.compile(optimizer=adm,loss="categorical_crossentropy",metrics=['accuracy'])
#train_generator = Generator("dataset.h5",batch_size)
#test_generator = Generator("test_dataset.h5",batch_size)
history = model.fit_generator(generator=train_generator,epochs=epochs,steps_per_epoch=19990//batch_size,validation_data=test_generator,validation_steps=4470//batch_size)

My data loading method:

def Generator(hdf5_file, batch_size):
    X = HDF5Matrix(hdf5_file, "/Data/X")
    Y = HDF5Matrix(hdf5_file, "/Data/Y")

    size = X.end
    idx = 0

    while True:
        last_batch = idx + batch_size > size
        end = idx + batch_size if not last_batch else size
        yield X[idx:end], Y[idx:end]
        idx = end if not last_batch else 0
Any code here? - pouyan
@pouyan which code should I post? My model code and data loading code are separate. I thought of posting the data loading code, but it seems irrelevant. - Shantanu Shinde
I'm not sure, but it seems there is a problem in your data loading. It would be helpful if you could share both your model code and your data loading code. - pouyan
@pouyan added it - Shantanu Shinde
What is the size of each input sample? And what is the value of your batch_size? - pouyan

2 Answers

0
votes

I think (at least) one of your problems is that you are loading your entire data set into RAM. Your data set (training and validation) seems to be at least 5 GB, and your generators load all of it, so with 8 GB of RAM it makes sense that you run into trouble during training.
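A minimal sketch of a generator that slices batches lazily instead of materializing the whole data set (the batching logic mirrors the question's Generator; `batch_generator` is a hypothetical name, and X and Y could be h5py datasets, which read only the requested rows from disk):

```python
import numpy as np

def batch_generator(X, Y, batch_size):
    # X and Y only need to support len() and slicing; with h5py datasets
    # (h5py.File(path)["/Data/X"]) each slice reads just those rows from
    # disk, so RAM use stays bounded by batch_size, not the data set size.
    size = len(X)
    while True:
        for idx in range(0, size, batch_size):
            end = min(idx + batch_size, size)
            yield X[idx:end], Y[idx:end]

# toy in-memory usage:
X = np.arange(10).reshape(10, 1)
Y = np.arange(10)
gen = batch_generator(X, Y, batch_size=4)
xb, yb = next(gen)
print(xb.shape)  # (4, 1)
```

The last batch of each pass is simply shorter, and the for loop restarting inside `while True` gives the wrap-around that fit_generator expects.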

0
votes

I found the problem: I had too many parameters in the model. I reduced the number of channels and it worked. I suspected this because I was getting the error even with a small dataset.
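For anyone hitting the same wall: the cost here is not only the weight count but the per-sample activation memory, which scales with channels × spatial size. A rough estimate for the four active conv blocks in the question (float32, "same" padding, each 2×2 max pool roughly halving the spatial dimensions):

```python
from math import ceil

def activation_mb(h, w, filters):
    # float32 activations: 4 bytes per value, per sample
    return h * w * filters * 4 / 1024**2

h = w = 278
sizes = []
for filters in (512, 256, 64, 512):  # channel counts from the question
    sizes.append(activation_mb(h, w, filters))
    h, w = ceil(h / 2), ceil(w / 2)  # after the 2x2 max pool

print([round(s) for s in sizes])  # ~[151, 19, 1, 2] MB per sample
```

The first 512-channel layer at full 278×278 resolution dominates (about 151 MB per sample, so roughly 4.8 GB of activations for a batch of 32), which is why cutting channels in the early layers relieves the memory pressure.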