10
votes

This is more of a deep learning conceptual problem, and if this is not the right platform I'll take it elsewhere.

I'm trying to use a Keras LSTM sequential model to learn sequences of text and map them to a numeric value (a regression problem).

The thing is, training always converges too fast to a high loss (on both the training and test sets). I've tried many hyperparameter combinations, and I suspect a local-minimum issue is causing the model's high bias.

My questions are basically:

  1. How should I initialize the weights and biases for this problem?
  2. Which optimizer should I use?
  3. How deep should I make the network? (I'm afraid that a very deep network will make the training time unbearable and increase the model's variance.)
  4. Should I add more training data?

Input and output are normalized with min-max scaling.
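For reference, min-max normalization as described can be sketched in NumPy (the array `y` here is a made-up example, not my actual data):

```python
import numpy as np

# Hypothetical target values; min-max scaling maps them into [0, 1].
y = np.array([3.0, 7.5, 1.2, 9.9, 4.4])

y_min, y_max = y.min(), y.max()
y_scaled = (y - y_min) / (y_max - y_min)

# Predictions can be mapped back to the original range afterwards:
y_restored = y_scaled * (y_max - y_min) + y_min
```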

I am using SGD with momentum, currently with 3 LSTM layers (126, 256, 128 units) and 2 dense layers (200 units and 1 output neuron).
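A minimal sketch of the architecture described above, assuming a many-to-one setup (`timesteps` and `features` are placeholders chosen for illustration, not my real input shape):

```python
from tensorflow import keras
from tensorflow.keras import layers

timesteps, features = 20, 8  # placeholder input shape

model = keras.Sequential([
    keras.Input(shape=(timesteps, features)),
    layers.LSTM(126, return_sequences=True),
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(128),                  # last LSTM returns only the final state
    layers.Dense(200, activation="relu"),
    layers.Dense(1),                   # linear output for regression
])
model.compile(
    optimizer=keras.optimizers.SGD(learning_rate=0.01, momentum=0.9),
    loss="mse",
)
```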

I printed the weights after a few epochs and noticed that many of them are zero and the rest basically have the value 1 (or very close to it).

Here are some plots from TensorBoard:

2
I like using the 'adam' optimizer; it often finds its way automatically. But this can't be answered without many tests and details. It seems your learning rate may be too high, but that may not be the only cause. - Daniel Möller
What is your activation function? - Marcin Możejko

2 Answers

13
votes

Fast convergence to a very high loss could mean you are facing an exploding-gradient problem. Try a much lower learning rate, like 1e-5 or 1e-6. You can also try techniques like gradient clipping to limit your gradients when using higher learning rates.
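Gradient clipping by norm can be sketched in plain NumPy; in Keras the same idea is available via an optimizer's `clipnorm` argument (e.g. `SGD(clipnorm=1.0)`):

```python
import numpy as np

def clip_by_norm(grad, max_norm):
    # Rescale the gradient so its L2 norm never exceeds max_norm,
    # preserving its direction.
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad

g = np.array([30.0, 40.0])       # "exploding" gradient, norm = 50
clipped = clip_by_norm(g, 5.0)   # rescaled to norm 5, same direction
```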

Answer 1

Another reason could be the weight initialization; try the three methods below:

  1. He initialization, described in this paper: https://arxiv.org/abs/1502.01852
  2. Xavier initialization
  3. Random initialization

In many cases the first initialization method works best.
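The three schemes can be sketched in NumPy; in Keras they correspond to the `he_normal`, `glorot_normal`, and `random_normal` initializers (the layer sizes here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out = 256, 128  # example layer sizes

# 1. He initialization (the paper linked above): std = sqrt(2 / fan_in)
w_he = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

# 2. Xavier/Glorot initialization: std = sqrt(2 / (fan_in + fan_out))
w_xavier = rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)),
                      size=(fan_in, fan_out))

# 3. Plain random initialization with a small fixed std
w_random = rng.normal(0.0, 0.01, size=(fan_in, fan_out))
```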

Answer 2

You can try different optimizers like

  1. Momentum optimizer
  2. SGD or Gradient descent
  3. Adam optimizer

The choice of optimizer should suit your loss function. For example, for a logistic regression problem with MSE as the loss function, the loss surface is non-convex, so plain gradient-based optimizers may fail to converge to the global minimum.
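As an illustration of why Adam is a popular default, here is a single-parameter Adam loop in NumPy, minimizing the toy quadratic f(w) = (w - 3)^2 (the function and hyperparameters are made up for the demo):

```python
import numpy as np

def adam_minimize(grad_fn, w, lr=0.01, beta1=0.9, beta2=0.999,
                  eps=1e-8, steps=2000):
    m = v = 0.0
    for t in range(1, steps + 1):
        g = grad_fn(w)
        m = beta1 * m + (1 - beta1) * g        # first-moment (mean) estimate
        v = beta2 * v + (1 - beta2) * g * g    # second-moment estimate
        m_hat = m / (1 - beta1 ** t)           # bias correction
        v_hat = v / (1 - beta2 ** t)
        w -= lr * m_hat / (np.sqrt(v_hat) + eps)
    return w

# Toy quadratic f(w) = (w - 3)^2 with gradient 2 * (w - 3); minimum at w = 3.
w_opt = adam_minimize(lambda w: 2.0 * (w - 3.0), w=0.0)
```

Note how the per-step update size is roughly `lr` regardless of the raw gradient magnitude, which is exactly what makes Adam forgiving when gradients explode or vanish.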

Answer 3

How deep or wide your network should be again depends entirely on which type of network you are using and on the problem.

As you said, you are using a sequential LSTM model to learn sequences of text. Your choice of model is a good fit for this problem; you can also try stacking 4-5 LSTM layers.

Answer 4

If your gradients go to zero, that is the vanishing-gradient problem; if they blow up to infinity, that is the exploding-gradient problem. Either one can cause premature convergence. Try gradient clipping with a proper learning rate, together with the first weight-initialization technique.

I am confident this will solve your problem.

0
votes

Consider reducing your batch_size. With a large batch_size, the averaged gradient smooths out the stochastic variation in your data, and for that reason training can converge earlier.
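The intuition can be sketched in NumPy: the spread of a mini-batch gradient estimate shrinks as the batch grows, so very large batches remove the stochastic noise that sometimes helps training escape poor minima (the per-sample gradients here are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic per-sample gradients: mean 1.0, standard deviation 2.0.
per_sample_grads = rng.normal(loc=1.0, scale=2.0, size=10_000)

def batch_grad_std(batch_size, n_batches=2_000):
    # Standard deviation of the averaged gradient across random mini-batches.
    means = [rng.choice(per_sample_grads, size=batch_size).mean()
             for _ in range(n_batches)]
    return np.std(means)

small = batch_grad_std(8)     # noisy estimate: lots of stochastic variation
large = batch_grad_std(512)   # much smoother estimate: little variation left
```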