3
votes

I want to predict the trajectory of a falling ball. That trajectory is parabolic. I know an LSTM may be overkill for this (i.e. a simpler method could suffice), but I thought we could do it with 2 LSTM layers and a Dense layer at the end.

The end result I want is to give the model 3 heights h0, h1, h2 and let it predict h3. Then I want to give it h1, h2, and the h3 it just predicted to get h4, and so on, until I can predict the whole trajectory.

Firstly, what would the input shape be for the first LSTM layer? Would it be input_shape=(3,1)? Secondly, would the LSTM be able to predict a parabolic path?

I am getting almost a flat line, not a parabola, and I want to rule out the possibility that I am misunderstanding how to feed and shape the input.

Thank you


1 Answer

2
votes

The input shape is in the form (samples, timeSteps, features).

Your only feature is "height", so features = 1.
And since you're going to input sequences with different lengths, you can use timeSteps = None.

So, your input_shape could be (None, 1).
Since we're going to use a stateful=True layer for predicting below, we need a fixed batch size, so use batch_input_shape=(1, None, 1): one sequence per batch.
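A minimal sketch of such a model (assuming TensorFlow's Keras; the 32-unit layer sizes are arbitrary choices, not tuned values):

```python
from tensorflow import keras
from tensorflow.keras import layers

# batch_shape = (samples, timeSteps, features) = (1, None, 1):
# a fixed batch of 1 (required by stateful=True), variable-length
# sequences, and a single feature (the height).
model = keras.Sequential([
    keras.Input(batch_shape=(1, None, 1)),
    layers.LSTM(32, return_sequences=True, stateful=True),
    layers.LSTM(32, return_sequences=True, stateful=True),
    layers.Dense(1),  # one predicted height per input step
])
```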

Your model can indeed predict the trajectory, but it may need more than one layer. (The exact answer about how many layers and cells depends on knowing how the math inside an LSTM works.)

Training:

Now, first you need to train your network (only then will it be able to start predicting good things).

For training, suppose you have a sequence [h1, h2, h3, h4, h5, h6, ...] of true values in the correct order. (I suggest you actually have many sequences (samples), so your model learns better.)

For this sequence, you want an output predicting the next step, so your target would be [h2, h3, h4, h5, h6, h7, ...].
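As an aside, those training sequences can be generated directly from the physics of a falling ball, h(t) = h0 - (1/2)*g*t^2 (a NumPy sketch; the sampling interval, number of sequences, and range of drop heights are arbitrary assumptions):

```python
import numpy as np

g = 9.8            # gravitational acceleration (m/s^2)
steps = 50         # time steps per sequence
dt = 0.05          # sampling interval (s)
n_sequences = 200  # how many training trajectories

t = np.arange(steps) * dt                                   # shape (steps,)
h0 = np.random.uniform(20.0, 100.0, size=(n_sequences, 1))  # random drop heights
heights = h0 - 0.5 * g * t**2                               # shape (n_sequences, steps)

# Reshape to (manySequences, steps, 1) so it matches the LSTM input format.
data = heights[:, :, np.newaxis]
```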

So, suppose you have a data array with shape (manySequences, steps, 1), you make:

x_train = data[:,:-1,:]  # every step except the last
y_train = data[:,1:,:]   # every step except the first (shifted one step ahead)

Now, your layers should be using return_sequences=True (every input step produces an output step), and you train the model with this data.
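Put together, the training phase could look like this (a sketch assuming TensorFlow's Keras; the layer sizes, optimizer, and epoch count are placeholder choices, and the random `data` array stands in for real trajectories):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# `data` holds many full trajectories, shape (manySequences, steps, 1).
data = np.random.rand(8, 20, 1)  # stand-in for real trajectory data

x_train = data[:, :-1, :]  # inputs:  h1 ... h(n-1)
y_train = data[:, 1:, :]   # targets: h2 ... hn (one step ahead)

train_model = keras.Sequential([
    keras.Input(shape=(None, 1)),            # variable-length sequences
    layers.LSTM(32, return_sequences=True),  # one output per input step
    layers.LSTM(32, return_sequences=True),
    layers.Dense(1),
])
train_model.compile(optimizer='adam', loss='mse')
train_model.fit(x_train, y_train, epochs=2, verbose=0)
```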

At this point, whether you're using stateful=True or stateful=False is not very relevant. (But if True, you always need model.reset_states() before every single epoch and sequence.)

Predicting:

For predicting, you can use stateful=True in the model. This means that when you input h1, it will produce h2. And when you then input h2, it will remember the "current speed" (the state of the model) to predict the correct h3.

(In the training phase, this is not important, because you're inputting the entire sequences at once, so the speed is understood between steps of the long sequences.)

You can see the method reset_states() as set_current_speed_to(0). Use it whenever the step you're going to input is the first step of a sequence.

Then you can do loops like this:

model.reset_states()                 # make "speed" = 0
nextH = someValueWithShape((1,1,1))  # first height, shape (batch, timeSteps, features)

predictions = [nextH]

for i in range(steps):
    nextH = model.predict(nextH)     # predict the next height from the current one
    predictions.append(nextH)

There is an example here, but using two features. One difference is that I use two models, one for training and one for predicting, but you can use only one with return_sequences=True and stateful=True (don't forget to call reset_states() at the beginning of every epoch in training).