
I have a dataset of shape (10000, 128) (samples = 10,000, features = 128) where the class labels are binary. I want to train an RNN on it using the Keras library. I wrote the following code:

tr_C, ts_C, tr_r, ts_r = train_test_split(C, r, train_size=.8)
batch_size = 32

print('Build STATEFUL model...')
model = Sequential()
model.add(LSTM(64, (batch_size, C.shape[0], C.shape[1]), return_sequences=False, stateful=True))

model.add(Dense(1, activation='sigmoid'))
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

print('Training...')
model.fit(tr_C, tr_r,
          batch_size=batch_size, epochs=1, shuffle=False,
          validation_data=(ts_C, ts_r))

But I get this error:

ValueError: Error when checking input: expected lstm_1_input to have 3 dimensions, but got array with shape (8000, 128)

I don't understand this error. How can I fix it? Thank you.

Does your input have a sequential nature? Is it a set of sequences or a single sequence? – Marcin Możejko
@MarcinMożejko Thanks for your reply. I want to treat each row in the dataset as a single sequence. – Kristofer
So does each sequence have a length of 128? – Marcin Możejko
@MarcinMożejko Yes, each row is of length 128. I think I need to reshape somehow, but I don't know how to do so. – Kristofer

2 Answers


You need to do the following steps (combined into a runnable sketch after the list):

  1. Reshape C by:

    C = C.reshape((C.shape[0], C.shape[1], 1))
    
  2. Adjust LSTM layer:

    model.add(LSTM(64, batch_input_shape=(batch_size, C.shape[1], C.shape[2]), return_sequences=False, stateful=True))
    
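Putting both steps together, here is a minimal sketch, assuming C and r are the arrays from your question. One caveat: a stateful=True LSTM requires every split to contain a whole number of batches, and your 2,000-sample validation split does not divide by 32, so the sketch trims it to a multiple of batch_size:

    from keras.models import Sequential
    from keras.layers import LSTM, Dense
    from sklearn.model_selection import train_test_split

    # add a trailing feature axis: (10000, 128) -> (10000, 128, 1)
    C = C.reshape((C.shape[0], C.shape[1], 1))
    tr_C, ts_C, tr_r, ts_r = train_test_split(C, r, train_size=.8)

    batch_size = 32  # 8000 training samples divide evenly by 32

    # 2000 validation samples do not, so trim to a whole number of batches
    n_val = (len(ts_C) // batch_size) * batch_size
    ts_C, ts_r = ts_C[:n_val], ts_r[:n_val]

    model = Sequential()
    # stateful=True requires a fixed batch size, declared via batch_input_shape
    model.add(LSTM(64, batch_input_shape=(batch_size, C.shape[1], C.shape[2]),
                   return_sequences=False, stateful=True))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam',
                  metrics=['accuracy'])

    model.fit(tr_C, tr_r, batch_size=batch_size, epochs=1,
              shuffle=False, validation_data=(ts_C, ts_r))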

I had the same issue, but when I tried to apply the same fix I ran into another error. I am, however, running on 5 GPUs. I have read that you need to make sure your sample count is divisible by both the batch size and the number of GPUs, and I have done that. I have scoured the internet for days and am unable to find anything that fixes the issue. I am running Keras v2.0.9 and TensorFlow v1.1.0.

Variables: attributeTables[0] is a NumPy array of shape (35560, 700); y is a NumPy array of shape (35560,). I have also tried shape (35560, 1) for y, but all that happens is the error changes from "Incompatible shapes: [2540] vs. [508]" to "Incompatible shapes: [2540,1] vs. [508,1]".

So this tells me the issue is only with the targets: the expected batch size is getting multiplied somewhere in the middle of the process for the targets but not for the attributes, causing a mismatch, or at least it only happens during validation; I'm not sure.
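For what it's worth, the numbers in the error line up exactly with the GPU count, as this quick check shows (508 is the batch size my search loop below settles on):

    batchSize = 508   # first value >= 500 where 35560 % (batchSize * 5) == 0
    nGPU = 5
    print(batchSize * nGPU)   # 2540 -> the "[2540,1]" side of the error

So it looks as if each of the 5 replicas is emitting a full 508-sample batch (batch_input_shape pins the batch size per replica) while the targets for the step stay at 508. My guess is that the stateful batch_input_shape and multi_gpu_model's batch splitting are fighting each other, but I haven't been able to verify that.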

Here is the code and error in question.

import numpy as np
from keras.models import Sequential
from keras.utils import multi_gpu_model
from keras.layers import Dense
from keras.layers import LSTM
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt

def baseline_model(): 
    # create model
    print("Building Layers")
    model = Sequential()
    model.add(LSTM(700, batch_input_shape=(batchSize, X.shape[1], X.shape[2]), activation='tanh', return_sequences=False, stateful=True))
    model.add(Dense(1))
    print("Building Parallel model")
    parallel_model = multi_gpu_model(model, gpus=nGPU)
    # Compile model
    #model.compile(loss='mean_squared_error', optimizer='adam')
    print("Compiling Model")
    parallel_model.compile(loss='mae', optimizer='adam', metrics=['accuracy'])
    return parallel_model

def buildModel():
    print("Bulding Model")
    mlp = baseline_model()
    print("Fitting Model")
    return mlp.fit(X_train, y_train, epochs=1, batch_size=batchSize, shuffle=False, validation_data=(X_test, y_test))

print("Scaling")
scaler = StandardScaler()
X_Scaled = scaler.fit_transform(attributeTables[0])

print("Finding Batch Size")
nGPU = 5
batchSize = 500
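# grow batchSize until all 35560 samples divide evenly into batchSize * nGPU chunks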
while len(X_Scaled) % (batchSize * nGPU) != 0:
    batchSize += 1

print("Filling Arrays")
X = X_Scaled.reshape((X_Scaled.shape[0], X_Scaled.shape[1], 1))
X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=.8)


print("Calling buildModel()")
model = buildModel()

print("Ploting History")
plt.plot(model.history['loss'], label='train')
plt.plot(model.history['val_loss'], label='test')
plt.legend()
plt.show()

Here is my complete output.

Beginning OHLC Load
Time took : 7.571000099182129

Making gloabal copies
Time took : 0.0

Using TensorFlow backend.
Scaling
Finding Batch Size
Filling Arrays
Calling buildModel()
Bulding Model
Building Layers
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\model_selection\_split.py:2010: FutureWarning: From version 0.21, test_size will always complement train_size unless both are specified.
  FutureWarning)
Building Parallel model
Compiling Model
Fitting Model
Train on 28448 samples, validate on 7112 samples
Epoch 1/1
Traceback (most recent call last):

  File "<ipython-input-2-74c49f05bfbc>", line 1, in <module>
    runfile('C:/Users/BeeAndTurtle/Documents/Programming/Python/Kraken_API_Market_Prediction/predictor/test.py', wdir='C:/Users/BeeAndTurtle/Documents/Programming/Python/Kraken_API_Market_Prediction/predictor')

  File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 710, in runfile
    execfile(filename, namespace)

  File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 101, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)

  File "C:/Users/BeeAndTurtle/Documents/Programming/Python/Kraken_API_Market_Prediction/predictor/test.py", line 77, in <module>
    model = buildModel()

  File "C:/Users/BeeAndTurtle/Documents/Programming/Python/Kraken_API_Market_Prediction/predictor/test.py", line 57, in buildModel
    return mlp.fit(X_train, y_train, epochs=1, batch_size=batchSize, shuffle=False, validation_data=(X_test, y_test))

  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1631, in fit
    validation_steps=validation_steps)

  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1213, in _fit_loop
    outs = f(ins_batch)

  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py", line 2332, in __call__
    **self.session_kwargs)

  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 778, in run
    run_metadata_ptr)

  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 982, in _run
    feed_dict_string, options, run_metadata)

  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1032, in _do_run
    target_list, options, run_metadata)

  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\client\session.py", line 1052, in _do_call
    raise type(e)(node_def, op, message)

InvalidArgumentError: Incompatible shapes: [2540,1] vs. [508,1]
     [[Node: training/Adam/gradients/loss/concatenate_1_loss/sub_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _class=["loc:@loss/concatenate_1_loss/sub"], _device="/job:localhost/replica:0/task:0/gpu:0"](training/Adam/gradients/loss/concatenate_1_loss/sub_grad/Shape, training/Adam/gradients/loss/concatenate_1_loss/sub_grad/Shape_1)]]
     [[Node: replica_1/sequential_1/dense_1/BiasAdd/_313 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:1", send_device_incarnation=1, tensor_name="edge_1355_replica_1/sequential_1/dense_1/BiasAdd", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]

Caused by op 'training/Adam/gradients/loss/concatenate_1_loss/sub_grad/BroadcastGradientArgs', defined at:
  File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\ipython\start_kernel.py", line 245, in <module>
    main()
  File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\ipython\start_kernel.py", line 241, in main
    kernel.start()
  File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\kernelapp.py", line 477, in start
    ioloop.IOLoop.instance().start()
  File "C:\ProgramData\Anaconda3\lib\site-packages\zmq\eventloop\ioloop.py", line 177, in start
    super(ZMQIOLoop, self).start()
  File "C:\ProgramData\Anaconda3\lib\site-packages\tornado\ioloop.py", line 832, in start
    self._run_callback(self._callbacks.popleft())
  File "C:\ProgramData\Anaconda3\lib\site-packages\tornado\ioloop.py", line 605, in _run_callback
    ret = callback()
  File "C:\ProgramData\Anaconda3\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper
    return fn(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 265, in enter_eventloop
    self.eventloop(self)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\eventloops.py", line 106, in loop_qt5
    return loop_qt4(kernel)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\eventloops.py", line 99, in loop_qt4
    _loop_qt(kernel.app)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\eventloops.py", line 83, in _loop_qt
    app.exec_()
  File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\eventloops.py", line 39, in process_stream_events
    kernel.do_one_iteration()
  File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 298, in do_one_iteration
    stream.flush(zmq.POLLIN, 1)
  File "C:\ProgramData\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 352, in flush
    self._handle_recv()
  File "C:\ProgramData\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 472, in _handle_recv
    self._run_callback(callback, msg)
  File "C:\ProgramData\Anaconda3\lib\site-packages\zmq\eventloop\zmqstream.py", line 414, in _run_callback
    callback(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tornado\stack_context.py", line 277, in null_wrapper
    return fn(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 283, in dispatcher
    return self.dispatch_shell(stream, msg)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 235, in dispatch_shell
    handler(stream, idents, msg)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\kernelbase.py", line 399, in execute_request
    user_expressions, allow_stdin)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\ipkernel.py", line 196, in do_execute
    res = shell.run_cell(code, store_history=store_history, silent=silent)
  File "C:\ProgramData\Anaconda3\lib\site-packages\ipykernel\zmqshell.py", line 533, in run_cell
    return super(ZMQInteractiveShell, self).run_cell(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2698, in run_cell
    interactivity=interactivity, compiler=compiler, result=result)
  File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2808, in run_ast_nodes
    if self.run_code(code, result):
  File "C:\ProgramData\Anaconda3\lib\site-packages\IPython\core\interactiveshell.py", line 2862, in run_code
    exec(code_obj, self.user_global_ns, self.user_ns)
  File "<ipython-input-2-74c49f05bfbc>", line 1, in <module>
    runfile('C:/Users/BeeAndTurtle/Documents/Programming/Python/Kraken_API_Market_Prediction/predictor/test.py', wdir='C:/Users/BeeAndTurtle/Documents/Programming/Python/Kraken_API_Market_Prediction/predictor')
  File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 710, in runfile
    execfile(filename, namespace)
  File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\site\sitecustomize.py", line 101, in execfile
    exec(compile(f.read(), filename, 'exec'), namespace)
  File "C:/Users/BeeAndTurtle/Documents/Programming/Python/Kraken_API_Market_Prediction/predictor/test.py", line 77, in <module>
    model = buildModel()
  File "C:/Users/BeeAndTurtle/Documents/Programming/Python/Kraken_API_Market_Prediction/predictor/test.py", line 57, in buildModel
    return mlp.fit(X_train, y_train, epochs=1, batch_size=batchSize, shuffle=False, validation_data=(X_test, y_test))
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 1608, in fit
    self._make_train_function()
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 990, in _make_train_function
    loss=self.total_loss)
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\legacy\interfaces.py", line 87, in wrapper
    return func(*args, **kwargs)
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\optimizers.py", line 415, in get_updates
    grads = self.get_gradients(loss, params)
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\optimizers.py", line 73, in get_gradients
    grads = K.gradients(loss, params)
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\backend\tensorflow_backend.py", line 2369, in gradients
    return tf.gradients(loss, variables, colocate_gradients_with_ops=True)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 560, in gradients
    grad_scope, op, func_call, lambda: grad_fn(op, *out_grads))
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 368, in _MaybeCompile
    return grad_fn()  # Exit early
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gradients_impl.py", line 560, in <lambda>
    grad_scope, op, func_call, lambda: grad_fn(op, *out_grads))
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\math_grad.py", line 609, in _SubGrad
    rx, ry = gen_array_ops._broadcast_gradient_args(sx, sy)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_array_ops.py", line 411, in _broadcast_gradient_args
    name=name)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op
    op_def=op_def)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2336, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1228, in __init__
    self._traceback = _extract_stack()

...which was originally created as op 'loss/concatenate_1_loss/sub', defined at:
  File "C:\ProgramData\Anaconda3\lib\site-packages\spyder\utils\ipython\start_kernel.py", line 245, in <module>
    main()
[elided 27 identical lines from previous traceback]
  File "C:/Users/BeeAndTurtle/Documents/Programming/Python/Kraken_API_Market_Prediction/predictor/test.py", line 77, in <module>
    model = buildModel()
  File "C:/Users/BeeAndTurtle/Documents/Programming/Python/Kraken_API_Market_Prediction/predictor/test.py", line 55, in buildModel
    mlp = baseline_model()
  File "C:/Users/BeeAndTurtle/Documents/Programming/Python/Kraken_API_Market_Prediction/predictor/test.py", line 29, in baseline_model
    parallel_model.compile(loss='mae', optimizer='adam', metrics=['accuracy'])
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 860, in compile
    sample_weight, mask)
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\engine\training.py", line 460, in weighted
    score_array = fn(y_true, y_pred)
  File "C:\ProgramData\Anaconda3\lib\site-packages\keras\losses.py", line 13, in mean_absolute_error
    return K.mean(K.abs(y_pred - y_true), axis=-1)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\math_ops.py", line 821, in binary_op_wrapper
    return func(x, y, name=name)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 2627, in _sub
    result = _op_def_lib.apply_op("Sub", x=x, y=y, name=name)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 768, in apply_op
    op_def=op_def)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 2336, in create_op
    original_op=self._default_original_op, op_def=op_def)
  File "C:\ProgramData\Anaconda3\lib\site-packages\tensorflow\python\framework\ops.py", line 1228, in __init__
    self._traceback = _extract_stack()

InvalidArgumentError (see above for traceback): Incompatible shapes: [2540,1] vs. [508,1]
     [[Node: training/Adam/gradients/loss/concatenate_1_loss/sub_grad/BroadcastGradientArgs = BroadcastGradientArgs[T=DT_INT32, _class=["loc:@loss/concatenate_1_loss/sub"], _device="/job:localhost/replica:0/task:0/gpu:0"](training/Adam/gradients/loss/concatenate_1_loss/sub_grad/Shape, training/Adam/gradients/loss/concatenate_1_loss/sub_grad/Shape_1)]]
     [[Node: replica_1/sequential_1/dense_1/BiasAdd/_313 = _Recv[client_terminated=false, recv_device="/job:localhost/replica:0/task:0/cpu:0", send_device="/job:localhost/replica:0/task:0/gpu:1", send_device_incarnation=1, tensor_name="edge_1355_replica_1/sequential_1/dense_1/BiasAdd", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/cpu:0"]()]]