I'm trying a very simple example of a TensorFlow RNN. In that example, I use dynamic_rnn. The code is as follows:
import tensorflow as tf

data = tf.placeholder(tf.float32, [None, 10, 1])  # batch size, sequence length, input dimension
target = tf.placeholder(tf.float32, [None, 11])
num_hidden = 24
cell = tf.nn.rnn_cell.LSTMCell(num_hidden, state_is_tuple=True)
val, _ = tf.nn.dynamic_rnn(cell, data, dtype=tf.float32)
val = tf.transpose(val, [1, 0, 2])  # [max_time, batch_size, num_hidden]
last = tf.gather(val, int(val.get_shape()[0]) - 1)  # output at the last time step
weight = tf.Variable(tf.truncated_normal([num_hidden, int(target.get_shape()[1])]))
bias = tf.Variable(tf.constant(0.1, shape=[target.get_shape()[1]]))
prediction = tf.nn.softmax(tf.matmul(last, weight) + bias)
cross_entropy = -tf.reduce_sum(target * tf.log(tf.clip_by_value(prediction, 1e-10, 1.0)))
optimizer = tf.train.AdamOptimizer()
minimize = optimizer.minimize(cross_entropy)
mistakes = tf.not_equal(tf.argmax(target, 1), tf.argmax(prediction, 1))
error = tf.reduce_mean(tf.cast(mistakes, tf.float32))
Actually, the code is taken from this tutorial.
The input to this RNN is a sequence of binary digits, each wrapped in a one-element array. For example, a sequence looks like: [[1],[0],[0],[1],[1],[0],[1],[1],[1],[0]]
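Assuming (as in that tutorial) that the task is to predict how many ones a sequence contains, which matches the 11 target classes for length-10 sequences, the training data can be generated along these lines. This is my own sketch, not the tutorial's code, and make_batch is a hypothetical helper:

import numpy as np

def make_batch(batch_size, seq_length=10, num_classes=11):
    # Each example: seq_length binary digits, each wrapped in a one-element array
    x = np.random.randint(2, size=(batch_size, seq_length, 1))
    # One-hot target whose index is the number of ones in the sequence
    y = np.zeros((batch_size, num_classes))
    y[np.arange(batch_size), x.sum(axis=(1, 2))] = 1
    return x, y

# Rough training loop against the graph above
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(1000):
        x, y = make_batch(32)
        sess.run(minimize, {data: x, target: y})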
The shape of the input is [None, 10, 1], corresponding to batch size, sequence length, and embedding size, respectively. Because dynamic_rnn accepts variable input shapes, I changed the code as follows:
data = tf.placeholder(tf.float32, [None, None, 1])
Basically, I want to use variable-length sequences (the same length within a batch, of course, but different lengths across batches). However, it throws this error:
Traceback (most recent call last):
File "rnn-lstm-variable-length.py", line 48, in <module>
last = tf.gather(val, int(val.get_shape()[0]) - 1)
TypeError: __int__ returned non-int (type NoneType)
I understand that the second dimension of data is None, so after the transpose val.get_shape()[0] is also None, which cannot be converted with int(). However, I believe there must be a way to overcome this, because an RNN accepts variable-length inputs in general.
How can I do it?
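For reference, the closest thing I have to a lead is reading the time dimension dynamically with tf.shape instead of the static get_shape; I'm not sure whether this is the idiomatic fix, but a minimal sketch would be:

# val is [max_time, batch_size, num_hidden] after the transpose.
# tf.shape(val) is evaluated at run time, so it works even when the
# static shape contains None.
last = tf.gather(val, tf.shape(val)[0] - 1)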
Edit: a suggested alternative is sequence = tf.placeholder(tf.float32, [None, max_length, frame_size]), but that means the sequence length is fixed. – lenhhoxung
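If padding to a fixed max_length does turn out to be necessary, my understanding is that dynamic_rnn can at least be told each example's true length via its sequence_length argument, so the padded steps are skipped. A sketch, where length is an extra placeholder I'm introducing and max_length/frame_size come from the comment above:

data = tf.placeholder(tf.float32, [None, max_length, frame_size])
length = tf.placeholder(tf.int32, [None])  # true length of each padded sequence
# dynamic_rnn stops stepping each example at its true length; outputs past
# that point are zero and the final state is copied through.
val, _ = tf.nn.dynamic_rnn(cell, data, sequence_length=length, dtype=tf.float32)
# Note: with padding, each example's last relevant output is at index
# length - 1, not max_length - 1.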