I am trying to build an RNN classifier whose input is 3 different time series, each with 3 dimensions, and the series can have different lengths. To handle that, I modeled 3 RNNs and connected them in the final layer.

However, I am getting the following error message:

ValueError: Variable rnn/multi_rnn_cell/cell_0/basic_lstm_cell/kernel already exists, disallowed. Did you mean to set reuse=True in VarScope?

import tensorflow as tf

timeSeries = ['outbound', 'rest', 'return']
n_steps = {
    'outbound': 3159,
    'rest': 3603,
    'return': 3226
}
n_inputs = 3
n_neurons = 20
n_outputs = 2
n_layers = 1

learning_rate = 0.001


y = tf.placeholder(tf.int32, [None], name="y")
X = {}
seq_length = {}
for timeSeriesName in timeSeries:
    with tf.name_scope(timeSeriesName + "_placeholders") as scope:
        X[timeSeriesName] = tf.placeholder(tf.float32, [None, n_steps[timeSeriesName], n_inputs])
        seq_length[timeSeriesName] = tf.placeholder(tf.int32, [None])


outputs = {}
states = {}
top_layer_h_state = {}
lstm_cells = {}
multi_cell = {}
finalRNNlayers = []
for timeSeriesName in timeSeries:
    with tf.name_scope(timeSeriesName) as scope:
        lstm_cells[timeSeriesName] = [tf.contrib.rnn.BasicLSTMCell(num_units=n_neurons)
                                      for layer in range(n_layers)]
        multi_cell[timeSeriesName] = tf.contrib.rnn.MultiRNNCell(lstm_cells[timeSeriesName])
        outputs[timeSeriesName], states[timeSeriesName] = tf.nn.dynamic_rnn(
            multi_cell[timeSeriesName], X[timeSeriesName], dtype=tf.float32,
            sequence_length=seq_length[timeSeriesName])
        top_layer_h_state[timeSeriesName] = states[timeSeriesName][-1][1]
        finalRNNlayers.append(top_layer_h_state[timeSeriesName])

with tf.name_scope("3Stages_mixed") as scope:
    concat3_top_layer_h_states = tf.concat(finalRNNlayers, axis=1)
    logits = tf.layers.dense(concat3_top_layer_h_states, n_outputs, name="softmax")

I want each time series to have independent LSTM cells with their own weights, so reuse is not an option. How should this error be fixed?

The full traceback of the error can be found here.


1 Answer


Change tf.name_scope(timeSeriesName) to tf.variable_scope(timeSeriesName). The difference between tf.name_scope and tf.variable_scope is discussed in this question. What matters in your case is that tf.get_variable ignores name scopes, and LSTM cell parameters are created with exactly tf.get_variable.

Sample code to see the difference:

import tensorflow as tf

state = tf.zeros([32, 6])

input1 = tf.placeholder(tf.float32, [32, 10])
input2 = tf.placeholder(tf.float32, [32, 10])

# Works ok:
with tf.variable_scope('scope-1'):
  tf.nn.rnn_cell.BasicLSTMCell(3, state_is_tuple=False)(input1, state)
with tf.variable_scope('scope-2'):
  tf.nn.rnn_cell.BasicLSTMCell(3, state_is_tuple=False)(input2, state)

# Fails:
with tf.name_scope('name-1'):
  tf.nn.rnn_cell.BasicLSTMCell(3, state_is_tuple=False)(input1, state)
with tf.name_scope('name-2'):
  tf.nn.rnn_cell.BasicLSTMCell(3, state_is_tuple=False)(input2, state)