
I am trying to run two RNNs at the same time and concatenate their outputs, by defining a separate variable scope for each rnn_cell.LSTMCell. Why am I getting this "Variable already exists" error?

ValueError: Variable hidden/RNN/LSTMCell/W_0 already exists, disallowed. Did you mean to set reuse=True in VarScope?

Why is the variable named "hidden/RNN/LSTMCell/W_0" and not "hidden/forward_lstm_cell/RNN/LSTMCell/W_0"?

    with tf.variable_scope('hidden', reuse=reuse): #reuse=None during training
        with tf.variable_scope('forward_lstm_cell'):
            lstm_fw_cell = tf.nn.rnn_cell.LSTMCell(num_units=self.num_hidden, use_peepholes=False, 
                                                initializer=tf.random_uniform_initializer(-0.003, 0.003),
                                                state_is_tuple=True)
            if not reuse:
                lstm_fw_cell = tf.nn.rnn_cell.DropoutWrapper(cell=lstm_fw_cell, input_keep_prob=0.7)

        with tf.variable_scope('backward_lstm_cell'):
            lstm_bw_cell = tf.nn.rnn_cell.LSTMCell(num_units=self.num_hidden, use_peepholes=False, 
                                                forget_bias=0.0, 
                                                initializer=tf.random_uniform_initializer(-0.003, 0.003),
                                                state_is_tuple=True)
            if not reuse:
                lstm_bw_cell = tf.nn.rnn_cell.DropoutWrapper(cell=lstm_bw_cell, input_keep_prob=0.7)

        with tf.name_scope("forward_lstm"):
            outputs_fw, output_states_fw  = tf.nn.dynamic_rnn(
                cell=lstm_fw_cell,
                inputs=embed_inputs_fw,
                dtype=tf.float32,
                sequence_length=self.seq_len_l
                )

        with tf.name_scope("backward_lstm"):
            outputs_bw, output_states_bw  = tf.nn.dynamic_rnn(
                cell=lstm_bw_cell,
                inputs=embed_inputs_bw,
                dtype=tf.float32,
                sequence_length=self.seq_len_r
                )

1 Answer


Use tf.variable_scope instead of tf.name_scope around the two dynamic_rnn calls. tf.name_scope does not add a prefix to variables created with tf.get_variable(), only tf.variable_scope does. The LSTMCell constructor itself doesn't create any variables; they are created when tf.nn.dynamic_rnn actually runs the cell, inside whatever variable scope is active at that point. Since "forward_lstm" and "backward_lstm" are only name scopes, both dynamic_rnn calls try to create hidden/RNN/LSTMCell/W_0, which is why the error message doesn't mention "forward_lstm_cell" and why the second call collides with the first.
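
A minimal sketch of the fix, keeping the question's names (embed_inputs_fw, embed_inputs_bw, self.seq_len_l, self.seq_len_r, lstm_fw_cell, lstm_bw_cell) and the TF 1.x API; only the scopes around the dynamic_rnn calls change, and the final concat line is just one way to combine the outputs as the question describes:

    import tensorflow as tf

    with tf.variable_scope('hidden', reuse=reuse):
        # ... build lstm_fw_cell and lstm_bw_cell exactly as in the question ...

        # variable_scope (unlike name_scope) prefixes tf.get_variable, so the
        # weights created inside each dynamic_rnn call get distinct names:
        # "hidden/forward_lstm/RNN/LSTMCell/W_0" and
        # "hidden/backward_lstm/RNN/LSTMCell/W_0" -- no collision.
        with tf.variable_scope("forward_lstm"):
            outputs_fw, output_states_fw = tf.nn.dynamic_rnn(
                cell=lstm_fw_cell,
                inputs=embed_inputs_fw,
                dtype=tf.float32,
                sequence_length=self.seq_len_l)

        with tf.variable_scope("backward_lstm"):
            outputs_bw, output_states_bw = tf.nn.dynamic_rnn(
                cell=lstm_bw_cell,
                inputs=embed_inputs_bw,
                dtype=tf.float32,
                sequence_length=self.seq_len_r)

        # Concatenate the two output tensors along the feature axis
        # (TF >= 1.0 signature of tf.concat).
        outputs = tf.concat([outputs_fw, outputs_bw], axis=2)

Alternatively, passing scope="forward_lstm" / scope="backward_lstm" directly to tf.nn.dynamic_rnn has the same effect, since that argument is a variable scope.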