I'm using TensorFlow to build a simple autoencoder model, but there's a strange bug that I can't diagnose. I have a loss function that looks like this:
def loss_func(x, y):
    return 0.5 * tf.reduce_mean(tf.pow(x - y, 2))
The total loss is then calculated by:
return self.loss_func(x, input) + self.reg_fac * reg
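To make the pieces concrete, here is a stripped-down, self-contained sketch of the total-loss computation with made-up toy values (the names match my code; the numbers are just placeholders):

import tensorflow as tf

def loss_func(x, y):
    # average of squared differences, so this term is always >= 0
    return 0.5 * tf.reduce_mean(tf.pow(x - y, 2))

# toy stand-ins for a batch and its reconstruction (made-up numbers)
x = tf.constant([[0.0, 1.0], [2.0, 3.0]])
output = tf.constant([[0.1, 0.9], [2.2, 2.8]])

reg = tf.constant(0.25)  # placeholder for the real regularization term
reg_fac = 0.5

# with reg >= 0 and reg_fac >= 0 this sum should never go negative
total_loss = loss_func(x, output) + reg_fac * reg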
Now the problem is: when reg_fac is set to 0, the loss comes back positive and the model seems to train well, but when I increase reg_fac the loss decreases, reaches negative values, and keeps decreasing. Since both terms are averages of squares, I would expect the total to stay non-negative for any reg_fac >= 0.
reg is calculated like this for each autoencoder used:
return tf.reduce_mean(tf.pow(self.w1, 2)) + tf.reduce_mean(tf.pow(self.w2, 2))
where w1 is the encoder weights and w2 is the decoder weights.
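Again as a sanity check, this is the reg computation in isolation, with toy weight matrices standing in for the real variables self.w1 and self.w2:

import tensorflow as tf

# toy stand-ins for self.w1 (encoder weights) and self.w2 (decoder weights)
w1 = tf.constant([[0.5, -0.5], [0.2, 0.1]])
w2 = tf.constant([[-0.3, 0.3], [0.4, -0.1]])

# each term is a mean of squares, so reg itself can never be negative
reg = tf.reduce_mean(tf.pow(w1, 2)) + tf.reduce_mean(tf.pow(w2, 2))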
I know it's a stupid bug, but I can't find it.
My complete code is uploaded here: https://github.com/javaWarrior/dltest
Important files:
ae.py: the autoencoder model,
sae.py: the stacked autoencoder model,
mew.py: tests the model on SIFT features extracted from NUS-WIDE images,
nus_wide.py: just an interface for the NUS-WIDE dataset