
I am trying to build an autoencoder with only one layer:

from keras import backend as K

def cost2(y_true, y_pred):
    print "shapes:", model.get_weights()[0].shape
    yy = K.dot( y_pred, model.get_weights()[0].T )
    return np.sum((y_true - yy)**2)

x = Input(shape=(original_dim,))
y = Dense(latent_dim)(x)
model = Model(inputs=x, outputs=y)
model.summary()
model.compile(optimizer='adagrad', loss=cost2)

This gives me error:

Layer (type)                 Output Shape              Param #   
=================================================================
input_1 (InputLayer)         (None, 784)               0         
_________________________________________________________________
dense_1 (Dense)              (None, 2)                 1570      
=================================================================
Total params: 1,570
Trainable params: 1,570
Non-trainable params: 0
_________________________________________________________________

shapes: (784, 2)

Traceback (most recent call last):
  File "vae_kears_gidital_mnist3.py", line 45, in <module>
    model.compile(optimizer='adagrad', loss=cost2)
  File "/Users/asgharrazavi/anaconda/lib/python2.7/site-packages/keras/engine/training.py", line 830, in compile
    sample_weight, mask)
  File "/Users/asgharrazavi/anaconda/lib/python2.7/site-packages/keras/engine/training.py", line 429, in weighted
    score_array = fn(y_true, y_pred)
  File "vae_kears_gidital_mnist3.py", line 18, in cost2
    yy = K.dot( y_pred, model.get_weights()[0].T )
  File "/Users/asgharrazavi/anaconda/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 1048, in dot
    if ndim(x) is not None and (ndim(x) > 2 or ndim(y) > 2):
  File "/Users/asgharrazavi/anaconda/lib/python2.7/site-packages/keras/backend/tensorflow_backend.py", line 606, in ndim
    dims = x.get_shape()._dims
AttributeError: 'numpy.ndarray' object has no attribute 'get_shape'

I am simply trying to multiply the output of the model by the transposed weights of the model to get back to the input dimensions. Any ideas?


1 Answer


Your cost function should return a Keras tensor, not a NumPy ndarray. Inside a custom loss you should use only keras.backend functions, or your specific backend's functions (e.g. tf.something): K.sum instead of np.sum, and K.transpose on a weight tensor instead of .T on the NumPy array that model.get_weights() returns.

This is the cause of the error you mentioned in the question, but more importantly you are not building the model the Keras way of creating autoencoders. In Keras your model would be created with two layers (an encoder and a decoder), with the decoder sharing the encoder's weights via a transpose, and trained with a standard MSE loss. I suggest you read this post on the Keras blog and take a look at this issue.
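As a minimal sketch of what I mean (assuming standalone Keras 2 with the TensorFlow backend, and using a bias-free encoder for simplicity): make the reconstruction part of the model itself by reusing the encoder's kernel tensor, transposed, in a Lambda layer. Then the model's output already has the input dimensions and you can use the built-in 'mse' loss instead of a custom one:

```python
import numpy as np
from keras import backend as K
from keras.layers import Input, Dense, Lambda
from keras.models import Model

original_dim, latent_dim = 784, 2

x = Input(shape=(original_dim,))
encoder = Dense(latent_dim, use_bias=False)
z = encoder(x)

# Tied-weights decoder: encoder.kernel is a backend tensor (unlike
# model.get_weights()[0], which is a NumPy array), so K.dot/K.transpose
# work on it inside the graph.
x_hat = Lambda(lambda t: K.dot(t, K.transpose(encoder.kernel)))(z)

autoencoder = Model(inputs=x, outputs=x_hat)
autoencoder.compile(optimizer='adagrad', loss='mse')

# Train with the inputs as targets, as usual for an autoencoder.
data = np.random.rand(8, original_dim).astype('float32')
loss = autoencoder.train_on_batch(data, data)
```

Because the output shape is (None, 784), the targets you feed (the inputs themselves) match it, and no custom loss is needed at all.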