I am trying to create an LSTM variational autoencoder with Keras. However, it throws a ValueError at the end of the first epoch:
ValueError: operands could not be broadcast together with shapes (32,20) (20,20) (32,20)
The model input has shape (sample_size, 20, 31); the model is defined below.
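For context, arrays with the same shapes can be mocked like this (the values are arbitrary dummies, only the shapes matter; lag = 20 and 31 features are read off the input shape above):

import numpy as np

lag = 20                                             # timesteps, from the input shape above
data = np.random.rand(100000, 31).astype('float32')  # dummy stand-in: data.shape[1] == 31 features
train = np.random.rand(1000, lag, data.shape[1]).astype('float32')  # dummy model input, (sample_size, 20, 31)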
The imports and the sampling function:

from keras.layers import Input, LSTM, Dense, Lambda, RepeatVector
from keras.models import Model
from keras.losses import mse
from keras import backend as K

def sampling(args):
    z_mean, z_log_var = args
    batch = K.shape(z_mean)[0]
    dim = K.int_shape(z_mean)[1]
    # by default, random_normal has mean=0 and std=1.0
    epsilon = K.random_normal(shape=(batch, dim))
    return z_mean + K.exp(0.5 * z_log_var) * epsilon
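This is just the usual reparameterisation trick; in plain numpy it amounts to (illustrative values only):

import numpy as np

z_mean_np = np.zeros((4, 60))                        # 4 is an arbitrary batch size, 60 the latent size
z_log_var_np = np.zeros((4, 60))
eps = np.random.normal(size=z_mean_np.shape)         # epsilon ~ N(0, 1)
z_np = z_mean_np + np.exp(0.5 * z_log_var_np) * eps  # shape (4, 60)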
The encoder part:

inputs = Input(shape=(lag, data.shape[1]), name='encoder_input')
x = LSTM(30, activation='relu', return_sequences=True)(inputs)
x = LSTM(60, activation='relu')(x)
z_mean = Dense(60, name='z_mean')(x)
z_log_var = Dense(60, name='z_log_var')(x)
z_temp = Lambda(sampling, output_shape=(60,), name='z')([z_mean, z_log_var])
z = RepeatVector(lag)(z_temp)   # repeat the latent vector so the decoder receives a sequence of length lag
encoder = Model(inputs, [z_mean, z_log_var, z], name='encoder')
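For reference, the encoder outputs should have these shapes (a quick check, not part of the model itself):

encoder.summary()
# z_mean    -> (None, 60)
# z_log_var -> (None, 60)
# z         -> (None, 20, 60)   after RepeatVector(lag) with lag = 20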
The decoder part:

latent_inputs = Input(shape=(lag, 60), name='z_sampling')
x_2 = LSTM(60, activation='relu', return_sequences=True)(latent_inputs)
x_2 = LSTM(data.shape[1], activation='relu', return_sequences=True)(x_2)
decoder = Model(latent_inputs, x_2, name='decoder')

outputs = decoder(encoder(inputs)[2])   # index 2 selects z
vae = Model(inputs, outputs)
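As a sanity check, the decoder (and therefore the VAE) should map back to the input shape:

print(decoder.output_shape)   # expected: (None, 20, 31), assuming data.shape[1] == 31
print(vae.output_shape)       # expected: (None, 20, 31), same as the encoder input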
And the loss and fit part:
reconstruction_loss = mse(inputs, outputs)   # keras mse averages over the last axis, leaving one value per timestep
kl_loss = 1 + z_log_var - K.square(z_mean) - K.exp(z_log_var)
kl_loss = K.mean(kl_loss)
kl_loss *= -0.1
vae_loss = reconstruction_loss + kl_loss
vae.add_loss(vae_loss)
vae.compile(optimizer='adam')
vae.fit(train, epochs=100)
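As far as I can tell, the two loss terms have these shapes (printed right after building them above):

print(K.int_shape(reconstruction_loss))   # (None, 20): mse leaves one value per timestep
print(K.int_shape(kl_loss))               # ():         scalar after K.mean
print(K.int_shape(vae_loss))              # (None, 20): broadcast of the two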
This produces the following error:
Epoch 1/100
632256/632276 [============================>.] - ETA: 0s - loss: 0.0372
ValueError: operands could not be broadcast together with shapes (32,20) (20,20) (32,20)
If there is a shape error, how does the model get through all of the previous training steps? That is my main question. Thanks for your answers.
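One observation that may be relevant: fit uses the default batch_size=32, and 632276 is not a multiple of 32, so the very last batch of the epoch is a partial one:

print(632276 % 32)   # -> 20, which is exactly the first dimension of the (20,20) operand in the error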