
I'd like to find a similar image for a real input image using a PGGAN generator, based on encoder-generator training. Below is my implementation:

import pickle
import tensorflow as tf
from keras.layers import Input, Conv2D, Flatten, Dense
from keras.models import Model
from keras.losses import mse
from keras import backend as K

# load pre-trained generator
sess = tf.InteractiveSession()
with open('network-snapshot-final.pkl', 'rb') as file:
    G, D, Gs = pickle.load(file)

# network parameters
image_size = 1024
input_shape = (image_size, image_size, 1)
batch_size = 8
kernel_size = 3
filters = 16
latent_dim = 512
epochs = 100

# build an encoder
inputs = Input(shape=input_shape, name='encoder_input')
x = inputs
for i in range(10):
    filters *= 2
    x = Conv2D(filters=filters,
               kernel_size=kernel_size,
               activation='relu',
               strides=2,
               padding='same')(x)

# generate latent vector
x = Flatten()(x)
x = Dense(2048, activation='relu')(x)
z_sim = Dense(latent_dim, name='z_sim')(x)

encoder = Model(inputs, z_sim, name='encoder')

# define a custom loss function
def loss_enc(x, z_sim):
    im_g = tf.convert_to_tensor(Gs.run(z_sim.eval(), labels))
    im_g2 = tf.reshape(im_g, [-1, 1024, 1024, 1])
    los = mse(K.flatten(x), K.flatten(im_g2))
    return los

Compiling the model as shown below produces the following error:

encoder.compile(optimizer='rmsprop', loss=loss_enc)

InvalidArgumentError: You must feed a value for placeholder tensor 'encoder_input_19' with dtype float and shape [?,1024,1024,1] [[{{node encoder_input_19}} = Placeholderdtype=DT_FLOAT, shape= [?,1024,1024,1], _device="/job:localhost/replica:0/task:0/device:GPU:0"]] [[{{node z_sim_12/BiasAdd/_713}} = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_127_z_sim_12/BiasAdd", tensor_type=DT_FLOAT, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

How can I correctly implement the loss function for this purpose?

Comment (keineahnung2345): How do you define Gs and labels?

1 Answer


First, wrap your computation in a closure: the outer function takes the extra arguments you need, and the inner function has the (y_true, y_pred) signature that Keras expects:

def loss_enc(x, z_sim):
    def loss(y_true, y_pred):
        # compute the loss from x and z_sim here and store it in `result`
        return result
    return loss
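
As a minimal, self-contained illustration of the pattern (not specific to PGGAN; the weight argument here is purely hypothetical), this is how an extra parameter gets captured by the outer function while the inner function keeps the signature Keras actually calls:

from keras import backend as K

# the outer function captures the extra argument; Keras only ever sees `loss`
def weighted_mse(weight):
    def loss(y_true, y_pred):
        return weight * K.mean(K.square(y_true - y_pred), axis=-1)
    return loss

# model.compile(optimizer='rmsprop', loss=weighted_mse(0.5))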

When you compile the model:

encoder.compile(optimizer='rmsprop', loss=loss_enc(x, z_sim))
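
For your specific case there is an extra catch: Gs.run() and z_sim.eval() execute eagerly on numpy arrays, so they cannot be called inside a loss function, which has to be built from symbolic graph ops; that is what triggers the placeholder error. Below is a rough, untested sketch of how the closure could be wired symbolically instead. It assumes Gs exposes get_output_for(...) as the tfutil.Network class in the official progressive_growing_of_gans code does, and the names images and labels, as well as the dummy-target trick, are assumptions of this sketch, not part of your code:

import numpy as np

def loss_enc(x, labels):
    def loss(y_true, y_pred):
        # y_pred is z_sim; the encoder input tensor x is captured by the closure,
        # so y_true is only a dummy target here
        im_g = Gs.get_output_for(y_pred, labels)  # symbolic graph op, unlike Gs.run
        im_g = tf.reshape(im_g, [-1, 1024, 1024, 1])
        return K.mean(K.square(K.flatten(x) - K.flatten(im_g)))
    return loss

encoder.compile(optimizer='rmsprop', loss=loss_enc(inputs, labels))
# dummy targets shaped like z_sim keep Keras' target placeholder satisfied
encoder.fit(images, np.zeros((len(images), latent_dim)),
            batch_size=batch_size, epochs=epochs)

Since the generator is frozen anyway, another option is to wrap Gs in a Lambda layer on top of the encoder and train the combined model with a plain 'mse' loss against the input images, which avoids the custom loss entirely.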