
I have an autoencoder model in TensorFlow 1.x (not Keras) and I am trying to split the model into an encoder and a decoder after training.

Both functions are in the same scope and I have 3 placeholders:

self.X = tf.placeholder(shape=[None, vox_res64, vox_res64, vox_res64, 1], dtype=tf.float32)
self.Z = tf.placeholder(shape=[None,500], dtype=tf.float32)

self.Y = tf.placeholder(shape=[None, vox_rex256, vox_rex256, vox_rex256, 1], dtype=tf.float32)

with tf.variable_scope('aeu'):
    self.lfc = self.encoder(self.X)
    self.Y_pred, self.Y_pred_modi = self.decoder(self.lfc)

The encoder and decoder are as follows:

    def encoder(self,X):
        with tf.device('/gpu:'+GPU0):
            X = tf.reshape(X,[-1, vox_res64,vox_res64,vox_res64,1])
            c_e = [1,64,128,256,512]
            s_e = [0,1 , 1, 1, 1]
            layers_e = []
            layers_e.append(X)
            for i in range(1,5,1):
                layer = tools.Ops.conv3d(layers_e[-1],k=4,out_c=c_e[i],str=s_e[i],name='e'+str(i))
                layer = tools.Ops.maxpool3d(tools.Ops.xxlu(layer, label='lrelu'), k=2,s=2,pad='SAME')
                layers_e.append(layer)

            ### fc
            [_, d1, d2, d3, cc] = layers_e[-1].get_shape()
            d1 = int(d1); d2 = int(d2); d3 = int(d3); cc = int(cc)
            lfc = tf.reshape(layers_e[-1], [-1, d1 * d2 * d3 * cc])
            lfc = tools.Ops.xxlu(tools.Ops.fc(lfc, out_d=500, name='fc1'), label='relu')
            print(d1)
            print(cc)
        return lfc


    def decoder(self,Z):
        with tf.device('/gpu:'+GPU0):


            lfc = tools.Ops.xxlu(tools.Ops.fc(Z, out_d=2*2*2*512, name='fc2'), label='relu')

            lfc = tf.reshape(lfc, [-1,2,2,2,512])

            c_d = [0,256,128,64]
            s_d = [0,2,2,2]
            layers_d = []
            layers_d.append(lfc)
            for j in range(1,4,1):

                layer = tools.Ops.deconv3d(layers_d[-1],k=4,out_c=c_d[j],str=s_d[j],name='d'+str(len(layers_d)))

                layer = tools.Ops.xxlu(layer, label='relu')
                layers_d.append(layer)
            ###
            layer = tools.Ops.deconv3d(layers_d[-1],k=4,out_c=1,str=2,name='dlast')
            print("****************************",layer)
            ###
            Y_sig = tf.nn.sigmoid(layer)
            Y_sig_modi = tf.maximum(Y_sig,0.01)

        return Y_sig, Y_sig_modi

When I try to use the model after training:


 X = tf.get_default_graph().get_tensor_by_name("Placeholder:0")
 Z = tf.get_default_graph().get_tensor_by_name("Placeholder_1:0")
 Y_pred = tf.get_default_graph().get_tensor_by_name("aeu/Sigmoid:0")
 lfc = tf.get_default_graph().get_tensor_by_name("aeu/Relu:0")


Fetching the latent code works fine:

 lc = sess.run(lfc, feed_dict={X: x_sample})

Now, when I want to use the latent code as input to the decoder, I get an error telling me I have to feed X (the placeholder):

 y_pred = sess.run(Y_pred, feed_dict={Z: lc})

How can I split the model into an encoder and a decoder? When I searched, I only found Keras examples.


2 Answers


The first thing I notice is that you haven't passed self.Z into the decoder anywhere, so TensorFlow can't automatically link that placeholder with the Z you used previously.

There are a couple of things you can do to fix this. The easiest is to recreate the decoder graph, but set reuse=True when you call variable_scope:


    with tf.variable_scope('aeu',reuse=True):
        self.new_Y, self.new_Y_modi = self.decoder(self.Z)

    y_pred = sess.run(self.new_Y, feed_dict={self.Z: lc})

This is probably the easiest method. You may be asked to fill in placeholder X in this case as well, but you can just feed it an empty array. Normally TensorFlow won't ask for it unless there is some control dependency tying the two together.
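A minimal, self-contained sketch of this approach, with tiny dense layers standing in for the real encoder/decoder (the layer sizes and names here are illustrative, not the original model's), written against tf.compat.v1 so it also runs under TF 2.x:

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

def decoder(z):
    # stand-in for the real decoder: a single dense layer
    return tf.layers.dense(z, 4, name='dec')

X = tf.placeholder(tf.float32, [None, 4])
Z = tf.placeholder(tf.float32, [None, 3])

# original graph: encoder -> decoder, both under scope 'aeu'
with tf.variable_scope('aeu'):
    lfc = tf.layers.dense(X, 3, name='enc')   # stand-in encoder
    Y_pred = decoder(lfc)

# second decoder head: same weights (reuse=True),
# but it reads from the Z placeholder instead of the encoder output
with tf.variable_scope('aeu', reuse=True):
    new_Y = decoder(Z)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    x_sample = np.ones((1, 4), np.float32)
    lc = sess.run(lfc, feed_dict={X: x_sample})      # encode only
    y1 = sess.run(new_Y, feed_dict={Z: lc})          # decode only; X not needed
    y2 = sess.run(Y_pred, feed_dict={X: x_sample})   # full forward pass
```

Because the two decoder heads share variables, decoding the fetched latent code through new_Y gives the same result as the full encoder-decoder pass.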


I found out how to split the model. I will post the answer in case anybody else wants to know.

My mistakes were:

1: I did not pass self.Z to the decoder.

2: For the following line:

y_pred = sess.run(Y_pred, feed_dict={Z: lc})

This line is in a different file; after I trained my model, TensorFlow does not know what [ Z ] refers to, so you have to use the same variable you retrieved the tensor with, as follows:

 lfc = tf.get_default_graph().get_tensor_by_name("aeu/Relu:0")

I named it [ lfc ], not [ Z ], so changing the code as follows solved the issue:

y_pred = sess.run(Y_pred, feed_dict={lfc: lc})
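This works because in TF 1.x, feed_dict accepts any tensor in the graph, not just placeholders: feeding the intermediate tensor cuts the graph at that point, so nothing upstream of it (here, X) needs to be fed. A minimal sketch with toy dense layers in place of the real model (the tensor names 'aeu/Relu:0' and 'aeu/Sigmoid:0' arise here because the relu/sigmoid ops are the first such ops created inside the 'aeu' scope, mirroring the question):

```python
import numpy as np
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()

X = tf.placeholder(tf.float32, [None, 4], name='X')
with tf.variable_scope('aeu'):
    # stand-in encoder and decoder
    code = tf.nn.relu(tf.layers.dense(X, 3, name='fc1'))       # latent code
    Y_out = tf.nn.sigmoid(tf.layers.dense(code, 4, name='dlast'))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    g = tf.get_default_graph()
    lfc = g.get_tensor_by_name('aeu/Relu:0')       # encoder output tensor
    Y_pred = g.get_tensor_by_name('aeu/Sigmoid:0') # decoder output tensor

    # encoder only: feed X, fetch the latent code
    lc = sess.run(lfc, feed_dict={X: np.ones((1, 4), np.float32)})
    # decoder only: feed the intermediate tensor; X is no longer required
    y_pred = sess.run(Y_pred, feed_dict={lfc: lc})
```

The key is that the feed_dict key must be the tensor object (or exact name) of the node being overridden, which is why feeding lfc works while feeding an unrelated Z placeholder raised the "must feed X" error.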