
While using a pre-trained VGG19 with TimeDistributed in Keras, I get the following error:

TypeError: can only concatenate tuple (not "list") to tuple

This is on Windows, with Keras and Python 3.6.

def build_vgg(self):
    img = Input(shape=(self.n_frames, self.img_rows, self.img_cols, 3))

    # Get the vgg network from Keras applications
    vgg = VGG19(weights="imagenet", include_top=False,  input_shape=(self.img_rows, self.img_cols, 3))

    # Output the first three pooling layers
    outputs = [vgg.layers[i].output for i in self.vgg_layers]

    vgg.outputs = outputs

    # Create model and compile
    vggmodel = Model(inputs=vgg.inputs, outputs=outputs)
    vggmodel.trainable = False
    h2 = layers.wrappers.TimeDistributed(vggmodel)(img)
    model = Model(inputs=img,outputs=h2)
    model.compile(loss='mse', optimizer='adam')

    return model

I expected the pre-trained VGG19 model to be loaded and the TimeDistributed wrapper to apply it frame by frame, so that it works on video.

The error is raised when executing this line of code:

h2 = layers.wrappers.TimeDistributed(vggmodel)(img)
Here is the issue: in the Wrapper class of Keras, while computing the output shape, the line return (child_output_shape[0], timesteps) + child_output_shape[1:] fails, because with a multi-output child model child_output_shape is a list of shape tuples rather than a single tuple:

    ipdb> child_output_shape[1:]
    [(None, 128, 128, 128), (None, 128, 128, 256)]
    ipdb> (child_output_shape[0], timesteps)
    ((None, 256, 256, 64), 4)

– Ahmed
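The failing concatenation can be reproduced without Keras at all. The sketch below (a plain-Python illustration, using the shape values from the ipdb session above) shows why a single-output child works while a multi-output child raises the TypeError:

```python
# With a single-output child, child_output_shape is one shape tuple,
# so tuple + tuple concatenation succeeds.
single_output_shape = (None, 256, 256, 64)
timesteps = 4

ok = (single_output_shape[0], timesteps) + single_output_shape[1:]
print(ok)  # (None, 4, 256, 256, 64)

# With a multi-output child, child_output_shape is a *list* of shape
# tuples, so slicing it yields a list, and tuple + list raises TypeError.
multi_output_shape = [(None, 256, 256, 64),
                      (None, 128, 128, 128),
                      (None, 128, 128, 256)]

try:
    (multi_output_shape[0], timesteps) + multi_output_shape[1:]
except TypeError as e:
    print(e)  # can only concatenate tuple (not "list") to tuple
```

This is why the workaround in the answer below a single-output sub-model per VGG output works: each TimeDistributed wrapper then only ever sees one shape tuple.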

1 Answer


I rewrote it this way and it works fine:

def build_vgg(self):
    video = Input(shape=(self.n_frames, self.img_rows, self.img_cols, 3))

    # Get the vgg network from Keras applications
    vgg = VGG19(weights="imagenet", include_top=False, input_shape=(self.img_rows, self.img_cols, 3))

    # Output the first three pooling layers
    vgg.outputs = [vgg.layers[i].output for i in self.vgg_layers]

    vggmodel = Model(inputs=vgg.inputs, outputs=vgg.outputs)

    # Wrap each output in its own single-output sub-model, so that
    # TimeDistributed only ever sees a single output shape tuple
    h2 = []
    for out in vggmodel.output:
        h2.append(layers.wrappers.TimeDistributed(Model(vggmodel.input, out))(video))

    model = Model(inputs=video, outputs=h2)
    model.trainable = False
    model.compile(loss='mse', optimizer='adam')

    return model