4
votes

I have built a model which consists of two branches that are then merged into a single one. To train the model I would like to use the ImageDataGenerator to augment the image data, but I don't know how to make it work for the mixed input type. Does anybody have an idea how to deal with this in Keras? Any help would be highly appreciated!

Best, Nick

MODEL The first branch takes images as input:

img_model = Sequential()
img_model.add(Convolution2D( 4, 9,9, border_mode='valid', input_shape=(1, 120, 160)))
img_model.add(Activation('relu'))
img_model.add(MaxPooling2D(pool_size=(2, 2)))
img_model.add(Dropout(0.5))
img_model.add(Flatten()) 

The second branch takes auxiliary data as input:

aux_model = Sequential()
aux_model.add(Dense(3, input_dim=3))

Then those get merged into the final model:

model = Sequential()
model.add(Merge([img_model, aux_model], mode='concat'))
model.add(Dropout(0.5))
model.add(Dense(5))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adadelta', metrics=['accuracy']) 

TRAINING / PROBLEM: I tried the following, which fails:

datagen = ImageDataGenerator(
            featurewise_center=False,  # set input mean to 0 over the dataset
            samplewise_center=False,  # set each sample mean to 0
            featurewise_std_normalization=False,  # divide inputs by std of the dataset
            samplewise_std_normalization=False,  # divide each input by its std
            zca_whitening=False,  # apply ZCA whitening
            rotation_range=10, #180,  # randomly rotate images in the range (degrees, 0 to 180)
            width_shift_range=0.1,  # randomly shift images horizontally (fraction of total width)
            height_shift_range=0.1,  # randomly shift images vertically (fraction of total height)
            horizontal_flip=False,  # randomly flip images horizontally
            vertical_flip=False)  # randomly flip images vertically

model.fit_generator( datagen.flow( [X,I], Y, batch_size=64),
               samples_per_epoch=X.shape[0],
               nb_epoch=20,
               validation_data=([Xval, Ival], Yval))

This produces the following error message:

Traceback (most recent call last):
  File "importdata.py", line 139, in <module>
    model.fit_generator( datagen.flow( [X,I], Y, batch_size=64),
  File "/usr/local/lib/python3.5/dist-packages/keras/preprocessing/image.py", line 261, in flow
    save_to_dir=save_to_dir, save_prefix=save_prefix, save_format=save_format)
  File "/usr/local/lib/python3.5/dist-packages/keras/preprocessing/image.py", line 454, in __init__
    'Found: X.shape = %s, y.shape = %s' % (np.asarray(X).shape, np.asarray(y).shape))
  File "/usr/local/lib/python3.5/dist-packages/numpy/core/numeric.py", line 482, in asarray
    return array(a, dtype, copy=False, order=order)
ValueError: could not broadcast input array from shape (42700,1,120,160) into shape (42700)
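The error occurs because `datagen.flow` expects `X` to be a single NumPy array, not a list of arrays, so NumPy tries (and fails) to broadcast the list into one array. One workaround is to skip `datagen.flow` entirely and write a small generator that batches and shuffles all three arrays with the same indices, augmenting only the images. A minimal sketch (the `augment` hook and names are assumptions, not part of the Keras API):

```python
import numpy as np

def mixed_generator(X, I, Y, batch_size, augment=None):
    """Yield ([image_batch, aux_batch], label_batch) indefinitely.

    X, I, Y must share the same first dimension; `augment`, if given,
    is a function applied to the image batch only (e.g. random shifts).
    """
    n = len(X)
    while True:
        order = np.random.permutation(n)  # reshuffle every epoch
        for start in range(0, n, batch_size):
            b = order[start:start + batch_size]
            x = X[b]
            if augment is not None:
                x = augment(x)  # images are augmented, aux data passes through
            yield [x, I[b]], Y[b]
```

This can then be passed to `model.fit_generator(mixed_generator(X, I, Y, 64), ...)`, since the yielded `([images, aux], labels)` tuples match the merged model's two inputs.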

2 Answers

0
votes

I think I got a way to make this work. Assume we have a multiple-input model.

#declare a final model with multiple inputs.
# final_model ...

train_datagen = ImageDataGenerator(rescale=1./255, shear_range=0.2, zoom_range=0.2)
train_generator = train_datagen.flow_from_directory(train_data_dir, target_size=(224, 224), batch_size=32, class_mode='binary') 

# NOTE: zip combines multiple image generators, with augmentation on the fly.
final_generator = zip(train_generator, train_generator)    
final_model.fit_generator(final_generator, samples_per_epoch=nb_train_samples, nb_epoch=nb_epoch, validation_data=test_generator, nb_val_samples=nb_validation_samples)
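One caveat with this approach: in Python 3, zipping two `(x, y)` generators yields `((x1, y1), (x2, y2))` tuples, whereas a multi-input Keras model expects batches of the form `([x1, x2], y)`. A small wrapper (a sketch; the function name is an assumption) restores that format:

```python
def multi_input_generator(gen_a, gen_b):
    # Combine two (x, y) generators into the ([x_a, x_b], y) format
    # a two-input Keras model expects; the labels from gen_a are kept.
    while True:
        x_a, y_a = next(gen_a)
        x_b, _ = next(gen_b)
        yield [x_a, x_b], y_a
```

Note that the labels of the two underlying generators must agree (e.g. both generators read the same directory with the same seed and shuffle settings), otherwise inputs and targets get misaligned.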
0
votes

Use this one:

def trainGeneratorFunc():
    # Wrap the basic generator so every batch matches a three-input model.
    while True:
        x, y = next(trainGeneratorBasic)  # next() works on Python 2 and 3
        yield [x, x, x], y

trainGenerator = trainGeneratorFunc()
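A minimal, self-contained demonstration of this wrapping pattern (the base generator and its shapes are dummy assumptions standing in for `trainGeneratorBasic`), showing that each yielded batch becomes a three-element input list:

```python
import numpy as np

def base_generator():
    # Stand-in for trainGeneratorBasic: yields (image_batch, label_batch).
    while True:
        yield np.zeros((32, 1, 120, 160)), np.zeros((32, 5))

base = base_generator()

def train_generator_func():
    # Duplicate each image batch across the model's three inputs.
    while True:
        x, y = next(base)
        yield [x, x, x], y

inputs, labels = next(train_generator_func())
# inputs is a list of three arrays, one per model input; labels is unchanged.
```

The same idea applies to the original question: instead of yielding the image batch three times, yield `[image_batch, aux_batch]` to feed a two-branch model.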