- I am building a prediction model for sequence data using the Conv1D layer provided by Keras. This is how I did it:
```python
from tensorflow.keras.layers import (Input, Conv1D, BatchNormalization,
                                     MaxPooling1D, UpSampling1D, Dropout)
from tensorflow.keras.models import Model

def autoencoder():
    #autoencoder = Model(inputs=input_layer, outputs=decoder)
    input_dim = x_train_scaled.shape[1]
    input_layer = Input(shape=(input_dim,))

    # encoder layers
    conv1 = Conv1D(filters=32, kernel_size=3, activation='relu')(input_layer)
    batch1 = BatchNormalization()(conv1)
    maxp1 = MaxPooling1D(pool_size=2)(batch1)
    dropout1 = Dropout(0.2)(maxp1)

    conv2 = Conv1D(filters=16, kernel_size=3, activation='relu')(dropout1)
    batch2 = BatchNormalization()(conv2)
    maxp2 = MaxPooling1D(2)(batch2)
    dropout2 = Dropout(0.2)(maxp2)

    conv3 = Conv1D(filters=8, kernel_size=3, activation='relu')(dropout2)
    batch3 = BatchNormalization()(conv3)
    maxp3 = MaxPooling1D(2)(batch3)
    dropout3 = Dropout(0.2)(maxp3)

    # decoder layers
    conv4 = Conv1D(filters=8, kernel_size=3, activation='relu')(dropout3)
    batch4 = BatchNormalization()(conv4)
    dropout4 = Dropout(0.2)(batch4)

    conv5 = Conv1D(filters=16, kernel_size=3, activation='relu')(dropout4)
    batch5 = BatchNormalization()(conv5)
    unsamp5 = UpSampling1D(2)(batch5)
    dropout5 = Dropout(0.2)(unsamp5)

    conv6 = Conv1D(filters=32, kernel_size=3, activation='relu')(dropout5)
    batch6 = BatchNormalization()(conv6)
    unsamp6 = UpSampling1D(2)(batch6)
    dropout6 = Dropout(0.2)(unsamp6)

    decoder = Conv1D(filters=1, kernel_size=3, activation='sigmoid')(dropout6)

    return Model(input_layer, decoder)
```
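- The `model` used in the training code below is built from this function; that step is not shown in the original snippet, so the line below is only a sketch of how it is presumably done:

```python
# Assumption: `model` used for compile/fit below is created from autoencoder()
model = autoencoder()
```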
- I train the model to reduce the data dimensionality using the autoencoder:

```python
model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])
model.fit(x_train_scaled, x_train_scaled, epochs=15, batch_size=32,
          verbose=verbose, shuffle=True)
```
- However, it fails with the following error:

```
ValueError: Input 0 of layer conv1d is incompatible with the layer: : expected min_ndim=3, found ndim=2. Full shape received: (None, 19)
```
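- For reference, Conv1D expects a 3-D input of shape (batch, steps, channels), while `Input(shape=(19,))` produces 2-D tensors of shape (None, 19). The short sketch below reproduces the same ValueError on dummy shapes; apart from the feature count 19 (taken from the data shape below), everything in it is illustrative:

```python
from tensorflow.keras.layers import Input, Conv1D

# 2-D symbolic input, same rank as x_train_scaled: (None, 19)
inp_2d = Input(shape=(19,))
try:
    Conv1D(filters=32, kernel_size=3, activation='relu')(inp_2d)
except ValueError as e:
    print(e)  # expected min_ndim=3, found ndim=2. Full shape received: (None, 19)

# a 3-D input of shape (steps, channels) per sample is what Conv1D accepts
inp_3d = Input(shape=(19, 1))
out = Conv1D(filters=32, kernel_size=3, activation='relu')(inp_3d)
print(out.shape)  # (None, 17, 32)
```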
- The training data and validation data shapes are as follows:

```
x_train_scaled shape (125973, 19)
```
- The dataset used to train the model is NSL-KDD (https://www.unb.ca/cic/datasets/nsl.html).