I currently have a fully connected autoencoder in Keras which looks like this:
from keras.models import Sequential
from keras.layers import Dense

model = Sequential()
model.add(Dense(4096, input_shape=(4096,), activation='relu'))
model.add(Dense(512, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Dense(512, activation='relu'))
model.add(Dense(4096, activation='relu'))
The data consists of time-series data which is converted into the frequency domain using an FFT.
My training data has the shape (8000, 4096): 8000 samples, each with 4096 values representing the frequencies. This model is working fine.
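For context, the preprocessing can be sketched roughly like this (a minimal sketch assuming NumPy and the magnitude of the full FFT; a smaller batch is used here for brevity, my actual pipeline may differ in detail):

```python
import numpy as np

# Stand-in for the raw time-series windows; the real data has
# 8000 windows of 4096 samples each.
x_time = np.random.randn(100, 4096)

# Magnitude of the full FFT keeps all 4096 bins per window,
# giving the same (n_samples, 4096) shape used for training.
x_freq = np.abs(np.fft.fft(x_time, axis=1))
print(x_freq.shape)  # (100, 4096)
```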
What I'm trying to achieve is replacing the two Dense layers that have 512 units with Conv1D layers, to see if this will improve my results, something like this:
model = Sequential()
model.add(Dense(4096, input_shape=(4096,), activation='relu'))
model.add(Conv1D(512, 3, activation='relu'))
model.add(Dense(64, activation='relu'))
model.add(Conv1D(512, 3, activation='relu'))
model.add(Dense(4096, activation='relu'))
Now, this won't work, because Conv1D expects its input to have three dimensions:
Input 0 is incompatible with layer conv1d_12: expected ndim=3, found ndim=2
How can I ensure that the Conv1D layers get the correct input shape?
- Use model.add(Reshape(?,?,?)) possibly?
- Reshape my input data to have three dimensions somehow?
I have tried changing the input shape to "force" a third dimension, and I have tried reshaping between the first Dense layer and the first Conv1D layer, but neither seems to work.
I realise that there are tons of questions regarding the input shape of Conv1D nets here, but please note that I do not want the convolutional filter to span across multiple samples, only across the frequency values within a single sample.
Thanks in advance.
UPDATE: Following Daniel's advice I was able to compile the model and start training (although my GPU is screaming at me):
Layer (type) Output Shape Param #
=================================================================
dense_132 (Dense) (None, 4096) 16781312
_________________________________________________________________
reshape_85 (Reshape) (None, 4096, 1) 0
_________________________________________________________________
conv1d_71 (Conv1D) (None, 4096, 512) 2048
_________________________________________________________________
reshape_86 (Reshape) (None, 2097152) 0
_________________________________________________________________
dense_133 (Dense) (None, 64) 134217792
_________________________________________________________________
reshape_87 (Reshape) (None, 64, 1) 0
_________________________________________________________________
conv1d_72 (Conv1D) (None, 64, 512) 2048
_________________________________________________________________
reshape_88 (Reshape) (None, 32768) 0
_________________________________________________________________
dense_134 (Dense) (None, 4096) 134221824
=================================================================
Total params: 285,225,024
Trainable params: 285,225,024
Non-trainable params: 0
_________________________________________________________________
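As a sanity check, the parameter counts in the summary follow from filters * (kernel_size * in_channels) + filters for Conv1D and in_units * out_units + out_units for Dense:

```python
# Reproducing the parameter counts from the summary above.
dense_132 = 4096 * 4096 + 4096       # 16,781,312
conv1d_71 = 512 * (3 * 1) + 512      # 2,048 (only 1 input channel)
dense_133 = 2097152 * 64 + 64        # 134,217,792 (flattened conv output)
dense_134 = 32768 * 4096 + 4096      # 134,221,824
total = dense_132 + 2 * conv1d_71 + dense_133 + dense_134
print(total)  # 285,225,024
```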
However, I would expect my Conv1D layers to have the following output shape:
conv1d_71 (Conv1D) (None, 512, 1)
Am I doing the convolution over the wrong dimension? If so, how can I change that? Or have I misunderstood how a convolutional layer works?
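For reference, Conv1D produces one full-length feature map per filter along the steps axis, so its output shape is (steps, filters) rather than (filters, 1). A toy helper mirroring that rule (conv1d_output_shape is my own name, not a Keras API):

```python
def conv1d_output_shape(steps, kernel_size, filters, padding='valid'):
    # Each of the `filters` kernels slides along the steps axis and
    # produces its own full-length feature map, so the filter count
    # ends up on the channel axis, not the step axis.
    new_steps = steps if padding == 'same' else steps - kernel_size + 1
    return (new_steps, filters)

print(conv1d_output_shape(4096, 3, 512, padding='same'))  # (4096, 512)
```

So (None, 4096, 512) in the summary is the expected result; shrinking the 4096 steps would take strides or pooling, not a different filter count.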
model.add(Reshape(input_shape=(4096,), target_shape=(4096, 1)))- Primusa
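Expanded, Primusa's suggestion leads to something like this (a sketch assuming tensorflow.keras; padding='same' keeps the step count at 4096, and Flatten returns to 2-D before each Dense layer, equivalent to the Reshape layers in the summary above):

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, Dense, Flatten, Reshape

model = Sequential()
model.add(Dense(4096, input_shape=(4096,), activation='relu'))
model.add(Reshape((4096, 1)))                 # add a channel axis for Conv1D
model.add(Conv1D(512, 3, padding='same', activation='relu'))
model.add(Flatten())                          # back to 2-D: (None, 2097152)
model.add(Dense(64, activation='relu'))
model.add(Reshape((64, 1)))
model.add(Conv1D(512, 3, padding='same', activation='relu'))
model.add(Flatten())                          # (None, 32768)
model.add(Dense(4096, activation='relu'))
print(model.output_shape)  # (None, 4096)
```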