
I am working with convolutional autoencoders. My autoencoder configuration has one convolutional layer with stride (2,2) or avg-pooling and ReLU activation, and one deconvolutional layer with stride (2,2) or avg-unpooling and ReLU activation.

I trained the autoencoder with the MNIST data set.
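For reference, a minimal sketch of this setup using the Keras API in TensorFlow 2 (the layer names, optimizer, and epoch count are my assumptions, not the original code):

    import tensorflow as tf

    # MNIST, scaled to [0, 1], with a channel axis added.
    (x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
    x_train = x_train[..., None].astype("float32") / 255.0
    x_test = x_test[..., None].astype("float32") / 255.0

    autoencoder = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(28, 28, 1)),
        # Encoder: 20 filters of size 3x3, stride (2, 2), ReLU -> 14x14x20
        tf.keras.layers.Conv2D(20, 3, strides=(2, 2), padding="same",
                               activation="relu", name="encoder_conv"),
        # Decoder: transposed convolution back to 28x28, ReLU
        tf.keras.layers.Conv2DTranspose(1, 3, strides=(2, 2), padding="same",
                                        activation="relu", name="decoder_deconv"),
    ])
    autoencoder.compile(optimizer="adam", loss="mse")
    autoencoder.fit(x_train, x_train, epochs=5, batch_size=128)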

When I look at the feature maps after the first convolutional layer (20 filters with filter size 3), I get some black feature maps, even though the learned filters themselves are not black. The same happens if I change the number of filters or the filter size.
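This is roughly how the feature maps can be inspected (a sketch building on the model above; autoencoder, encoder_conv, and x_test are names from that sketch, not from the original code):

    import numpy as np

    # Sub-model that exposes the activations after the first conv layer.
    feature_extractor = tf.keras.Model(
        inputs=autoencoder.inputs,
        outputs=autoencoder.get_layer("encoder_conv").output)

    # Shape: (100, 14, 14, 20) -- one 14x14 map per filter.
    maps = feature_extractor.predict(x_test[:100])

    # A "black" feature map is one that is zero for every input.
    black = np.all(maps == 0, axis=(0, 1, 2))
    print("black feature maps:", np.where(black)[0].tolist())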

I get this phenomenon with both TensorFlow and Theano autoencoders. (I have not tested other neural network frameworks yet.)

Does anyone know why this happens?

I can avoid the black feature maps by adding an LRN (local response normalization) layer, but I want to understand why the black feature maps appear in the first place.
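For completeness, one way to insert such an LRN step after the encoder convolution (a sketch: tf.nn.local_response_normalization is TensorFlow's built-in LRN op, but the hyperparameters below are illustrative, not the ones the question used):

    autoencoder_lrn = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(28, 28, 1)),
        tf.keras.layers.Conv2D(20, 3, strides=(2, 2), padding="same",
                               activation="relu", name="encoder_conv"),
        # LRN normalizes each activation by the activity of neighboring
        # channels; depth_radius, bias, alpha, beta are illustrative values.
        tf.keras.layers.Lambda(lambda t: tf.nn.local_response_normalization(
            t, depth_radius=5, bias=1.0, alpha=1e-4, beta=0.75)),
        tf.keras.layers.Conv2DTranspose(1, 3, strides=(2, 2), padding="same",
                                        activation="relu", name="decoder_deconv"),
    ])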

What do you mean by "black" - zeros everywhere? Why do you think that this is a problem? – kafman
Thank you for your response. Yes, "black" means zero for me. My aim is to understand why the output becomes zero even though the filters are not zero. Can you explain this phenomenon? In a second step I take the output after the first convolutional layer and use it as input for a fully connected neural network. I think the zero outputs corrupt the data, which results in worse classification than with the original MNIST data set. – user7473657
Did you flatten the output of the ReLU activation and feed it to a fully connected layer? Your network should have hidden units. – Jai

1 Answer


I found the same phenomenon. After training a convolutional autoencoder with 7x7x3x6 filters on thousands of RGB images, two or three filters produce some output while the other filters produce only zeros. The error also stops decreasing when there are too many zero-output filters. I changed the filter numbers and sizes as well, but the results were almost the same.
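A small diagnostic along these lines (hypothetical, not part of the original answer) counts how many filters never activate; it assumes feature maps in NHWC layout, e.g. the ReLU outputs extracted with the feature_extractor sketch above:

    import numpy as np

    def dead_filter_report(feature_maps, tol=1e-8):
        """Count filters whose activations are essentially always zero.

        feature_maps: array of shape (batch, height, width, n_filters).
        """
        per_filter_max = feature_maps.max(axis=(0, 1, 2))
        dead = per_filter_max < tol
        print(f"{dead.sum()} of {dead.size} filters are dead:",
              np.where(dead)[0].tolist())
        return dead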