I've read the paper "Visualizing and Understanding Convolutional Networks" by Zeiler and Fergus and would like to make use of their visualization technique. The technique sounds promising, but unfortunately I have no idea how to implement it in Keras (version 1.2.2).
Two questions:
1. Keras only provides a `Deconvolution2D` layer, but no `Unpooling` and no "reverse ReLU" layer. How can I use the switch variables mentioned in the paper to implement the unpooling? And how do I apply the reverse ReLU (or is it just the normal `ReLU`)?
2. Keras' `Deconvolution2D` layer has the attributes `activation` and `subsample`. Could those be the key to solving my problem? If so, would I have to replace each combination of `Convolution2D` + `Activation` + pooling layers with a single `Deconvolution2D` layer?
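For context, here is my current (possibly wrong) understanding of what the paper's switch mechanism does, sketched in plain NumPy rather than Keras. All function names here are my own invention: max pooling records the position of each window's maximum (the "switch"), and unpooling places the value back at that recorded position, filling the rest with zeros. The reverse ReLU in the paper is, as far as I can tell, just applying the normal ReLU to the reconstructed signal.

```python
import numpy as np

def max_pool_with_switches(x, size=2):
    # x: (H, W) feature map; records argmax positions ("switches") per window.
    H, W = x.shape
    pooled = np.zeros((H // size, W // size))
    switches = np.zeros_like(pooled, dtype=int)
    for i in range(H // size):
        for j in range(W // size):
            window = x[i*size:(i+1)*size, j*size:(j+1)*size]
            idx = np.argmax(window)          # flat index of the max in the window
            switches[i, j] = idx
            pooled[i, j] = window.flat[idx]
    return pooled, switches

def unpool_with_switches(pooled, switches, size=2):
    # Place each pooled value back at its recorded max location; zeros elsewhere.
    H, W = pooled.shape
    out = np.zeros((H * size, W * size))
    for i in range(H):
        for j in range(W):
            di, dj = divmod(switches[i, j], size)
            out[i*size + di, j*size + dj] = pooled[i, j]
    return out

def reverse_relu(x):
    # My assumption: the "reverse ReLU" is simply ReLU applied on the way back.
    return np.maximum(x, 0)
```

Is this the right idea, and if so, how would I express the switch bookkeeping with Keras 1.2.2 layers?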
I appreciate your help!