I am trying out Google's deepdream code, which makes use of Caffe. It uses the GoogLeNet model pre-trained on ImageNet, as provided by the Caffe Model Zoo. That means the network was trained on images cropped to 224x224 pixels. From train_val.prototxt:
layer {
  name: "data"
  type: "Data"
  ...
  transform_param {
    mirror: true
    crop_size: 224
    ...
The deploy.prototxt used for processing also defines an input layer of shape 10x3x224x224 (batch size 10, 3 channels, 224x224 pixels):
name: "GoogleNet"
input: "data"
input_shape {
  dim: 10
  dim: 3
  dim: 224
  dim: 224
}
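For reference, loading the model in pycaffe the way the deepdream notebook does (the file paths below stand in for my local copies of the Model Zoo files) confirms that the data blob starts out with exactly this shape:

import numpy as np
import caffe

net = caffe.Classifier('deploy.prototxt', 'bvlc_googlenet.caffemodel',
                       mean=np.float32([104.0, 116.0, 122.0]),  # ImageNet mean, BGR order
                       channel_swap=(2, 1, 0))                  # swap RGB -> BGR, as in the notebook

print(net.blobs['data'].data.shape)  # (10, 3, 224, 224), matching deploy.prototxt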
However, I can use this net to process images of seemingly any size (the example above used one of 1024x574 pixels).
- deploy.prototxt does not configure Caffe to do any cropping.
- The preprocessing in the deepdream code only does mean subtraction; there is no cropping here either (see the sketch after this list).
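As far as I can tell, the notebook's preprocessing is essentially just this (a layout conversion plus mean subtraction, nothing that would resize or crop):

import numpy as np

def preprocess(net, img):
    # HWC/RGB image -> CHW/BGR array, then subtract the per-channel ImageNet mean
    return np.float32(np.rollaxis(img, 2)[::-1]) - net.transformer.mean['data']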
How is it possible to run the net on images that are bigger than the declared input layer?
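To isolate the puzzle, here is a minimal sketch (reusing the net loaded above; the blob name and layer name are taken from the notebook) that pushes an oversized input through the net without any error:

import numpy as np

# Fake a 1024x574 "image" already in Caffe's CHW/BGR layout, mean-subtracted:
img = np.random.rand(3, 574, 1024).astype(np.float32)

src = net.blobs['data']
src.reshape(1, 3, 574, 1024)            # reshape the input blob away from 10x3x224x224
src.data[0] = img
net.forward(end='inception_4c/output')  # runs fine despite the declared 224x224 input
print(net.blobs['inception_4c/output'].data.shape)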
The complete code can be found here.