I am running a simple convolutional neural network for regression; it predicts 30 outputs (floats).
The predictions are almost identical regardless of the input (they converge to the mean of the training targets).
After 1000 iterations the training converges to a loss of at most 0.0107, which is good for this dataset.
What is causing this?
I tried setting the bias to 1.0; that introduces a little variation, but the results are still essentially the same as below. When I set the bias to 0, the results are far worse: all outputs are 100% identical. I am already using ReLU activations (max(0, x)), with no improvement in the results.
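In case it helps, this is roughly how I inspect the learned weights and biases after a run (the file names are just placeholders for my deploy prototxt and the snapshot from the 1000-iteration run):

import caffe
import numpy as np

# Placeholder file names for the deploy definition and the trained snapshot.
net = caffe.Net("deploy.prototxt", "snapshot_iter_1000.caffemodel", caffe.TEST)

# In pycaffe, net.params[layer][0] holds the weights and net.params[layer][1]
# the biases, so the effect of the different bias settings can be checked
# directly. If the weights end up near zero and the last layer's biases carry
# the target means, every input produces the same 30 outputs.
for name, blobs in net.params.items():
    weights = blobs[0].data
    biases = blobs[1].data if len(blobs) > 1 else None
    print(name, "mean |weight|:", np.abs(weights).mean(), "biases:", biases)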
The outputs are below. As you can see, the first, second, and third rows are almost the same:
[[ 66.60850525 37.19641876 29.36295891 ..., 71.91300964 47.92261505
85.02180481]
[ 66.4874115 37.09647369 29.23101997 ..., 71.90777588 47.74259186
85.10979462]
[ 66.54870605 37.19485474 29.36085892 ..., 71.84892273 47.8970108
85.05699921]
...,
[ 65.7435379 36.78604889 28.57537079 ..., 71.98916626 47.03699493
85.88017273]
[ 65.7435379 36.78604889 28.57537079 ..., 71.98916626 47.03699493
85.88017273]
[ 65.7435379 36.78604889 28.57537079 ..., 71.98916626 47.03699493
85.88017273]]
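To quantify how flat the predictions are, I compare the per-output spread of the predictions with the spread of the training labels, roughly like this (preds and train_targets are placeholders for my prediction and label arrays, both shaped [N, 30]):

import numpy as np

def check_collapse(preds, train_targets):
    # If the net has collapsed to predicting the mean, the per-output spread of
    # the predictions is tiny compared with the spread of the targets, and each
    # predicted column sits close to the mean of the matching target column.
    print("prediction std per output:", preds.std(axis=0))
    print("target std per output    :", train_targets.std(axis=0))
    print("|pred mean - target mean|:",
          np.abs(preds.mean(axis=0) - train_targets.mean(axis=0)))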
The network is trained with these solver parameters:
base_lr: 0.001
lr_policy: "fixed"
display: 100
max_iter: 1000
momentum: 0.9
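I launch training the usual pycaffe way, roughly like this (solver.prototxt is a placeholder name for the file holding the parameters above):

import caffe

caffe.set_mode_cpu()                        # or caffe.set_mode_gpu()
solver = caffe.SGDSolver("solver.prototxt")
solver.solve()                              # runs until max_iter (1000 here)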