0
votes

I am running a simple convolutional neural network for regression; it predicts 30 outputs (floats).

The predictions are almost the same regardless of the input (they converge to the mean of the training outputs).

After 1000 iterations, training converges to a maximum loss of 0.0107, which is good for this dataset.

What is causing this? 

I tried setting the bias to 1.0; it introduces a little variation, but the results below are still nearly identical. When I set the bias to 0, the results are far worse: all outputs are 100% the same. I am already using the ReLU activation, max(0, x), with no improvement in the results.

The outputs are below. As you can see, the first, second, and third rows are almost the same.

 [[ 66.60850525  37.19641876  29.36295891 ...,  71.91300964  47.92261505
   85.02180481]
 [ 66.4874115   37.09647369  29.23101997 ...,  71.90777588  47.74259186
   85.10979462]
 [ 66.54870605  37.19485474  29.36085892 ...,  71.84892273  47.8970108
   85.05699921]
 ..., 
 [ 65.7435379   36.78604889  28.57537079 ...,  71.98916626  47.03699493
   85.88017273]
 [ 65.7435379   36.78604889  28.57537079 ...,  71.98916626  47.03699493
   85.88017273]
 [ 65.7435379   36.78604889  28.57537079 ...,  71.98916626  47.03699493
   85.88017273]]
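A quick way to confirm that the network has collapsed to predicting the per-output mean is to compare the spread of the predictions with the spread of the targets. This is a hypothetical diagnostic, not code from the question, using a truncated copy of the printed values:

```python
import numpy as np

# A few rows/columns of the printed predictions (truncated from the question).
preds = np.array([[66.608, 37.196, 29.363],
                  [66.487, 37.096, 29.231],
                  [65.744, 36.786, 28.575]])

# If the model has collapsed to the mean, the per-output standard deviation
# across samples is tiny compared to the spread of the true targets.
per_output_std = preds.std(axis=0)
print(per_output_std)  # all values well below 1.0 here
```

If these standard deviations are near zero while the targets vary widely, the model is effectively ignoring its input.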

The network model runs with these parameters:

base_lr: 0.001
lr_policy: "fixed"
display: 100
max_iter: 1000
momentum: 0.9
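For reference, these solver settings describe plain SGD with momentum and a constant learning rate (the "fixed" policy means `base_lr` never decays). A minimal plain-Python sketch of that update rule, not the framework's actual code:

```python
def sgd_momentum_step(w, v, grad, lr=0.001, momentum=0.9):
    """One update: the velocity v accumulates a decaying sum of past gradients."""
    v = momentum * v - lr * grad
    return w + v, v

# Illustrative run with a constant gradient of 2.0 on a single scalar weight.
w, v = 1.0, 0.0
for _ in range(3):
    w, v = sgd_momentum_step(w, v, grad=2.0)
print(w, v)
```

With momentum 0.9, the effective step size on a persistent gradient grows toward roughly 10x the base step, which is worth keeping in mind when tuning `base_lr`.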
1
What are these numbers that you printed? Are they the outputs of the neural network? If so, their range might be the problem. Try scaling your inputs and outputs to the range (0, 1) for both the train and test sets. – Amin Suzani
@AminSuzani The outputs shown above are already scaled and multiplied back. – pbu
Do you think the network weights are large and it is overfitting? – pbu

1 Answer

0
votes

Judging by the output, and by the fact that the bias affects the results so strongly, I have a feeling that you didn't normalize your inputs and outputs.

Try normalizing them to the range [-1, +1].
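A minimal sketch of that normalization with NumPy (hypothetical helper names; the key point is to fit the min/max on the training set only, reuse them for test data, and invert the scaling on the predictions):

```python
import numpy as np

def fit_minmax(x):
    """Per-column min and max, computed on the TRAINING set only."""
    return x.min(axis=0), x.max(axis=0)

def to_unit_range(x, lo, hi):
    """Map each column from [lo, hi] to [-1, +1]."""
    return 2.0 * (x - lo) / (hi - lo) - 1.0

def from_unit_range(x, lo, hi):
    """Invert the scaling, e.g. to report predictions in original units."""
    return (x + 1.0) / 2.0 * (hi - lo) + lo

# Toy targets in the same ballpark as the question's outputs.
y_train = np.array([[66.6, 85.0],
                    [65.7, 85.9],
                    [66.5, 85.1]])
lo, hi = fit_minmax(y_train)
y_scaled = to_unit_range(y_train, lo, hi)   # now in [-1, 1] per column
```

Train against `y_scaled`, then apply `from_unit_range` to the network's outputs to get values back in the original units.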