I am working on a regression problem with the following sample training data:
As shown, I have an input of only 4 parameters, only one of which (Z) actually changes, so the rest carry no real information. The output has 124 parameters, denoted O1 to O124. O1 changes at a constant rate of 20 (1000, then 1020, then 1040, ...), while O2 changes at a different but still constant rate of 30, and the same applies to all 124 outputs: each changes linearly at its own constant rate.
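For concreteness, data with this structure can be generated as follows. Only O1's rate (20) and start (1000) and O2's rate (30) come from the description above; the remaining rates and intercepts are hypothetical placeholders:

```python
import numpy as np

# Only Z varies; the other three input parameters are constant
# and carry no information, so they are omitted here.
n_samples = 50
Z = np.arange(n_samples, dtype=float)

# Each of the 124 outputs is linear in Z with its own constant rate.
# O1 = 1000 + 20*Z and O2 has rate 30; the rest are assumed for illustration.
rates = 20.0 + 10.0 * np.arange(124)   # 20, 30, 40, ... (only the first two are given)
intercepts = np.full(124, 1000.0)      # 1000 is given for O1; assumed for the rest
Y = intercepts + np.outer(Z, rates)    # shape (n_samples, 124)
```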
I believed this was a trivial problem and that a very simple neural network model would reach 100% accuracy on the test data, but the results were the opposite:
- I reached 100% test accuracy with a linear regressor and 99.99997% test accuracy with a KNN regressor.
- I reached 41% test accuracy with a 10-layer neural network using ReLU activation, while all other activation functions failed; a shallow ReLU network also failed.
- Using a simple neural network with a linear activation function and no hidden layers, I reached 92% on the test data.
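For reference, here is what the linear-regressor baseline above amounts to on data of this shape: an ordinary least-squares fit recovers the mapping exactly, even on held-out samples. The rates other than O1's and O2's are assumed, since the full table isn't shown:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

Z = np.arange(50, dtype=float).reshape(-1, 1)
rates = 20.0 + 10.0 * np.arange(124)          # only the first two rates are given
Y = 1000.0 + Z * rates                        # broadcasts to shape (50, 124)

# Train on the first 40 samples, evaluate on the last 10.
model = LinearRegression().fit(Z[:40], Y[:40])
r2 = r2_score(Y[40:], model.predict(Z[40:]))  # essentially 1.0 for exact linear data
```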
My question is: how can I get the neural network to reach 100% on the test data like the linear regressor? A shallow network with linear activation is supposed to be equivalent to a linear regressor, but the results differ. Am I missing something?
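The equivalence in question can be made concrete: a network with no hidden layers and linear activation computes y = Wz + b, the same model family as the linear regressor; only the fitting procedure differs (iterative gradient descent versus a closed-form least-squares solve). Below is a minimal NumPy sketch of the gradient-descent version; the learning rate, epoch count, and rates beyond O1's and O2's are assumptions, and the input is standardized first:

```python
import numpy as np

Z = np.arange(50, dtype=float).reshape(-1, 1)
rates = 20.0 + 10.0 * np.arange(124)    # only O1's and O2's rates are given
Y = 1000.0 + Z * rates

Zs = (Z - Z.mean()) / Z.std()           # standardize: raw Z makes the updates unstable

rng = np.random.default_rng(0)
W = rng.normal(size=(1, 124)) * 0.01    # single linear layer: y = Zs @ W + b
b = np.zeros(124)
lr = 0.1
for _ in range(2000):                   # full-batch gradient descent on MSE
    err = Zs @ W + b - Y
    W -= lr * Zs.T @ err / len(Zs)
    b -= lr * err.mean(axis=0)

max_err = np.abs(Zs @ W + b - Y).max()  # converges to the exact linear map
```

With a standardized input and enough iterations this recovers the same solution as the closed-form regressor, which is why any remaining gap usually points at the training setup (scaling, learning rate, iterations) rather than the model family.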
