I am having some issues with a neural network. I am using a nonlinear activation function for the hidden layer and a linear function for the output layer. Adding more neurons to the hidden layer should increase the capacity of the NN and let it fit the training data better, i.e. reduce the training error.
However, I am seeing the opposite phenomenon: adding more neurons decreases the accuracy of the neural network even on the training set.

Here is the graph of the mean absolute error as the number of neurons increases. The accuracy on the training data is decreasing. What could be the cause of this?
Could it be that MATLAB's nntool splits the data randomly into training, test, and validation sets to check generalization, instead of using cross-validation?
Also, I can see many negative output values as I add neurons, while my targets are supposed to be positive. Could this be another issue?
I am not able to explain the behavior of the NN here. Any suggestions? Here is the link to my data, consisting of the covariates and targets
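For what it's worth, a linear output layer by itself never constrains predictions to be positive, so negative outputs are expected unless you transform the targets. Here is a minimal Python sketch (the weights are hypothetical, purely for illustration) showing a linear unit going negative, and one common workaround of fitting to log-targets:

```python
import numpy as np

# A linear output unit is just w.x + b, so nothing keeps the prediction
# positive even when every training target is positive.
# Hypothetical weights/inputs for illustration:
w, b = np.array([0.8, -1.5]), 0.2
x = np.array([0.3, 0.9])
prediction = w @ x + b
print(prediction)  # negative, despite positive targets

# One common workaround (an assumption, not from the question): train the
# network on log-targets and exponentiate its output, which is always > 0.
targets = np.array([2.0, 5.0, 0.5])
log_targets = np.log(targets)        # train on these instead
back_transformed = np.exp(log_targets)
print(back_transformed)              # strictly positive by construction
```

Whether the log transform is appropriate depends on your data; it also changes the error metric being minimized, so compare errors on the original scale.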
mapminmax normalization? MATLAB applies this normalization by default, and (I might be wrong) the error scale is also normalized. If you have very large outputs and do not normalize, it will saturate your nonlinear activation functions (if you use any), which would explain this unexpected behavior (adding more saturated neurons would saturate the output even more). Don't forget to check the error evolution during the training stage to see whether your networks are really training. - Werner
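To illustrate the saturation point in the comment above, here is a small Python sketch (the input values are made up for illustration): without rescaling, large inputs pin tanh units at ±1 and the gradient 1 − tanh² vanishes, while a mapminmax-style rescale to [−1, 1] keeps the units in their active region:

```python
import numpy as np

# Hypothetical raw inputs with a large scale (an assumption for illustration).
x = np.array([50.0, 200.0, 1000.0])

# Without normalization, tanh hidden units saturate: outputs pin at +/-1
# and the gradient (1 - tanh^2) is essentially zero, so training stalls
# and adding more neurons just adds more saturated units.
saturated = np.tanh(x)
grad_saturated = 1.0 - saturated**2

# mapminmax-style linear rescaling to [-1, 1], the default preprocessing
# MATLAB applies; the units then stay in their responsive range.
x_scaled = 2.0 * (x - x.min()) / (x.max() - x.min()) - 1.0
active = np.tanh(x_scaled)
grad_active = 1.0 - active**2

print(grad_saturated.max())  # vanishing gradient signal
print(grad_active.min())     # comfortably nonzero: units can still learn
```

This is only a sketch of the mechanism Werner describes, not of the toolbox internals; the point is simply that the gradient through a saturated tanh is numerically zero.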