
I am getting massive loss values and 0 accuracy when I run my linear-regression ANN (predicting California housing prices). Can anyone suggest a better activation function for this type of problem?

https://drive.google.com/file/d/1dcUuTVVDGwxHn2O5qqJk0wgiEf83MslN/view?usp=sharing

I tried many learning rates from 0.1 to 10, tried 2 hidden layers of 3 ReLU neurons, tried increasing the epochs to 10K, and tried softmax.

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam


model = Sequential()
model.add(Dense(2, input_shape=(6,), activation='relu'))
model.add(Dense(3, activation='relu'))
model.add(Dense(2, activation='softmax'))
model.add(Dense(1, activation='linear'))
model.compile(Adam(lr=0.5),
              loss='mean_squared_error',
              metrics=['accuracy'])


model.fit(X_train, y_train, epochs=10000, verbose=2, validation_split=0.4)

Epoch 60/10000 - 1s - loss: 48621637708.0739 - acc: 0.0000e+00 - val_loss: 49522900789.2154 - val_acc: 0.0000e+00

1 Answer


You are missing something fundamental about deep learning here. Accuracy is a metric for classification, but what you are trying to do is regression, i.e. predicting continuous values rather than class labels. These are two different things in the deep learning world, so a softmax output layer won't help you much. Your metric should be MSE as well.
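To see why accuracy reads as zero, note that it counts exact matches between prediction and target, which essentially never happen with continuous outputs, while MSE measures how close the predictions actually are. A minimal NumPy sketch with made-up numbers:

```python
import numpy as np

# Hypothetical continuous targets and predictions (prices in $100k)
y_true = np.array([2.5, 3.1, 1.8, 4.0])
y_pred = np.array([2.4, 3.3, 1.9, 3.7])

# "Accuracy" = fraction of exact matches -- 0 for continuous outputs
accuracy = np.mean(y_true == y_pred)

# MSE actually measures how close the predictions are
mse = np.mean((y_true - y_pred) ** 2)

print(accuracy)  # 0.0
print(mse)       # ~0.0375
```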

Learning rates above 1.0 are also very uncommon; the default value for Adam is 0.001. In general, if you are unsure about the learning rate, stick with the default. So maybe the error lies there: try reducing the learning rate and give it another shot.

Softmax as an intermediate-layer activation is also unusual; I would recommend replacing it with ReLU. The number of neurons you use is also very small; adding a few more could help as well.
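Putting those suggestions together, a revised model could look something like this. The layer sizes (32 neurons) are just a plausible starting point, not tuned values:

```python
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

model = Sequential()
model.add(Dense(32, input_shape=(6,), activation='relu'))  # more neurons per layer
model.add(Dense(32, activation='relu'))                    # ReLU instead of softmax
model.add(Dense(1, activation='linear'))                   # single linear output for regression

# Default Adam learning rate (0.001); MSE as both loss and metric
model.compile(Adam(),
              loss='mean_squared_error',
              metrics=['mean_squared_error'])
```

Note that in newer Keras versions the `lr` argument is spelled `learning_rate`; using `Adam()` with no arguments sidesteps that and keeps the default.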