3 votes

I'm trying to approximate the following function:

[plot of the target function]

but my best result looks like this:

[plot of model output vs. target, with the loss curve on the right]

I've tried up to 50k epochs with similar results.

Model:

from keras.models import Sequential
from keras.layers import Dense

activation_fun = 'linear'  # final-layer activation (confirmed in the comments below)

model = Sequential()
model.add(Dense(40, input_dim=1, kernel_initializer='he_normal', activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1, activation=activation_fun))  # input_dim only belongs on the first layer
model.compile(loss='mse', optimizer='adam', metrics=['mse', 'mae', 'mape', 'cosine'])
history = model.fit(x, y, batch_size=32, epochs=5000, verbose=0)

preds = model.predict(x_test)
prettyPlot(x, y, x_test, preds, history, 'linear', 5000)
model.summary()

The prettyPlot function creates the plots.
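prettyPlot itself isn't shown in the question; a minimal sketch of what such a helper might look like, assuming it draws the target curve with the test predictions on the left and the loss history on the right (only the name and call signature come from the question, the body is a guess):

import matplotlib.pyplot as plt

def prettyPlot(x, y, x_test, preds, history, activation, epochs):
    # Hypothetical reconstruction: left panel shows target vs. predictions,
    # right panel shows the training loss curve.
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 4))
    ax1.plot(x, y, label='target')
    ax1.scatter(x_test, preds, s=5, c='r', label='predictions')
    ax1.set_title('%s, %d epochs' % (activation, epochs))
    ax1.legend()
    ax2.plot(history.history['loss'])
    ax2.set_title('loss')
    plt.show()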

How can I achieve better results without changing the topology of the NN? I don't want it to be big or wide; if anything, I'd like to use even fewer hidden layers and neurons.

The function that I want to approximate:

import math

def fun(X):
    return math.sin(1.2*X + 0.5) + math.cos(2.5*X + 0.2) + math.atan(2*X + 1) - math.cos(2*X + 0.5)

samples:

x_range = 20  # renamed from 'range', which shadows the Python builtin
x = np.arange(0, x_range, 0.01).reshape(-1, 1)
y = np.array(list(map(fun, x))).reshape(-1, 1)

x_test = (np.random.rand(x_range*10)*x_range).reshape(-1, 1)
y_test = np.array(list(map(fun, x_test))).reshape(-1, 1)
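As a side note (my suggestion, not part of the original post), the Python-level map calls can be replaced with a vectorized version of fun using NumPy ufuncs, which evaluates the whole array at once:

import numpy as np

def fun_vec(X):
    # numpy equivalents of the math functions; they work element-wise on arrays
    return np.sin(1.2*X + 0.5) + np.cos(2.5*X + 0.2) + np.arctan(2*X + 1) - np.cos(2*X + 0.5)

y = fun_vec(x)            # same values as np.array(list(map(fun, x)))
y_test = fun_vec(x_test)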

Then y and y_test are normalized using MinMaxScaler.

from sklearn.preprocessing import MinMaxScaler

scalerY = MinMaxScaler((0, 1))
scalerY.fit(y)  # fit on the training targets only; calling fit again on y_test would overwrite these statistics
y = scalerY.transform(y)
y_test = scalerY.transform(y_test)

The activation function in the last layer is linear.

Comments:

What is activation_fun in the last layer? – today
Further, please add the code where you normalize y and y_test. – today
@today I added the code. The activation function in the last layer is linear. – Artur
I've also tried sigmoid as the activation_fun in the last layer, with similar results. – Artur
Well, first, don't normalize y and y_test. It does not make sense at all, since you want to approximate the function fun, so you need to preserve the real values; not to mention that such normalization by scaling completely distorts them. Second, if you want to normalize something, it should be x and x_test. Third, adjust the learning rate of the optimizer (i.e. keras.optimizers.Adam(lr=0.01)) or try a different optimizer like RMSprop. Fourth, it is best practice to add validation data to make sure you are not over-fitting. Let me know if any of these does not make sense. – today
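A sketch of how that comment's advice could be combined (my own illustration, not code from the thread): scale x instead of y, lower the Adam learning rate, and hold out validation data:

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam
from sklearn.preprocessing import MinMaxScaler

# Scale the inputs, not the targets; fit on the training data only
scalerX = MinMaxScaler((0, 1))
x_scaled = scalerX.fit_transform(x)
x_test_scaled = scalerX.transform(x_test)

model = Sequential()
model.add(Dense(40, input_dim=1, kernel_initializer='he_normal', activation='relu'))
model.add(Dense(20, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1, activation='linear'))
model.compile(loss='mse', optimizer=Adam(lr=0.01))

# validation_split holds out part of the training data to watch for over-fitting
history = model.fit(x_scaled, y, batch_size=32, epochs=5000,
                    validation_split=0.2, verbose=0)
preds = model.predict(x_test_scaled)

One caveat: validation_split takes the last fraction of the samples before shuffling, so with x sorted by np.arange the held-out points all come from the tail of the interval; shuffling x and y together beforehand avoids that.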

1 Answer

1 vote

If you still have problems with the accuracy, try using a lot more data points, e.g. 10k-100k samples.
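For instance (the exact density is a free choice, not something the answer specifies), the training grid over the same interval could simply be made finer:

# ~100k training points over [0, 20) instead of the original 2k
x = np.linspace(0, 20, 100000).reshape(-1, 1)
y = np.array(list(map(fun, x))).reshape(-1, 1)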