23
votes

I have implemented a simple neural network framework which only supports multi-layer perceptrons and simple backpropagation. It works okay-ish for linear classification, and the usual XOR problem, but for sine function approximation the results are not that satisfying.

I'm basically trying to approximate one period of the sine function with one hidden layer consisting of 6-10 neurons. The network uses hyperbolic tangent as the activation function for the hidden layer and a linear function for the output. The result remains quite a rough estimate of the sine wave and takes a long time to compute.

I looked at Encog for reference, but even with that I fail to get it to work with simple backpropagation (switching to resilient propagation makes it better, but the result is still way worse than the super slick R script provided in this similar question). So am I actually trying to do something that's not possible? Is it not possible to approximate sine with simple backpropagation (no momentum, no dynamic learning rate)? What is the actual method used by the neural network library in R?

EDIT: I know that it is definitely possible to find a good-enough approximation even with simple backpropagation (if you are incredibly lucky with your initial weights), but I was actually more interested in whether this is a feasible approach. The R script I linked to just seems to converge incredibly fast and robustly (in 40 epochs with only a few training samples) compared to my implementation or even Encog's resilient propagation. I'm just wondering if there's something I can do to improve my backpropagation algorithm to get that same performance, or do I have to look into some more advanced learning method?
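
For reference, here is a minimal NumPy sketch of the setup described above (one tanh hidden layer, linear output, plain batch gradient descent with a fixed learning rate and no momentum). The hidden-layer size, learning rate, epoch count, and sample count are illustrative assumptions; it only illustrates the setup, and slow or unreliable convergence with these plain settings is exactly the behaviour in question:

import numpy as np

# Training data: one period of sine (sample count is an assumption)
rng = np.random.default_rng(0)
X = np.linspace(0, 2 * np.pi, 50).reshape(-1, 1)
Y = np.sin(X)

# One hidden tanh layer (8 neurons, within the 6-10 range mentioned), linear output
n_hidden = 8
W1 = rng.normal(0, 0.5, (1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1))
b2 = np.zeros(1)

lr = 0.01  # fixed learning rate, no momentum
for epoch in range(20000):
    # Forward pass
    h = np.tanh(X @ W1 + b1)   # hidden activations
    out = h @ W2 + b2          # linear output
    err = out - Y

    # Backward pass (mean squared error)
    grad_out = 2 * err / len(X)
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0)
    grad_h = grad_out @ W2.T * (1 - h ** 2)   # tanh derivative
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    # Plain gradient descent update
    W1 -= lr * grad_W1; b1 -= lr * grad_b1
    W2 -= lr * grad_W2; b2 -= lr * grad_b2

print('final MSE:', np.mean(err ** 2))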

4
Did you ever get it to work? Facing the same problem. – Lex
Don't think so, but I can't really recall all the details anymore, as this was 4 years ago. The nnet package mentioned above is implemented in C; it's only about 700 lines of code, with some R wrapping on top of it. Perhaps looking into that will give you some ideas. – Muton
I am implementing this in C/C++. My network uses 6 neurons in a single hidden layer, with tanh activation in the hidden layer and linear activation in the output layer. It converges in 1600 epochs without momentum or an optimizer. Is that what you are asking for, or is 40 epochs the benchmark for your target? I am working on adding an optimizer and momentum to my network and will share it here. Should I share my method so far? – Pe Dro
If you have a well-performing model and can provide a clear and concise answer, please share your work and I will accept the answer. – Muton

4 Answers

10
votes

This can be rather easily implemented using modern frameworks for neural networks like TensorFlow.

For example, a two-layer neural network using 100 neurons per layer trains in a few seconds on my computer and gives a good approximation:

[Plot: the trained network's approximation of the sine function]

The code is also quite simple:

import tensorflow as tf
import numpy as np

with tf.name_scope('placeholders'):
    x = tf.placeholder('float', [None, 1])
    y = tf.placeholder('float', [None, 1])

with tf.name_scope('neural_network'):
    x1 = tf.contrib.layers.fully_connected(x, 100)
    x2 = tf.contrib.layers.fully_connected(x1, 100)
    result = tf.contrib.layers.fully_connected(x2, 1,
                                               activation_fn=None)

    loss = tf.nn.l2_loss(result - y)

with tf.name_scope('optimizer'):
    train_op = tf.train.AdamOptimizer().minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    # Train the network
    for i in range(10000):
        xpts = np.random.rand(100) * 10
        ypts = np.sin(xpts)

        _, loss_result = sess.run([train_op, loss],
                                  feed_dict={x: xpts[:, None],
                                             y: ypts[:, None]})

        print('iteration {}, loss={}'.format(i, loss_result))
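
Note that the snippet above uses the TensorFlow 1.x API (tf.placeholder, tf.contrib.layers, tf.Session), which is gone in TensorFlow 2.x. A roughly equivalent sketch for TF 2.x with Keras might look like the following; the batch size and epoch count are assumptions, and the loss here is mean squared error rather than tf.nn.l2_loss:

import tensorflow as tf
import numpy as np

# Same architecture: two hidden layers of 100 ReLU units, linear output
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(100, activation='relu'),
    tf.keras.layers.Dense(100, activation='relu'),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer='adam', loss='mse')

# Random samples from [0, 10), as in the original training loop
x_train = np.random.rand(10000, 1) * 10
y_train = np.sin(x_train)

model.fit(x_train, y_train, batch_size=100, epochs=50, verbose=0)
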
3
votes

You're definitely not attempting the impossible. Neural networks are universal approximators: for any continuous function F on a bounded domain and any error tolerance E > 0, there exists some neural network (a single hidden layer is enough) that approximates F to within error E.

Of course, finding such a network is a completely different matter, and the best I can tell you is trial and error. Here's the basic procedure (a rough code sketch of steps 1-4 follows the list):

  1. Split your data into two parts: a training set (~2/3) and a testing set (~1/3).
  2. Train your network on all of the items in the training set.
  3. Test (but don't train) your network on all the items in the testing set and record the average error.
  4. Repeat steps 2 and 3 until the testing error reaches a minimum and starts rising again (a sign of overfitting: the network is getting very good at the training data to the detriment of everything else), or until the overall error ceases to notably decrease (implying the network is as good as it's going to get).
  5. If the error at this point is acceptably low, you're done. If not, your network isn't complex enough to handle the function you're training it for; add more hidden neurons and go back to the beginning...
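
As a rough sketch of steps 1-4, here is a possible loop using scikit-learn's MLPRegressor; the hidden-layer size, learning rate, and patience threshold are assumptions, and partial_fit stands in for one training pass per round:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Step 1: split the data into a training set (~2/3) and a testing set (~1/3)
X = np.linspace(0, 2 * np.pi, 300).reshape(-1, 1)
y = np.sin(X).ravel()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=1/3, random_state=0)

# Hidden layer size is a starting guess (step 5: grow it if the error stays high)
net = MLPRegressor(hidden_layer_sizes=(10,), solver='adam', learning_rate_init=0.01)

best_err, patience = np.inf, 0
for epoch in range(2000):
    net.partial_fit(X_train, y_train)                    # step 2: train on the training set
    err = np.mean((net.predict(X_test) - y_test) ** 2)   # step 3: measure the testing error
    if err < best_err:
        best_err, patience = err, 0
    else:
        patience += 1
    if patience > 50:   # step 4: stop once the testing error stops improving
        break

print('best test MSE:', best_err)
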

Sometimes changing your activation function can make a difference, too (just don't use linear, as it negates the power of adding more layers). But again, it'll be trial and error to see what works best.

Hope that helps (and sorry I can't be more useful)!

PS: I also know it's possible since I've seen someone approximate sine with a network. I want to say she wasn't using a sigmoid activation function, but I can't guarantee my memory on that count...

0
votes

A similar implementation with sklearn.neural_network:

from sklearn.neural_network import MLPRegressor
import numpy as np

# Wrap a 1-D array into the (n_samples, 1) shape sklearn expects
f = lambda x: [[x_] for x_ in x]

# Noisy samples of sine on [0, 10)
noise_level = 0.1
X_train_ = np.arange(0, 10, 0.2)
real_sin = np.sin(X_train_)
y_train = real_sin + np.random.normal(0, noise_level, len(X_train_))

# Five hidden layers of 100 neurons each
N = 100
regr = MLPRegressor(hidden_layer_sizes=tuple([N] * 5)).fit(f(X_train_), y_train)
predicted_sin = regr.predict(f(X_train_))

The result looks something like this: [plot: predicted sin with NN]
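
A possible way to reproduce such a plot, assuming matplotlib and reusing the variables defined in the snippet above:

import matplotlib.pyplot as plt

plt.plot(X_train_, real_sin, label='sin(x)')
plt.scatter(X_train_, y_train, s=10, label='noisy training data')
plt.plot(X_train_, predicted_sin, label='MLP prediction')
plt.legend()
plt.show()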

0
votes

I've tried a lot of things with Keras (different activations, layer counts, etc.).

LSTMs work, but not as a direct f(x) -> y mapping; they need sequences of y as input.

I also tried @Jonas Adler's answer, and it works (I also converted it to TF2), but drawing 100 fresh random samples from 0-10 radians at every training step doesn't fit my current workflow.

After creating a fixed training set and test set, the same code did not work well (very low accuracy).

My last attempt was NEAT; check out the repo at the link below. It approximates the full 360 degrees with about 80% accuracy from 45 samples, using 16 hidden units. I haven't tuned the parameters or waited through enough generations to increase the accuracy, so it could probably be better. And finally, neat-python has a "sin" activation function, but using that would be cheating.

This is the experiment: https://github.com/firatsarlar/neat_sin_exp/blob/main/sin_exp.ipynb