
I am working on a school project, designing a neural network (an MLP).

I built it with a GUI so it is interactive.

All of my neurons use SUM as the input (GIN) function, and the user can select the activation function for each layer.
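The setup described above could be sketched roughly like this (the function and variable names are illustrative, not from any particular library): each neuron computes a weighted sum of its inputs (the SUM input function), then applies the activation function chosen for its layer.

```python
import math

def weighted_sum(inputs, weights):
    """SUM input (GIN) function: net = sum(w_i * x_i)."""
    return sum(w * x for w, x in zip(weights, inputs))

def sigmoid(net):
    """One possible per-layer activation function."""
    return 1.0 / (1.0 + math.exp(-net))

def neuron_output(inputs, weights, activation):
    """A neuron: SUM input function followed by the layer's activation."""
    return activation(weighted_sum(inputs, weights))

# A layer shares one activation function; each neuron has its own weights.
layer_weights = [[0.5, -0.3], [0.1, 0.8]]
inputs = [1.0, 2.0]
outputs = [neuron_output(inputs, w, sigmoid) for w in layer_weights]
```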

I have a theoretical question:

  • do I set the threshold and the g and a parameters individually for each neuron, or for the entire layer?

[Image of the project so far]


1 Answer


Looks nice! You can have 3 hidden layers, but as you experiment you will see that you rarely need that many. What is your training pattern?

The answer to your question depends on your training pattern and on the purpose of the input neurons. If, for example, some input neuron receives a different type of value, you could use another threshold function, or different parameter settings, in the neurons connected to that input neuron.

But in general, it is better to feed such input into separate perceptrons. So the answer is: in theory you could preset individual properties of neurons, but in practice, with back-propagation learning, it is not needed. There are no "individual properties" of neurons; the weight values that result from your training cycles will differ every time. All initial weights can be set to small random values, while the transfer threshold and learning rate are set per layer.
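The advice above could be sketched as follows (a minimal, hypothetical initialization; the names and value ranges are my own assumptions, not from the project): weights start as small random values per connection, while the learning rate and any activation parameters are stored once per layer and shared by every neuron in it.

```python
import random

random.seed(0)  # for a reproducible example

def init_layer(n_inputs, n_neurons, learning_rate=0.1, slope=1.0):
    """Create one layer: per-connection random weights, per-layer settings."""
    return {
        # small random initial weight for each connection (assumed range)
        "weights": [[random.uniform(-0.1, 0.1) for _ in range(n_inputs)]
                    for _ in range(n_neurons)],
        # shared by all neurons in this layer, not set individually
        "learning_rate": learning_rate,
        "slope": slope,
    }

# e.g. a 4-input network with one hidden layer of 5 and an output layer of 3
network = [init_layer(4, 5), init_layer(5, 3, learning_rate=0.05)]
```

Here only the weights differ from neuron to neuron; everything the learner tunes by hand lives at the layer level, which matches the answer's recommendation.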