I'm learning how neural networks work. To be clear: I don't want to use any built-in functions; for my own understanding I want to build a perceptron from scratch. As an example, I built a perceptron according to this schematic:
All neurons have identity activation functions except neurons 6 and 7, which should have the logistic function 1/(1+e^(-x)). Neuron 1 is connected directly to neuron 6, and neuron 2 is connected directly to the output neuron 8.
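To make the per-neuron activations explicit, I would store one function handle per neuron, something like this (just a sketch; the name act is my own choice):

% One activation handle per neuron (the name "act" is my own choice):
act = repmat({@(x) x}, 1, 8);     % identity on every neuron
act{6} = @(x) 1./(1+exp(-x));     % logistic on neuron 6
act{7} = @(x) 1./(1+exp(-x));     % logistic on neuron 7
% applied after a propagation step, e.g.: for i = 1:8, O(i) = act{i}(O(i)); end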
My problem now is how to implement this special situation. I get correct values in C (the accumulated output vector) when all neurons are identities, but as soon as I introduce the logistic functions on neurons 6 and 7, I get wrong values. (A hand calculation of what I expect follows after the code.)
Is it possible to find a general algorithm that covers these special situations (different activation functions, connections that skip a layer, feedback connections)? I would like it to be as general and mathematical as possible; workarounds should be avoided!
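In other words, I am looking for a single update rule of the form

o_i(t+1) = f_i( sum_j w_ji * o_j(t) )

where f_i is the activation function of neuron i (identity for most neurons, the logistic for 6 and 7), and where skip connections and feedback are expressed purely through the structure of the weight matrix W, not through special cases in the code.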
The code:
clear;
% Input layer
o1 = 1;
o2 = 1;
o3 = 1;
% Hidden layer
o4 = 0;
o5 = 0;
o6 = 0;
o7 = 0;
% Output layer
o8 = 0;
% Initialize the input vector:
O = [o1,o2,o3,o4,o5,o6,o7,o8];
C = 0;
% Weight matrix (row = source neuron, column = target neuron):
%    1 2 3 4 5 6 7 8
W = [0,0,0,1,1,1,0,0;  % 1
     0,0,0,1,1,0,0,1;  % 2
     0,0,0,1,1,0,0,0;  % 3
     0,0,0,0,0,1,1,0;  % 4
     0,0,0,0,0,1,1,0;  % 5
     0,0,0,0,0,0,0,1;  % 6
     0,0,0,0,0,0,0,1;  % 7
     0,0,0,0,0,0,0,0]; % 8
% 3 layers = 3 propagation steps:
for c = 0:2
    % Calculate the output vector at (t+1):
    A = O*W;
    % Assign it as the new input vector:
    O = A;
    % This does not seem correct:
    % I want identity functions on all neurons except neurons 6 and 7,
    % but with these two lines the final result is wrong.
    O(6) = 1/(1+exp(-O(6)));
    O(7) = 1/(1+exp(-O(7)));
    % Accumulated result of the whole net (no semicolon, prints each step):
    C = C + O
end
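For comparison, here is what I compute by hand, assuming that every neuron fires exactly once, after all of its inputs have arrived (that is my reading of the schematic):

% Hand check (assumption: each neuron computes once, after all of its
% inputs are available -- my reading of the schematic):
sig = @(x) 1./(1+exp(-x));
o1 = 1; o2 = 1; o3 = 1;    % inputs
o4 = o1 + o2 + o3;         % = 3
o5 = o1 + o2 + o3;         % = 3
o6 = sig(o1 + o4 + o5);    % sig(7), about 0.9991 (skip connection 1 -> 6)
o7 = sig(o4 + o5);         % sig(6), about 0.9975
o8 = o2 + o6 + o7          % about 2.9966 (skip connection 2 -> 8)

The loop above does not reproduce this value, so I suspect that my code applies the logistic at the wrong time steps.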