I plan to use the Nguyen-Widrow Algorithm for an NN with multiple hidden layers. While researching, I found a lot of ambiguities and I wish to clarify them.
The following is pseudo-code for the Nguyen-Widrow algorithm:

    Initialize all weights of the hidden layers with random values
    For each hidden layer {
        beta = 0.7 * Math.pow(hiddenNeurons, 1.0 / numberOfInputs)
        For each neuron {
            For each weight {
                Adjust the weight by dividing it by the norm of that
                neuron's weight vector and multiplying it by beta
            }
        }
    }
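To make the pseudo-code concrete, here is a minimal runnable sketch of that initialization for a single hidden layer, written under the per-layer reading (i.e. assuming `hiddenNeurons` is the size of that one layer and `inputs` is the size of the layer feeding into it); the class and method names are my own, not from any particular library:

```java
import java.util.Random;

// Sketch of Nguyen-Widrow initialization for one hidden layer,
// assuming weights[neuron][input] and the per-layer reading of
// "hiddenNeurons". This is an illustration, not a library API.
public class NguyenWidrow {

    static void initialize(double[][] weights, Random rng) {
        int hiddenNeurons = weights.length;   // neurons in this hidden layer
        int inputs = weights[0].length;       // neurons feeding into it
        double beta = 0.7 * Math.pow(hiddenNeurons, 1.0 / inputs);

        for (double[] neuronWeights : weights) {
            // Step 1: start from small random values in [-0.5, 0.5]
            for (int i = 0; i < inputs; i++) {
                neuronWeights[i] = rng.nextDouble() - 0.5;
            }
            // Step 2: Euclidean norm of this neuron's weight vector
            double norm = 0.0;
            for (double w : neuronWeights) norm += w * w;
            norm = Math.sqrt(norm);
            // Step 3: rescale so the weight vector has length beta
            for (int i = 0; i < inputs; i++) {
                neuronWeights[i] = beta * neuronWeights[i] / norm;
            }
        }
    }

    public static void main(String[] args) {
        // First hidden layer of a 3-2-2-2-3 network: 2 neurons, 3 inputs each
        double[][] weights = new double[2][3];
        initialize(weights, new Random(42));
        for (double[] row : weights) {
            double norm = 0.0;
            for (double w : row) norm += w * w;
            System.out.printf("norm = %.4f%n", Math.sqrt(norm));
        }
    }
}
```

After initialization every neuron's weight vector has norm exactly `beta`, which is the point of the rescaling step.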
I just wanted to clarify whether the value of hiddenNeurons is the size of the particular hidden layer or the size of all the hidden layers within the network. I got mixed up by viewing various sources.
In other words, if I have a network (3-2-2-2-3) (index 0 is the input layer, index 4 is the output layer), would the value hiddenNeurons be:
NumberOfNeuronsInLayer(1) + NumberOfNeuronsInLayer(2) + NumberOfNeuronsInLayer(3)
Or just
NumberOfNeuronsInLayer(i), where i is the current Layer I am at
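To show that the two readings really do disagree (without claiming which one is correct), here is a quick hypothetical calculation of `beta` for the first hidden layer of the 3-2-2-2-3 network above under each interpretation:

```java
// Compares the two readings of "hiddenNeurons" for the 3-2-2-2-3
// network, at hidden layer 1 (2 neurons, fed by 3 inputs).
// Illustrative only; it does not settle which reading is intended.
public class BetaReadings {
    public static void main(String[] args) {
        int inputs = 3;             // neurons in layer 0 (the input layer)
        int currentLayer = 2;       // neurons in layer 1 only
        int allHidden = 2 + 2 + 2;  // neurons in layers 1, 2 and 3 combined

        double betaPerLayer = 0.7 * Math.pow(currentLayer, 1.0 / inputs);
        double betaAllLayers = 0.7 * Math.pow(allHidden, 1.0 / inputs);

        System.out.printf("per-layer:  %.4f%n", betaPerLayer);
        System.out.printf("all-layers: %.4f%n", betaAllLayers);
    }
}
```

The two values differ noticeably (roughly 0.88 versus 1.27), so the choice matters for the initial weight scale.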
EDIT:
So, the hiddenNeurons value would be the size of the current hidden layer, and the input value would be the size of the previous layer?