# Weight Initialisation

I plan to use the Nguyen-Widrow Algorithm for an NN with multiple hidden layers. While researching, I found a lot of ambiguities and I wish to clarify them.

The following is pseudo-code for the Nguyen-Widrow Algorithm

```
Initialize all weights of the hidden layers with random values
For each hidden layer {
    beta = 0.7 * Math.pow(hiddenNeurons, 1.0 / numberOfInputs)
    For each synapse {
        For each weight {
            Adjust the weight by dividing it by the norm of the
            neuron's weight vector and multiplying by beta
        }
    }
}
```

I just wanted to clarify whether the value of hiddenNeurons is the size of the particular hidden layer or the size of all the hidden layers within the network. I got mixed up by viewing various sources.

In other words, if I have a network (3-2-2-2-3) (index 0 is the input layer, index 4 is the output layer), would the value hiddenNeurons be:

NumberOfNeuronsInLayer(1) + NumberOfNeuronsInLayer(2) + NumberOfNeuronsInLayer(3)

Or just

NumberOfNeuronsInLayer(i), where i is the current Layer I am at
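To make the two readings concrete for the 3-2-2-2-3 network, here is the beta each one would give for hidden layer 1 (a quick arithmetic sketch; the variable names just mirror the pseudo-code):

```python
# Reading 1: hiddenNeurons = all hidden neurons combined (layers 1, 2 and 3),
# with 3 inputs coming from the input layer.
beta_total = 0.7 * (2 + 2 + 2) ** (1.0 / 3)

# Reading 2: hiddenNeurons = size of the current hidden layer only (layer 1),
# again with 3 inputs.
beta_layer = 0.7 * 2 ** (1.0 / 3)

print(beta_total, beta_layer)
```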

EDIT:

So, the hiddenNeurons value would be the size of the current hidden layer, and the number of inputs would be the size of the previous layer?

## 1 Answer


Generally, one hidden layer is enough in most cases; networks with more hidden layers can often be reduced to a single-hidden-layer network. This is why the formula is usually described in terms of the number of neurons in *the* hidden layer and its inputs. When you add a second hidden layer, the formula simply applies to it as well, with its inputs being the outputs of the previous layer.

To answer your question: the algorithm initialises the weights of every hidden layer in the network, but its parameters are taken per layer.

For each hidden layer, compute beta = 0.7 * h^(1/n), where h is the number of neurons in the current layer and n is the number of inputs to that layer; in other words, 0.7 times the nth root of h.

Then, for each neuron, adjust each of its weights by dividing by the norm of that neuron's weight vector and multiplying by beta.

And yes, the hiddenNeurons value is the size of the current hidden layer, and the number of inputs is the size of the previous layer.
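The per-layer reading can be sketched in NumPy as follows. This is a minimal sketch based on the pseudo-code in the question, not a reference implementation; the function name and the initial uniform range are my own choices:

```python
import numpy as np

def nguyen_widrow(n_inputs, n_hidden, rng=None):
    """Nguyen-Widrow initialisation for one layer with n_inputs inputs
    and n_hidden neurons, following the pseudo-code in the question."""
    if rng is None:
        rng = np.random.default_rng(0)
    # Start from small random weights, one row per neuron in this layer.
    w = rng.uniform(-0.5, 0.5, size=(n_hidden, n_inputs))
    # beta uses the CURRENT layer's size and ITS number of inputs.
    beta = 0.7 * n_hidden ** (1.0 / n_inputs)
    # Rescale each neuron's weight vector so its norm equals beta.
    norms = np.linalg.norm(w, axis=1, keepdims=True)
    return beta * w / norms

# Network 3-2-2-2-3: each layer's "inputs" is the previous layer's size.
layer_sizes = [3, 2, 2, 2, 3]
weights = [nguyen_widrow(n_in, n_out)
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
for w in weights:
    print(w.shape, np.linalg.norm(w, axis=1))  # every row norm equals beta
```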
