in AI and Deep Learning by (50.2k points)

I understand the role of the bias node in neural nets, and why it is important for shifting the activation function in small networks. My question is this: is the bias still important in very large networks (more specifically, a convolutional neural network for image recognition using the ReLU activation function, 3 convolutional layers, 2 hidden layers, and over 100,000 connections), or does its effect get lost in the sheer number of activations occurring?

The reason I ask is that in the past I have built networks in which I forgot to implement a bias node, yet upon adding one saw a negligible difference in performance. Could this have been down to chance, in that the specific data set did not require a bias? Do I need to initialise the bias with a larger value in large networks? Any other advice would be much appreciated.

1 Answer

by (107k points)

The bias node in an NN (neural network) is a node that is always 'on'. That is, its value is set to 1 regardless of the data in a given pattern. It is analogous to the intercept in a regression model and serves the same function. If the NN does not have a bias node in a given layer, it will not be able to produce output in the next layer that differs from 0 when the feature values are 0.
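To make the last point concrete, here is a minimal NumPy sketch of a single dense layer (the weight and bias values are arbitrary, chosen only for illustration): with an all-zero input, the bias-free layer can only output the zero vector, while the biased layer can shift away from it.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# One dense layer: 4 inputs -> 3 units (illustrative values only).
W = rng.normal(size=(4, 3))       # connection weights
b = np.array([0.5, -0.2, 0.1])    # arbitrary bias vector

x_zero = np.zeros(4)              # an all-zero input pattern

out_no_bias = relu(x_zero @ W)        # always the zero vector
out_with_bias = relu(x_zero @ W + b)  # can differ from zero

print(out_no_bias)    # [0. 0. 0.]
print(out_with_bias)  # [0.5 0.  0.1]
```

Without the bias, no choice of `W` can change the output for a zero input; the bias is the only parameter that can.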

If your inputs and outputs have the same range, say from -1 to +1, then the bias term will probably not be useful.

You could have a look at the weight of the bias node in the experiment you mention. Either it is very low, which probably means the inputs and outputs are already centered, or it is significant, in which case I would bet that the variance of the other weights is reduced, leading to a more stable (and less prone to overfitting) neural net.
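The "centered data needs little bias" point can be checked directly. This sketch fits the same linear relationship by least squares on centered versus offset targets (synthetic data, invented here for illustration) and compares the learned intercepts:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_linear(x, y):
    # Least-squares fit of y = w*x + b, with b as an explicit bias column.
    X = np.column_stack([x, np.ones_like(x)])
    w, b = np.linalg.lstsq(X, y, rcond=None)[0]
    return w, b

x = rng.normal(size=200)  # centered inputs (mean near 0)
y_centered = 2.0 * x + rng.normal(scale=0.1, size=200)        # no offset
y_shifted = 2.0 * x + 5.0 + rng.normal(scale=0.1, size=200)   # offset of 5

_, b_centered = fit_linear(x, y_centered)
_, b_shifted = fit_linear(x, y_shifted)

print(abs(b_centered))  # near 0: the bias is barely needed
print(abs(b_shifted))   # near 5: the bias carries the whole offset
```

The same diagnostic applies to a trained network: a bias weight near zero suggests the data were effectively centered, which would explain seeing little difference when you added one.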
