in Python by (47.6k points)

What are all the differences between numpy.random.rand and numpy.random.randn?

From the docs, I understand that the only difference between them is the probability distribution each number is drawn from; the output shape (dimensions) and data type (float) are the same. Believing this was the only practical difference made debugging my neural network very hard.

Specifically, I am trying to re-implement the neural network from the Neural Networks and Deep Learning book by Michael Nielsen. The original code can be found here. My implementation is the same as the original, except that I defined and initialized the weights and biases with numpy.random.rand in the __init__ method, rather than numpy.random.randn as in the original.

However, my version that uses numpy.random.rand to initialize the weights and biases doesn't work: the network won't learn, and the weights and biases do not change.

What difference between the two random functions causes this behaviour?

2 Answers

by (106k points)

If you look at the NumPy documentation, you will see that numpy.random.randn draws samples from the standard normal distribution (mean 0, standard deviation 1), while numpy.random.rand draws from the uniform distribution on [0, 1).
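As a quick sanity check (a minimal sketch; the seed and sample size are arbitrary choices), you can sample from both functions and compare their summary statistics:

```python
import numpy as np

np.random.seed(0)  # arbitrary seed, just for reproducibility

u = np.random.rand(100000)   # uniform on [0, 1): all values non-negative
n = np.random.randn(100000)  # standard normal: centred on 0, unbounded

print(u.min() >= 0.0, u.max() < 1.0)  # rand stays inside [0, 1)
print(u.mean())  # close to 0.5 for the uniform samples
print(n.mean(), n.std())  # close to 0 and 1 for the normal samples
```

Note that every value from rand is positive, while roughly half the values from randn are negative.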

The main reason this matters is the activation function, especially since you use the sigmoid. Because rand produces only positive weights, the weighted input to each sigmoid neuron is a sum of many positive terms. That pushes the pre-activation far into the flat, saturated tail of the sigmoid, where its derivative is nearly zero, so backpropagation produces vanishingly small gradients and the weights and biases barely move.
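To see the saturation effect concretely, here is a small sketch (the layer width of 784 and the seed are assumptions, chosen to resemble an MNIST-sized input layer like the one in Nielsen's book):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # derivative of the sigmoid: s * (1 - s)
    s = sigmoid(z)
    return s * (1 - s)

np.random.seed(1)  # arbitrary seed
n_in = 784  # assumed input width, as for MNIST images

w_rand = np.random.rand(n_in)    # all-positive weights, mean ~0.5
w_randn = np.random.randn(n_in)  # zero-centred weights
x = np.random.rand(n_in)         # positive inputs, e.g. pixel intensities

z_rand = w_rand @ x    # sum of ~784 positive terms: very large
z_randn = w_randn @ x  # positive and negative terms mostly cancel

print(z_rand, sigmoid_prime(z_rand))    # huge z, gradient effectively 0
print(z_randn, sigmoid_prime(z_randn))  # much smaller z, usable gradient
```

With all-positive weights, the pre-activation is on the order of a couple of hundred, and the sigmoid's derivative there underflows to essentially zero, which is exactly the "weights never change" symptom described in the question.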

by (37.3k points)

First of all, try to understand the difference between ‘rand’ and ‘randn’.

‘rand’ gives you values in the range [0, 1), like 0.24 or 0.47, while ‘randn’ gives you both positive and negative values, drawn from a normal distribution centred at 0 (it is unbounded, not confined to a fixed range).

In your case, using rand initialized the weights with positive values only, so the network's neurons could not learn different features effectively, which is why your weights and biases did not change.

That is why the original code uses ‘randn’: the weights start with both positive and negative values, which helps the neurons learn different features and perform effectively.
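To illustrate the two initializations side by side (a sketch assuming the [784, 30, 10] layer sizes Nielsen uses as an example; the seed is arbitrary, and the zero-centred uniform variant is an illustrative alternative, not part of the original code):

```python
import numpy as np

sizes = [784, 30, 10]  # assumed layer sizes, as in Nielsen's example network
np.random.seed(2)      # arbitrary seed

# Original approach: zero-mean Gaussian weights and biases
biases_randn = [np.random.randn(y, 1) for y in sizes[1:]]
weights_randn = [np.random.randn(y, x) for x, y in zip(sizes[:-1], sizes[1:])]

# The asker's approach: all-positive uniform weights (network stalls)
weights_rand = [np.random.rand(y, x) for x, y in zip(sizes[:-1], sizes[1:])]

# A zero-centred uniform would also avoid the all-positive problem
weights_centered = [np.random.rand(y, x) - 0.5
                    for x, y in zip(sizes[:-1], sizes[1:])]

for name, w in [("randn", weights_randn), ("rand", weights_rand),
                ("rand - 0.5", weights_centered)]:
    print(name, "mean:", w[0].mean(), "min:", w[0].min())
```

The rand weights have mean about 0.5 and no negative entries; the other two are centred near zero, which is the property that lets the sign of each weight move freely during training.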
