The purpose of the activation function is to introduce non-linearity into the output of a neuron.

Without an activation function, a neural network is essentially just a linear regression model. The activation function applies a non-linear transformation to the input, making the network capable of learning and performing more complex tasks.

Put another way: without a non-linear activation function, it doesn't matter how many hidden layers we stack in the neural network; together they behave like a single linear layer. A neuron cannot learn complex patterns with only a linear function attached to it; it needs a non-linear activation function to learn from the error gradient.
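This collapse of stacked linear layers can be checked directly in NumPy. The sketch below (weights are arbitrary random matrices, biases omitted for brevity) shows that two activation-free layers are equivalent to one linear layer whose weight matrix is the product of the two:

```python
import numpy as np

# Two "hidden layers" with no activation function: each is just a
# linear map x -> W @ x (biases omitted for brevity).
W1 = np.random.rand(4, 3)
W2 = np.random.rand(2, 4)

x = np.random.rand(3)

# Passing the input through both layers...
two_layer_output = W2 @ (W1 @ x)

# ...is identical to a single linear layer with weights W2 @ W1:
single_layer_output = (W2 @ W1) @ x

print(np.allclose(two_layer_output, single_layer_output))  # True
```

Inserting a non-linearity such as `np.tanh` between the two layers breaks this equivalence, which is exactly what lets deeper networks represent more than a single linear map.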

Example (the input values come from a random draw, so they will differ on each run; the outputs shown are the element-wise tanh of the inputs shown):

>>> import numpy as np

>>> input_vector = np.random.rand(10)

>>> input_vector

array([0.61, 0.82, 0.95, 0.  , 0.79, 0.55, 0.35, 0.27, 0.49, 0.15])

>>> output_vector = np.tanh(input_vector)

>>> output_vector

array([0.54, 0.68, 0.74, 0.  , 0.66, 0.5 , 0.34, 0.26, 0.45, 0.15])
