I'm creating a neural network using the backpropagation technique for learning.
I understand we need to find the derivative of the activation function used. I'm using the standard sigmoid function
f(x) = 1 / (1 + e^(-x))
and I've seen that its derivative is
dy/dx = f'(x) = f(x) * (1 - f(x))
This may be a daft question, but does this mean that we have to pass x through the sigmoid function twice when evaluating the derivative, so that it would expand to
dy/dx = f'(x) = 1 / (1 + e^(-x)) * (1 - (1 / (1 + e^(-x))))
or is it simply a matter of taking the already calculated output of f(x), which is the output of the neuron, and substituting that value for f(x)?
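To make the question concrete, here is a minimal Python sketch (the function names and the example value x = 0.7 are just placeholders I made up) comparing the two ways of evaluating the derivative: recomputing the sigmoid from x versus reusing the neuron's already-computed output.

```python
import math

def sigmoid(x):
    """Standard logistic sigmoid: f(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + math.exp(-x))

def derivative_from_output(y):
    """Derivative written in terms of the cached output y = f(x)."""
    return y * (1.0 - y)

def derivative_from_input(x):
    """Derivative recomputed from the raw input x (evaluates the sigmoid again)."""
    fx = sigmoid(x)
    return fx * (1.0 - fx)

if __name__ == "__main__":
    x = 0.7           # hypothetical pre-activation value
    y = sigmoid(x)    # neuron output saved during the forward pass
    print(derivative_from_output(y))  # reuses the cached output
    print(derivative_from_input(x))   # recomputes f(x); prints the same number
```

Both calls print the same value, which is what I would expect if the two forms are just substitutions of one another; I want to confirm that reusing the cached output is the intended way to do it in backpropagation.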