+3 votes
2 views
in Machine Learning by (4.2k points)

In least-squares models, the cost function is defined as the squared difference between the predicted value and the actual value.

In logistic regression, we change the cost function to a logarithmic one, instead of defining it as the squared difference between the sigmoid function (the output value) and the actual output.

Is it OK to change and define our own cost function to determine the parameters?

1 Answer

+2 votes
by (6.8k points)


Yes, you can define your own cost function. In logistic regression the hypothesis hθ(x) outputs a value between 0 and 1 (a predicted probability), and the cost is defined piecewise on the label y. One reason the logarithmic cost is used instead of the squared error: plugging the sigmoid into the squared-error cost produces a non-convex function with many local minima, while the logarithmic cost below is convex, so gradient descent can safely minimize it.

The Cost function for Logistic Regression is defined as:

Cost(hθ(x), y) = −log(hθ(x))        if y = 1
Cost(hθ(x), y) = −log(1 − hθ(x))    if y = 0
Here hθ(x) is the predicted value for input x and y is the actual label; two scenarios arise depending on the value of y.
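
As a minimal sketch in Python (the function names logistic_cost and cross_entropy are my own), the piecewise definition above can be written directly, and the two branches also fold into the single cross-entropy expression that is usually minimized over the training set:

import numpy as np

def logistic_cost(h, y):
    """Piecewise logistic regression cost for one example.

    h -- predicted probability hθ(x), in (0, 1)
    y -- actual label, 0 or 1
    """
    if y == 1:
        return -np.log(h)        # grows to infinity as h -> 0
    else:
        return -np.log(1 - h)    # grows to infinity as h -> 1

def cross_entropy(h, y):
    # Equivalent single-expression form covering both cases
    return -(y * np.log(h) + (1 - y) * np.log(1 - h))

print(logistic_cost(0.9, 1))   # small cost: confident and correct
print(logistic_cost(0.1, 1))   # large cost: confident but wrong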

When y = 1, the cost approaches 0 as hθ(x) approaches 1. Conversely, the cost grows to infinity as hθ(x) approaches 0.

You can see this clearly in the plot below, left side. This is a desirable property: we want a bigger penalty when the algorithm predicts something far from the actual value.

If the label is y = 1 but the algorithm predicts hθ(x) = 0, the prediction is completely wrong and the cost is unbounded.

Conversely, the same intuition applies when y = 0, depicted in the plot below, right side: the penalty grows when the label is y = 0 but the algorithm predicts hθ(x) close to 1.

[Figure: The cost function for logistic regression (left: −log(hθ(x)) for y = 1; right: −log(1 − hθ(x)) for y = 0)]
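
If the original figure does not render for you, a quick sketch with matplotlib (assuming it is installed) reproduces the two panels:

import numpy as np
import matplotlib.pyplot as plt

h = np.linspace(0.001, 0.999, 500)   # predicted probability hθ(x)

fig, (left, right) = plt.subplots(1, 2, figsize=(8, 3))
left.plot(h, -np.log(h))             # cost when y = 1
left.set_title('y = 1: -log(hθ(x))')
right.plot(h, -np.log(1 - h))        # cost when y = 0
right.set_title('y = 0: -log(1 - hθ(x))')
for ax in (left, right):
    ax.set_xlabel('hθ(x)')
    ax.set_ylabel('cost')
plt.tight_layout()
plt.show()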
