This question can also be answered by studying machine learning algorithms; the concept is easiest to understand through logistic regression.
Yes, you can define your own loss function based on the value of y. Since y only takes the values 0 and 1, there is one case for each value. The cost function for logistic regression is defined as:
Cost(hθ(x),y)= {
−log(hθ(x)) if y = 1
−log(1−hθ(x)) if y = 0
}
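The piecewise definition above translates directly into code. The following is a minimal sketch; the function name logistic_cost and the use of NumPy are my own choices for illustration, not anything prescribed by the question:

```python
import numpy as np

def logistic_cost(h, y):
    """Per-example cost for logistic regression.

    h: predicted value h_theta(x), a probability in (0, 1)
    y: actual label, either 0 or 1
    """
    if y == 1:
        return -np.log(h)        # grows to infinity as h approaches 0
    return -np.log(1.0 - h)      # grows to infinity as h approaches 1
```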
In Cost(hθ(x), y), hθ(x) is the predicted value produced by the hypothesis for input x, and y is the actual label. Two scenarios are explained below, one for each value of y.
In the case of y = 1, the cost approaches 0 as hθ(x) approaches 1. Conversely, the cost to pay grows to infinity as hθ(x) approaches 0.
You can see this clearly in plot 2 below, left side. This is a desirable property: we want a bigger penalty when the algorithm predicts something far away from the actual value.
If the label is y=1 but the algorithm predicts hθ(x)=0, the outcome is completely wrong.
Conversely, the same intuition applies when y = 0, depicted in plot 2 below, right side: the penalty gets bigger when the label is y = 0 but the algorithm predicts hθ(x) = 1.
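To make both scenarios concrete, here is a quick numerical check (the probabilities 0.99, 0.5 and 0.01 are just illustrative values I picked):

```python
import math

# y = 1: the cost shrinks as h_theta(x) approaches 1 and blows up near 0
for h in (0.99, 0.5, 0.01):
    print(f"y=1, h={h}: cost = {-math.log(h):.2f}")      # 0.01, 0.69, 4.61

# y = 0: the mirror image, the cost blows up as h_theta(x) approaches 1
for h in (0.01, 0.5, 0.99):
    print(f"y=0, h={h}: cost = {-math.log(1 - h):.2f}")  # 0.01, 0.69, 4.61
```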
Plot 2. The cost function for logistic regression (left: y = 1; right: y = 0).
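If you want to reproduce the two panels of plot 2 yourself, a sketch along these lines (assuming numpy and matplotlib are available) draws both curves:

```python
import numpy as np
import matplotlib.pyplot as plt

h = np.linspace(0.001, 0.999, 500)          # predicted values h_theta(x)

fig, (left, right) = plt.subplots(1, 2, figsize=(9, 3.5))

left.plot(h, -np.log(h))                    # cost when y = 1
left.set_title("y = 1")
left.set_xlabel("h_theta(x)")
left.set_ylabel("cost")

right.plot(h, -np.log(1 - h))               # cost when y = 0
right.set_title("y = 0")
right.set_xlabel("h_theta(x)")

plt.tight_layout()
plt.show()
```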