The **objective function** is, as the name suggests, the objective of the optimization. It can be either something we want to minimize (like a cost function) or maximize (like a probability). In general, it is the function that measures how good our current solution is, usually by returning a real number.

> **Loss function**: a penalty between prediction and label, which is also equivalent to the regularization term. An example is the hinge loss function in SVM.

The loss is not equivalent to regularization, in any sense. The loss function is a penalty between a model and the truth. This can be a predicted class-conditional distribution vs. the true label, but it can also be a data distribution vs. an empirical sample, and many more.
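As a concrete illustration of a loss as a penalty between prediction and truth, here is a minimal sketch of the hinge loss mentioned above, assuming labels in {-1, +1} and raw model scores as inputs:

```python
def hinge_loss(score, label):
    # Hinge loss as used in SVMs: max(0, 1 - y * f(x)).
    # Zero penalty when the prediction is on the correct side of the
    # margin (y * f(x) >= 1); a linearly growing penalty otherwise.
    return max(0.0, 1.0 - label * score)

print(hinge_loss(2.0, 1))    # confidently correct -> 0.0
print(hinge_loss(0.3, 1))    # correct but inside the margin -> 0.7
print(hinge_loss(-1.5, 1))   # wrong side -> 2.5
```

Note that nothing here looks at the model's weights, only at predictions vs. labels, which is exactly why the loss is not a regularization term.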

**Regularization** is a term (a penalty, a measure) meant to penalize overly complex models. In ML, or generally in statistics when dealing with estimators, you always try to balance two sources of error: variance (coming from overly complex models, overfitting) and bias (coming from overly simple models or bad learning methods, underfitting). Regularization is a technique for penalizing high-variance models during optimization in order to obtain a less overfitted one.
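In contrast to a loss, a regularizer looks only at the model itself, not at the data. A minimal sketch, using the common L2 (ridge) penalty as one example:

```python
def l2_penalty(weights, lam):
    # L2 regularization: lam * sum(w^2). Large weights (a proxy for
    # model complexity) are penalized, nudging the optimizer toward
    # simpler, lower-variance models; lam controls the trade-off.
    return lam * sum(w * w for w in weights)

print(l2_penalty([1.0, 2.0], 0.5))  # -> 2.5
```

The signature makes the point: it takes the weights and a strength `lam`, never a label or a prediction.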

**A cost function** is just an objective function that one minimizes. It can be composed of some agglomeration of loss functions and regularizers.
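Putting the pieces together, a typical cost function is the mean loss over the data plus a regularization term. A minimal sketch, assuming a simple linear scorer `score = w . x`, hinge loss, and an L2 penalty (this particular combination is the soft-margin linear SVM objective, used here purely as an example):

```python
def cost(weights, samples, lam):
    # samples: list of (features, label) pairs with label in {-1, +1}.
    def score(x):
        # Linear model: dot product of weights and features.
        return sum(w * xi for w, xi in zip(weights, x))

    # Data term: mean hinge loss over all samples.
    data_loss = sum(max(0.0, 1.0 - y * score(x))
                    for x, y in samples) / len(samples)
    # Complexity term: L2 regularization on the weights.
    reg = lam * sum(w * w for w in weights)
    return data_loss + reg

samples = [([2.0], 1), ([-1.5], -1)]
print(cost([1.0], samples, lam=0.0))  # both points outside the margin -> 0.0
print(cost([1.0], samples, lam=0.1))  # same fit, plus 0.1 * 1^2 -> 0.1
```

Minimizing this cost trades off fitting the data (the loss part) against keeping the model simple (the regularizer part), which is exactly the bias-variance balance described above.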