One of the major challenges in **machine learning** is **avoiding overfitting**. We use **regularization** to avoid overfitting so that we get more accurate predictions.

**Regularization** applies a penalty to increasing the magnitude of parameter values in order to reduce overfitting. When you train a machine learning model, e.g., a logistic regression model, you choose the parameters that give you the best fit to the data. This means minimizing the error between the predicted values and the actual values.

If you have lots of parameters but only a small amount of data, the model may adapt its parameters to fit the training data perfectly, which causes overfitting.

To prevent this, to the quantity being minimized you also add a function that penalizes large parameter values. Most often that function is λΣθⱼ², which is λ times the sum of the squared parameter values θⱼ². The larger λ is, the less likely the parameters will be increased in magnitude simply to adjust for small perturbations in the data. In your case, rather than specifying λ, you specify C = 1/λ.
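To make the effect concrete, here is a minimal sketch (not scikit-learn's actual implementation) of L2-regularized logistic regression fit by gradient descent; `fit_logreg` is a hypothetical helper, and the data is synthetic. It shows that a larger λ shrinks the learned parameter values:

```python
import numpy as np

def fit_logreg(X, y, lam, lr=0.05, steps=2000):
    """Gradient descent on the L2-regularized logistic loss:
    mean log-loss + (lam/2) * sum(w_j^2).  (Illustrative sketch only.)"""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        grad = X.T @ (p - y) / len(y) + lam * w   # data gradient + penalty gradient
        w -= lr * grad
    return w

# Synthetic data: labels depend only on the first two features
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

w_weak = fit_logreg(X, y, lam=0.01)    # weak penalty (large C = 1/λ)
w_strong = fit_logreg(X, y, lam=10.0)  # strong penalty (small C)
print(np.linalg.norm(w_weak), np.linalg.norm(w_strong))
```

With the stronger penalty the weight vector's norm comes out much smaller, which is exactly the shrinkage that makes the model less sensitive to noise in the data. In scikit-learn's `LogisticRegression`, you would tune this through the `C` parameter instead: small `C` means strong regularization.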

Hope this answer helps.