Classification in Machine Learning

Supervised learning techniques can be broadly divided into regression and classification algorithms. In this session, we will focus on classification in Machine Learning. Let’s go through the example below to understand classification better.
Let’s say you live in a gated housing society, and your society has separate dustbins for different types of waste: one for paper waste, one for plastic waste, and so on. What you are essentially doing here is classifying the waste into different categories. So, classification is the process of assigning a ‘class label’ to a particular item. In this example, we are assigning the labels ‘paper’, ‘metal’, ‘plastic’, and so on to different types of waste.
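To make the idea of assigning class labels concrete, here is a minimal sketch in Python using scikit-learn. The two numeric features (weight in grams and a rigidity score) and the tiny waste dataset are made up purely for illustration:

```python
# Minimal illustration of classification: assigning a class label to an item.
# The two features (weight in grams, rigidity score) and this tiny dataset
# are hypothetical; they exist only to make the idea of class labels concrete.
from sklearn.neighbors import KNeighborsClassifier

X = [[5, 1], [200, 8], [10, 6], [300, 9], [4, 2], [15, 7]]      # [weight_g, rigidity]
y = ["paper", "metal", "plastic", "metal", "paper", "plastic"]  # class labels

clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict([[8, 6]]))  # prints the assigned class label, e.g. ['plastic']
```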



Classification Algorithms in Machine Learning

Now that we know what classification is, let us go through some of the most commonly used classification algorithms in Machine Learning:

  • Logistic Regression
  • Decision Tree
  • Random Forest
  • Naive Bayes

Logistic Regression

Logistic regression is a binary classification algorithm that outputs the probability of an observation belonging to one of two classes, for example, true or false.
Let’s take this example to understand logistic regression:
Here, we have two independent variables, ‘Temperature’ and ‘Humidity’, while the dependent variable is ‘Rain’. We are trying to determine the probability of rain based on different values of ‘Temperature’ and ‘Humidity’.
Logistic regression estimates the logit function, and the logit function is simply the log of the odds in favor of the event: logit(p) = log(p / (1 − p)).
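Here is a minimal sketch of logistic regression for the rain example using scikit-learn. The temperature and humidity readings and the rain labels below are made up only for illustration:

```python
# A minimal sketch of logistic regression for the rain example.
# The temperature/humidity readings and rain labels are made up for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Features: [temperature (deg C), humidity (%)]; target: rain (1) or no rain (0)
X = np.array([[30, 40], [25, 85], [28, 90], [35, 30], [22, 95], [33, 45]])
y = np.array([0, 1, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Estimated probability of rain for a new day with temperature 26 and humidity 80
print(model.predict_proba([[26, 80]])[0, 1])
```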


Decision Tree

A decision tree, as the name suggests, is a tree-based classifier in Machine Learning. You can think of it as an upside-down tree, where each node splits into its children based on a test condition. Let’s take this example to understand the concept of decision trees:
Here, we are building a decision tree to find out whether a person is fit or not. Based on a series of test conditions, we eventually arrive at a leaf node and classify the person as fit or unfit.
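Here is a minimal sketch of a decision tree classifier in scikit-learn for a fit-or-unfit style problem. The features (age, hours of exercise per week, fast-food meals per week) and the labels are hypothetical, chosen only to illustrate how test conditions lead to leaf nodes:

```python
# A minimal sketch of a decision tree classifier for the "fit or unfit" example.
# The features (age, exercise hours per week, fast-food meals per week) and the
# labels are hypothetical and chosen only for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[25, 5, 1], [45, 0, 6], [30, 3, 2], [50, 1, 5], [22, 6, 0], [40, 0, 7]]
y = ["fit", "unfit", "fit", "unfit", "fit", "unfit"]

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# Print the learned test conditions, from the root node down to the leaf nodes
print(export_text(tree, feature_names=["age", "exercise_hrs", "fastfood_meals"]))
print(tree.predict([[35, 4, 1]]))  # likely ['fit'] for this made-up profile
```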

Random Forest

Random Forest is an ensemble technique, which is basically a collection of multiple decision trees.
Here, we generate multiple random subsets of our original dataset and build a decision tree on each of these subsets. If we generate ‘x’ subsets, then our random forest algorithm will have results from ‘x’ decision trees. The final prediction is decided by a majority vote over all these results.
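As a minimal sketch, scikit-learn’s RandomForestClassifier builds such an ensemble for us. The data below is synthetic and used only for illustration; n_estimators plays the role of ‘x’, the number of decision trees:

```python
# A minimal sketch of a random forest: an ensemble of decision trees, each
# trained on a bootstrap subset of the data, with predictions combined by
# majority vote. The dataset is synthetic and used only for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=200, n_features=5, random_state=42)

# n_estimators is the number of decision trees in the ensemble
forest = RandomForestClassifier(n_estimators=10, random_state=42).fit(X, y)

# Each individual tree casts a vote; the forest reports the majority-vote class
print([int(t.predict(X[:1])[0]) for t in forest.estimators_])
print(forest.predict(X[:1]))
```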

Naive Bayes

Naive Bayes is a probabilistic classifier in Machine Learning that is built on the principle of Bayes’ theorem. The Naive Bayes classifier assumes that each feature is independent of every other feature within a class, which is why it is known as ‘naive’.
Bayes’ theorem is stated as:
P(A|B) = P(B|A) × P(A) / P(B)
where P(A|B) is the posterior probability of class A given the observed data B, P(B|A) is the likelihood, P(A) is the prior probability of the class, and P(B) is the probability of the evidence.
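Here is a minimal sketch of a Naive Bayes classifier using scikit-learn’s GaussianNB. The dataset is synthetic and used only for illustration; predict_proba returns the posterior probability of each class given the features:

```python
# A minimal sketch of a Gaussian Naive Bayes classifier, which applies Bayes'
# theorem under the "naive" assumption that features are independent given the
# class. The dataset is synthetic and used only for illustration.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=300, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

nb = GaussianNB().fit(X_train, y_train)

# Posterior class probabilities P(class | features) for the first test sample
print(nb.predict_proba(X_test[:1]))
print("accuracy:", nb.score(X_test, y_test))
```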
So, these are some of the most commonly used algorithms for classification in Machine Learning.

