Classification in Machine Learning
Supervised learning techniques can be broadly divided into regression and classification algorithms. In this session, we will focus on classification in Machine Learning. Let's go through the example below to understand classification better.
Let’s say you live in a gated housing society, and your society has separate dustbins for different types of waste: one for paper waste, one for plastic waste, one for metal waste, and so on. What you are doing here is classifying the waste into different categories. So, classification is the process of assigning a ‘class label’ to a particular item. In the above example, we are assigning the labels ‘paper’, ‘plastic’, ‘metal’, and so on to different types of waste.
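To make the idea concrete, here is a minimal sketch of what "assigning a class label" looks like in code. The waste items, their two numeric features, and the labels are all made up purely for illustration, and the k-nearest neighbors classifier is just one possible choice of model.

```python
from sklearn.neighbors import KNeighborsClassifier

# Each item is described by two made-up numeric features
X = [[5, 1], [8, 1], [20, 0], [15, 0], [6, 1], [25, 0]]
# The class label assigned to each item
y = ["paper", "paper", "plastic", "plastic", "paper", "plastic"]

classifier = KNeighborsClassifier(n_neighbors=3)
classifier.fit(X, y)

# The trained classifier assigns a class label to a new, unseen item
print(classifier.predict([[7, 1]]))  # e.g. ['paper']
```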
Classification Algorithms in Machine Learning
Now that we know what classification is, let's go through some of the most commonly used classification algorithms in Machine Learning:
- Logistic Regression
- Decision Tree
- Random Forest
- Naive Bayes
Logistic Regression
Logistic regression is a binary classification algorithm that outputs the probability of an instance belonging to one of two classes, i.e., the probability of something being true or false.
Let’s take this example to understand logistic regression: we have two independent variables, ‘Temperature’ and ‘Humidity’, while the dependent variable is ‘Rain’. We are trying to determine the probability of rain based on different values of ‘Temperature’ and ‘Humidity’.
Logistic regression estimates the logit function, where the logit is simply the log of the odds in favor of the event: logit(p) = log(p / (1 − p)). The model fits this log of odds as a linear combination of the independent variables and then converts it back into a probability between 0 and 1.
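Below is a minimal sketch of logistic regression using scikit-learn. The Temperature, Humidity, and Rain values are made up for illustration only; any real dataset with a binary outcome would work the same way.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [temperature (°C), humidity (%)] -- hypothetical values
X = np.array([
    [30, 40], [32, 35], [25, 80], [22, 90],
    [28, 60], [20, 95], [35, 30], [24, 85],
])
# 1 = it rained, 0 = it did not rain
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])

model = LogisticRegression()
model.fit(X, y)

# Probability of rain for a new day: 23 °C and 88% humidity
new_day = np.array([[23, 88]])
print(model.predict_proba(new_day))  # [[P(no rain), P(rain)]]
print(model.predict(new_day))        # predicted class label
```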
Decision Tree
A decision tree, as the name suggests, is a tree-based classifier in Machine Learning. You can think of it as an upside-down tree, where each node splits into its children based on a condition. Let’s take this example to understand the concept of decision trees:
Here, we are building a decision tree to find out whether a person is fit or not. Based on a series of test conditions, we finally arrive at the leaf nodes and classify the person as fit or unfit.
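Here is a minimal sketch of such a tree with scikit-learn. The features (age, hours of exercise per week, eats junk food) and the fit/unfit labels are hypothetical, chosen only to mirror the example above.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row: [age, exercise_hours_per_week, eats_junk_food (1 = yes, 0 = no)]
X = np.array([
    [25, 5, 0], [40, 0, 1], [30, 3, 0], [55, 1, 1],
    [22, 6, 1], [48, 0, 0], [35, 4, 0], [60, 2, 1],
])
# 1 = fit, 0 = unfit
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X, y)

# Print the learned test conditions (the "upside-down tree")
print(export_text(tree, feature_names=["age", "exercise_hours", "junk_food"]))

# Classify a new person as fit (1) or unfit (0)
print(tree.predict([[33, 2, 1]]))
```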
Random Forest
Random Forest is an ensemble technique, which is a collection of multiple decision trees.
Here, we generate multiple subsets of our original dataset and build a decision tree on each of these subsets. If we generate ‘x’ subsets, then our random forest algorithm will have results from ‘x’ decision trees. The final prediction is decided by a majority vote over all these results.
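A minimal sketch with scikit-learn is shown below. Here, `n_estimators` plays the role of ‘x’: the number of decision trees, each trained on a bootstrap sample of the data. The data reuses the hypothetical fit/unfit example from the decision tree section.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical [age, exercise_hours_per_week, eats_junk_food] data
X = np.array([
    [25, 5, 0], [40, 0, 1], [30, 3, 0], [55, 1, 1],
    [22, 6, 1], [48, 0, 0], [35, 4, 0], [60, 2, 1],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = fit, 0 = unfit

# 100 decision trees, each built on a random bootstrap subset of the data
forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X, y)

# The prediction is the majority vote across the 100 trees
print(forest.predict([[33, 2, 1]]))
print(forest.predict_proba([[33, 2, 1]]))  # fraction of trees voting for each class
```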
Naive Bayes
Naive Bayes is a probabilistic classifier in Machine Learning that is built on the principle of Bayes’ theorem. The Naive Bayes classifier assumes that the features are independent of one another given the class, and that is why it is known as ‘naive’.
Bayes’ theorem is given by: P(A|B) = P(B|A) × P(A) / P(B), where P(A|B) is the probability of class A given the observed evidence B.
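Below is a minimal sketch of a Gaussian Naive Bayes classifier with scikit-learn, reusing the hypothetical Temperature/Humidity/Rain data from the logistic regression example.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hypothetical [temperature (°C), humidity (%)] data
X = np.array([
    [30, 40], [32, 35], [25, 80], [22, 90],
    [28, 60], [20, 95], [35, 30], [24, 85],
])
y = np.array([0, 0, 1, 1, 0, 1, 0, 1])  # 1 = rain, 0 = no rain

nb = GaussianNB()
nb.fit(X, y)

# Internally, the model applies Bayes' theorem with the "naive" assumption
# that Temperature and Humidity are independent given the class.
print(nb.predict_proba([[23, 88]]))  # [[P(no rain), P(rain)]]
print(nb.predict([[23, 88]]))
```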
So, these are some of the most commonly used algorithms for classification in Machine Learning.