
Can anyone explain the Naive Bayes algorithm in Machine Learning?

1 Answer


Naïve Bayes is a supervised learning algorithm for classification tasks, based on applying Bayes' theorem with the strong ("naive") assumption that all predictors are independent of each other given the class. In short, the assumption states that the presence of a feature in a class is independent of any other feature in the same class, so the likelihood of the full feature set factorizes into a product of per-feature probabilities. For example, a phone will be categorized as a smartphone if it has a touchscreen, internet connectivity, a good camera, etc. Even though these features may depend on each other in reality, each of them contributes independently to the probability that the phone is a smartphone.

In Bayesian classification, the main quantity of interest is the posterior probability, defined as the probability of the class given the observed features, P(Class | features):

P(Class | features) = P(Class) × P(features | Class) / P(features)

Here, P(Class | features) is the posterior probability of the class given the features.

P(Class) is the prior probability of the class.

P(features | Class) is the likelihood, i.e., the probability of the predictors given the class.

P(features) is the prior probability of the predictors (the evidence).
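To make the formula concrete, here is a minimal sketch of the calculation with made-up numbers: two classes (spam / not spam) and two binary features. All the priors and likelihoods below are assumed values, chosen only to illustrate how the naive factorization and the normalization by P(features) work.

```python
# Toy Naive Bayes posterior calculation with assumed (made-up) probabilities.
priors = {"spam": 0.4, "not_spam": 0.6}          # P(Class)

# Per-feature likelihoods P(feature present | Class), assumed values
likelihoods = {
    "spam":     {"contains_offer": 0.7, "contains_link": 0.8},
    "not_spam": {"contains_offer": 0.1, "contains_link": 0.3},
}

observed = ["contains_offer", "contains_link"]   # both features present

# Unnormalized posterior: P(Class) * product of P(feature | Class)
scores = {}
for cls, prior in priors.items():
    score = prior
    for feature in observed:
        score *= likelihoods[cls][feature]       # naive independence assumption
    scores[cls] = score

# P(features) is the same for every class, so dividing by the sum of the
# unnormalized scores normalizes them into posterior probabilities.
evidence = sum(scores.values())
posteriors = {cls: score / evidence for cls, score in scores.items()}

print(posteriors)   # {'spam': ~0.926, 'not_spam': ~0.074}
```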

Naïve Bayes then returns, for each observation, the probability of it belonging to each class, and the class with the highest posterior probability is taken as the prediction.
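In practice you rarely code this by hand. Below is a minimal sketch using scikit-learn's GaussianNB on the Iris dataset; the original answer does not mention any particular library, so this is just one common way to do it, assuming scikit-learn is installed.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42
)

model = GaussianNB()          # Gaussian likelihoods for continuous features
model.fit(X_train, y_train)

# predict_proba returns, for each observation, the posterior probability of
# belonging to each class; predict picks the class with the highest posterior.
print(model.predict_proba(X_test[:3]))
print(model.predict(X_test[:3]))
print("Accuracy:", model.score(X_test, y_test))
```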


