**What Is a Confusion Matrix?**

A confusion matrix is one of the simplest and most intuitive tools for evaluating the accuracy of a classification model, where the output can be of two or more categories. It is among the most popular methods used to evaluate classifiers such as logistic regression.


**Here’s a list of all topics covered in this blog:**

- What is a Confusion Matrix?
- Understanding various performance metrics
- Implementing Confusion Matrix in Python Sklearn – Breast Cancer

Without much delay, let’s get started.

**Confusion Matrix Terminology**

A confusion matrix helps us describe the performance of a classification model. To build one, all we need to do is create a table of actual values versus predicted values.

The confusion matrix itself is quite simple, but the related terminology can be a bit confusing. Let us understand it with the help of an example.

Say we have a data set containing the records of all patients in a hospital, and we build a logistic regression model to predict whether or not a patient has cancer. There are four possible outcomes; let us look at each of them.

**True Positive**

True positive is the case where the actual value as well as the predicted value are true: the patient has been diagnosed with cancer, and the model also predicted that the patient had cancer.

**False Negative**

In a false negative, the actual value is true, but the predicted value is false: the patient has cancer, but the model predicted that the patient did not have cancer. This is also known as a **Type 2 Error**.

**False Positive**

This is the case where the predicted value is true, but the actual value is false. Here, the model predicted that the patient had cancer, but in reality, the patient doesn’t have cancer. This is also known as **Type 1 Error**.

**True Negative**

This is the case where the actual value is false and the predicted value is also false. In other words, the patient is not diagnosed with cancer and our model predicted that the patient did not have cancer.

**Understanding Various Performance Metrics**

We will take the help of a confusion matrix, laid out as below, in order to find the various performance metrics:

|  | Predicted: Positive | Predicted: Negative |
|---|---|---|
| **Actual: Positive** | True Positive (TP) | False Negative (FN) |
| **Actual: Negative** | False Positive (FP) | True Negative (TN) |

Alright, let us start with accuracy:

**Accuracy or Classification Accuracy:**

**What:** In classification problems, ‘accuracy’ refers to the proportion of correct predictions made by the model out of all predictions made.

**How:** Accuracy = (TP + TN) / (TP + TN + FP + FN)

**When to use:** When the target variable classes in the data are nearly balanced.

**When not to use:** When the target variable classes are heavily imbalanced, i.e., the data is mostly of one class.

**Precision**

**What:** Here, ‘precision’ means what proportion of all the positive predictions made by our model are actually true, i.e., of all the patients predicted to have cancer, how many really had it.

**How:** Precision = TP / (TP + FP)

- It means that when our model predicts that a patient has cancer, it is correct 76 percent of the time.

**Recall or Sensitivity:**

**What:** ‘Recall’ is the measure that tells what proportion of the patients that actually had cancer were also predicted as having cancer. It answers the question, “How sensitive is the classifier in detecting positive instances?”

**How:** Recall = TP / (TP + FN)

- It means that 80 percent of all cancer patients are correctly predicted by the model to have cancer.

**Specificity:**

**What:** It answers the question, “How specific or selective is the classifier in predicting negative instances?”, i.e., what proportion of the patients that did not have cancer were also predicted as not having it.

**How:** Specificity = TN / (TN + FP)

- A specificity of 0.61 means 61 percent of all patients that didn’t have cancer are predicted correctly.

**F1 Score**

**What:** The F1 score is the harmonic mean of precision and recall.

**How:** F1 Score = 2 × (Precision × Recall) / (Precision + Recall)

- When the F1 score is high, both the precision and the recall of the classifier are high, indicating good results.
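To make the formulas above concrete, here is a minimal Python sketch that computes all five metrics from the four cell counts of a confusion matrix. The counts are illustrative, chosen only so that recall and specificity line up with the 80 percent and 61 percent figures quoted above:

```python
# Illustrative cell counts, not from a real model: chosen so that
# recall = 0.80 and specificity = 0.61, matching the examples above.
tp, fn = 80, 20   # actual positives, split into correct / missed predictions
fp, tn = 39, 61   # actual negatives, split into wrong / correct predictions

accuracy    = (tp + tn) / (tp + tn + fp + fn)   # correct over all predictions
precision   = tp / (tp + fp)                    # predicted positives that were right
recall      = tp / (tp + fn)                    # actual positives that were found
specificity = tn / (tn + fp)                    # actual negatives that were found
f1          = 2 * precision * recall / (precision + recall)

print(f"recall={recall:.2f}, specificity={specificity:.2f}")  # recall=0.80, specificity=0.61
```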

**Implementing Confusion Matrix in Python Sklearn – Breast Cancer**

**Dataset:** In this Confusion Matrix in Python example, the data set that we will be using is a subset of the famous **Breast Cancer Wisconsin (Diagnostic)** data set. Some of the key points about this data set are mentioned below:

- Four real-valued measures of each cancer cell nucleus are taken into consideration here:
  - **radius_mean** represents the mean radius of the cell nucleus
  - **texture_mean** represents the mean texture of the cell nucleus
  - **perimeter_mean** represents the mean perimeter of the cell nucleus
  - **area_mean** represents the mean area of the cell nucleus
- Based on these measures, the diagnosed result is divided into two categories: malignant and benign.
  - The **diagnosis** column consists of two categories: malignant (M) and benign (B).

**Take a look at the dataset:**

**Step 1: Load the data set**
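The code for this step is not shown in the text, so here is a hedged sketch. The original presumably reads a CSV export (filename unknown); as a self-contained stand-in, the same four-column subset can be rebuilt from scikit-learn's bundled copy of the Breast Cancer Wisconsin (Diagnostic) data set:

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer

# If you have the tutorial's CSV, loading would be a one-liner, e.g.:
# df = pd.read_csv("breast_cancer.csv")  # hypothetical filename

# Stand-in: rebuild the four-feature subset from sklearn's bundled copy.
# The first four columns of this data set are the mean radius, texture,
# perimeter, and area of each cell nucleus.
raw = load_breast_cancer()
df = pd.DataFrame(raw.data[:, :4],
                  columns=["radius_mean", "texture_mean",
                           "perimeter_mean", "area_mean"])

# sklearn encodes the target as 0 = malignant, 1 = benign; map it back
# to the 'M'/'B' labels used in the article's diagnosis column.
df["diagnosis"] = pd.Series(raw.target).map({0: "M", 1: "B"})
```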

**Step 2: Take a glance at the data set**

**Step 3: Take a look at the shape of the data set**
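Steps 2 and 3 amount to two one-liners on the data frame `df` from Step 1; the sketch below rebuilds it from sklearn's bundled copy of the data set so it runs on its own:

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer

# Rebuild the four-feature subset plus the diagnosis label (as in Step 1).
raw = load_breast_cancer()
df = pd.DataFrame(raw.data[:, :4],
                  columns=["radius_mean", "texture_mean",
                           "perimeter_mean", "area_mean"])
df["diagnosis"] = pd.Series(raw.target).map({0: "M", 1: "B"})

print(df.head())   # Step 2: first five rows of the data set
print(df.shape)    # Step 3: (569, 5) for this rebuilt subset
```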

**Step 4: Split the data into features (X) and target (y) label sets**

**Take a look at the feature set:**

**Take a look at the target set:**
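A sketch of the feature/target split, again using the rebuilt subset: the four measurement columns become the features (X) and the diagnosis column becomes the target (y):

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer

# Rebuild the four-feature subset plus the diagnosis label (as in Step 1).
raw = load_breast_cancer()
df = pd.DataFrame(raw.data[:, :4],
                  columns=["radius_mean", "texture_mean",
                           "perimeter_mean", "area_mean"])
df["diagnosis"] = pd.Series(raw.target).map({0: "M", 1: "B"})

# Features (X): the four real-valued measurements.
X = df[["radius_mean", "texture_mean", "perimeter_mean", "area_mean"]]
# Target (y): the diagnosis label, 'M' or 'B'.
y = df["diagnosis"]

print(X.head())  # feature set
print(y.head())  # target set
```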

**Step 5: Split the data into training and test sets using scikit-learn**
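A sketch of the train/test split with scikit-learn's `train_test_split`; `test_size` and `random_state` below are illustrative choices, since the original values are not shown:

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Rebuild X and y (as in Steps 1-4).
raw = load_breast_cancer()
X = pd.DataFrame(raw.data[:, :4],
                 columns=["radius_mean", "texture_mean",
                          "perimeter_mean", "area_mean"])
y = pd.Series(raw.target).map({0: "M", 1: "B"})

# test_size and random_state are illustrative; the article's values are unknown.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
```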

**Step 6: Create and train the model**
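A sketch of creating and training a logistic regression model on the training split (the data rebuild and split settings are illustrative, repeated here so the snippet runs on its own):

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Rebuild X and y, and make an illustrative train/test split (Steps 1-5).
raw = load_breast_cancer()
X = pd.DataFrame(raw.data[:, :4],
                 columns=["radius_mean", "texture_mean",
                          "perimeter_mean", "area_mean"])
y = pd.Series(raw.target).map({0: "M", 1: "B"})
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# max_iter is raised so the default lbfgs solver converges on these
# unscaled features (area_mean is in the hundreds).
model = LogisticRegression(max_iter=10000)
model.fit(X_train, y_train)
```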

**Step 7: Predict the test set results**
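Prediction is a single call on the fitted model; the sketch below repeats the earlier (illustrative) steps so it runs on its own:

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

# Rebuild X and y, split, and fit (Steps 1-6, with illustrative settings).
raw = load_breast_cancer()
X = pd.DataFrame(raw.data[:, :4],
                 columns=["radius_mean", "texture_mean",
                          "perimeter_mean", "area_mean"])
y = pd.Series(raw.target).map({0: "M", 1: "B"})
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=10000).fit(X_train, y_train)

# Predict the diagnosis ('M' or 'B') for every row in the test set.
y_pred = model.predict(X_test)
```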

**Step 8: Evaluate the model with a confusion matrix using sklearn**
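`sklearn.metrics.confusion_matrix` builds the matrix from the actual and predicted labels. A self-contained sketch follows; note that the cell counts in the article's note come from its own run, so a different split will give different numbers:

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import confusion_matrix

# Rebuild the data, split, fit, and predict (Steps 1-7, illustrative settings).
raw = load_breast_cancer()
X = pd.DataFrame(raw.data[:, :4],
                 columns=["radius_mean", "texture_mean",
                          "perimeter_mean", "area_mean"])
y = pd.Series(raw.target).map({0: "M", 1: "B"})
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=10000).fit(X_train, y_train)
y_pred = model.predict(X_test)

# With labels=["M", "B"] and malignant taken as the positive class,
# the layout is: [[TP, FN],
#                 [FP, TN]]  (rows = actual, columns = predicted)
cm = confusion_matrix(y_test, y_pred, labels=["M", "B"])
print(cm)
```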

**Note: **Here,

- True positive is 10.
- True negative is 7.
- False positive is 1.
- False negative is 2.

**Step 9: Evaluate the model using other performance metrics**
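`sklearn.metrics` also provides the individual scores discussed earlier. A sketch, again with malignant ('M') treated as the positive class via `pos_label`:

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Rebuild the data, split, fit, and predict (Steps 1-7, illustrative settings).
raw = load_breast_cancer()
X = pd.DataFrame(raw.data[:, :4],
                 columns=["radius_mean", "texture_mean",
                          "perimeter_mean", "area_mean"])
y = pd.Series(raw.target).map({0: "M", 1: "B"})
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = LogisticRegression(max_iter=10000).fit(X_train, y_train)
y_pred = model.predict(X_test)

# pos_label picks which class counts as "positive" (malignant here).
acc  = accuracy_score(y_test, y_pred)
prec = precision_score(y_test, y_pred, pos_label="M")
rec  = recall_score(y_test, y_pred, pos_label="M")
f1   = f1_score(y_test, y_pred, pos_label="M")
print(f"accuracy={acc:.3f}  precision={prec:.3f}  recall={rec:.3f}  f1={f1:.3f}")
```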

**Note:** A confusion matrix gives you a complete picture of how the classifier is performing. It also allows you to compute various classification metrics, and these metrics can guide your model selection.

**What Did We Learn So Far? **

In this tutorial, we discussed the use of the confusion matrix in Machine Learning and its related terminology. We talked about different performance metrics such as accuracy, precision, recall, and F1 score. At the end, we implemented a confusion matrix example using sklearn. In the next module, we will improve the precision and the accuracy with the help of the **ROC curve** and threshold adjustment. See you there.
