Machine Learning Interview Questions – Q5 – Define precision and recall

Machine Learning Interview Questions is a series I will post on periodically.  The idea was inspired by the post 41 Essential Machine Learning Interview Questions at Springboard.  I will take each question posted there and provide an answer in my own words.  Whether that expands upon their solution or simply offers another perspective on how to phrase the answer, I hope you will come away with a better understanding of the topic at hand.

To see other posts in this series visit the Machine Learning Interview Questions category.

Q5 – Define precision and recall.

I will define precision and recall in the context of binary classification.  As in Q4, the terms positive and negative represent the two classes.


Recall

Recall is also known as sensitivity or the true positive rate.  We briefly touched on this concept in Q4 – Explain how a ROC curve works, where the ROC curve plots recall (the true positive rate) against the false positive rate.

Recall is the number of positives your model correctly predicts compared to the actual number of positives in your data.

Recall = True Positives / (True Positives + False Negatives)

In the context of binary classification, false negatives are data points that our model predicted to be negative but that in reality belong to the positive class.  In other words, recall is the number of true positives we predicted divided by the total number of elements that are actually positive.

Recall = Sum(True Positives) / Sum(All data that is positive in reality)

Recall is a measure of completeness.  High recall means that our model classified most or all of the possible positive elements as positive.  A recall score of 1.0 means that every item from that class was labeled as belonging to that class.  However, the recall score alone does not tell you how many other items were incorrectly labeled as positive (i.e., did your model simply call everything positive?).
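
To make this concrete, here is a minimal sketch in Python (the labels are toy values invented for this example) that computes recall directly from the counts:

# Toy labels invented for illustration: 1 = positive, 0 = negative
y_true = [1, 1, 1, 0, 0, 1, 0, 1]   # what the data actually is
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]   # what the model predicted

# True positives: predicted positive and actually positive
true_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
# False negatives: predicted negative but actually positive
false_negatives = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

recall = true_positives / (true_positives + false_negatives)
print(recall)  # 3 / (3 + 2) = 0.6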

Precision

Precision is also called the positive predictive value.  It is the number of correct positives your model predicts compared to the total number of positives it predicts.

Precision = True Positives / (True Positives + False Positives)

In the context of binary classification, false positives are data points our model predicted to be positive but that in reality are negative, and therefore something our model predicted incorrectly.  In other words, precision is the number of positive elements predicted correctly divided by the total number of elements predicted to be positive.

Precision = Sum(True Positives) / Sum(All elements predicted to be positive)

Precision is a measure of exactness or quality.  High precision means that most or all of the positive results you predicted are correct.  A precision score of 1.0 means that every item labeled positive does indeed belong to the positive class.  A precision score by itself, though, does not say anything about how many items of that class were missed (i.e., our model may have correctly labeled only a few points while failing to classify many other positive points as positive).
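
Continuing the same toy example from the recall sketch above, precision only swaps the denominator term from false negatives to false positives:

# False positives: predicted positive but actually negative
false_positives = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)

precision = true_positives / (true_positives + false_positives)
print(precision)  # 3 / (3 + 1) = 0.75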


F-Measure

Precision and recall are often used together because they complement each other in describing the effectiveness of a model.  The F-measure combines the two into a single score; the common F1 variant shown below is the harmonic mean of precision and recall.

F-Measure = 2 * (Precision * Recall) / (Precision + Recall)
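
As a sanity check, the same numbers fall out of scikit-learn's built-in metrics (this assumes scikit-learn is installed and reuses the toy labels from above):

from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 1, 1, 0, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]

print(precision_score(y_true, y_pred))  # 0.75
print(recall_score(y_true, y_pred))     # 0.6
print(f1_score(y_true, y_pred))         # 2 * (0.75 * 0.6) / (0.75 + 0.6) ≈ 0.667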


Summary

Recall is how complete our classification is.  It is how many of the positive data points in our data set we correctly predicted as positive.

Precision is how pure or precise our predictions are.  It is how many of the elements we predicted to be positive were actually positive.