MCQ IN COMPUTER SCIENCE & ENGINEERING

MACHINE LEARNING

Question
A model evaluation metric is calculated as True Positives / (True Positives + False Negatives). What is the name of this metric?
A
Accuracy
B
Precision
C
Recall
D
F1 Score
Correct answer: C (Recall)

Explanation:

Detailed explanation-1: -Recall is a metric that quantifies the number of correct positive predictions made out of all the positive predictions that could have been made, i.e., out of all actual positives. Unlike precision, which only comments on the correct positive predictions among all positive predictions, recall indicates how many positive instances were missed.
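A minimal sketch of the two formulas in Python, using hypothetical confusion-matrix counts (tp, fp, fn are illustrative values, not taken from the question):

    # Recall vs. precision from illustrative confusion-matrix counts.
    tp, fp, fn = 80, 10, 20  # hypothetical counts

    recall = tp / (tp + fn)     # correct positives out of all actual positives
    precision = tp / (tp + fp)  # correct positives out of all predicted positives

    print(f"Recall:    {recall:.2f}")     # 0.80 -> 20% of actual positives missed
    print(f"Precision: {precision:.2f}")  # 0.89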

Detailed explanation-2: -False Positive: an instance for which the predicted value is positive but the actual value is negative. False Negative: an instance for which the predicted value is negative but the actual value is positive.
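To make these definitions concrete, here is a small sketch that counts false positives and false negatives from paired labels (the actual/predicted lists are made up for illustration):

    # Counting confusion-matrix terms from toy actual/predicted labels.
    actual    = [1, 0, 1, 1, 0, 0, 1, 0]
    predicted = [1, 1, 0, 1, 0, 0, 1, 0]

    fp = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # predicted positive, actually negative
    fn = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # predicted negative, actually positive
    print(fp, fn)  # 1 1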

Detailed explanation-3: -Recall and True Positive Rate (TPR) are exactly the same, so the difference lies between precision and the false positive rate. The main difference between these two metrics is that the precision denominator contains the false positives, while the false positive rate denominator contains the true negatives.
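A short sketch contrasting those denominators, again with hypothetical counts:

    # TPR (= recall), FPR, and precision from illustrative counts.
    tp, fp, tn, fn = 80, 10, 90, 20  # hypothetical confusion-matrix counts

    tpr = tp / (tp + fn)        # true positive rate, identical to recall
    fpr = fp / (fp + tn)        # denominator contains the true negatives
    precision = tp / (tp + fp)  # denominator contains the false positives
    print(f"TPR={tpr:.2f}  FPR={fpr:.2f}  Precision={precision:.2f}")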

Detailed explanation-4: -Metrics like accuracy, precision, and recall are good ways to evaluate classification models on balanced datasets, but if the data is imbalanced, methods such as ROC/AUC give a better picture of model performance.
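As a hedged illustration of why accuracy can mislead on imbalanced data, the toy dataset below is 95% negative, and the all-negative "model" and its scores are made up for this sketch (assumes scikit-learn is installed):

    # Accuracy vs. ROC AUC on an imbalanced toy dataset (scikit-learn).
    from sklearn.metrics import accuracy_score, roc_auc_score

    y_true  = [0] * 95 + [1] * 5                        # 95% negatives
    y_pred  = [0] * 100                                 # degenerate all-negative classifier
    y_score = [0.1] * 95 + [0.4, 0.05, 0.2, 0.6, 0.7]   # hypothetical model scores

    print(accuracy_score(y_true, y_pred))   # 0.95, despite missing every positive
    print(roc_auc_score(y_true, y_score))   # 0.8: how well scores rank positives above negatives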
