MCQ IN COMPUTER SCIENCE & ENGINEERING


MACHINE LEARNING

Question
Which metric measures the percentage of the actual positive class that is correctly predicted? It is also known as the capture rate.
A
Recall
B
Specificity
C
F1 Score
D
None of the above
Answer: (A) Recall

Explanation:

Detailed explanation-1: Sensitivity, also known as the true positive rate (TPR), is the same as recall. Hence, it measures the proportion of the actual positive class that is correctly predicted as positive. Specificity is analogous to sensitivity but focuses on the negative class.

Detailed explanation-2: Precision quantifies how many of the positive class predictions actually belong to the positive class. Recall quantifies how many of all positive examples in the dataset were predicted as positive. F-Measure provides a single score that balances the concerns of both precision and recall in one number.

Detailed explanation-3: Recall and the true positive rate (TPR) are exactly the same, so the difference lies between precision and the false positive rate. The main difference between these two metrics is that the precision denominator contains the false positives, while the false positive rate denominator contains the true negatives.

Detailed explanation-4: Recall is the ability of a classification model to identify all data points in a relevant class. Precision is the ability of a classification model to return only the data points in a class. The F1 score is a single metric that combines recall and precision using the harmonic mean.

Detailed explanation-5: Accuracy tells you how often the ML model was correct overall. Precision tells you how good the model is at predicting a specific category. Recall tells you how often the model was able to detect a specific category.
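The metrics described in the explanations above can be sketched in plain Python. This is an illustrative sketch with hypothetical helper names, assuming binary labels where 1 marks the positive class:

```python
def confusion_counts(y_true, y_pred):
    # Count true positives, false positives, false negatives, true negatives
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    return tp, fp, fn, tn

def recall(tp, fn):
    # Recall (sensitivity, TPR): share of actual positives correctly predicted
    return tp / (tp + fn)

def precision(tp, fp):
    # Precision: share of positive predictions that are actually positive
    return tp / (tp + fp)

def specificity(tn, fp):
    # Specificity (TNR): share of actual negatives correctly predicted
    return tn / (tn + fp)

def f1_score(p, r):
    # F1: harmonic mean of precision and recall
    return 2 * p * r / (p + r)

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 1, 0, 0, 0]
tp, fp, fn, tn = confusion_counts(y_true, y_pred)
print(recall(tp, fn))       # 3 of 4 actual positives captured -> 0.75
print(precision(tp, fp))    # 3 of 4 predicted positives correct -> 0.75
print(specificity(tn, fp))  # 3 of 4 actual negatives correct -> 0.75
print(f1_score(precision(tp, fp), recall(tp, fn)))
```

Note that recall's denominator (TP + FN) covers all actual positives, while precision's denominator (TP + FP) covers all predicted positives, which is exactly the distinction drawn in explanation 3.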
