MCQ IN COMPUTER SCIENCE & ENGINEERING

MACHINE LEARNING

Question
If I am using all features of my dataset and I achieve 100% accuracy on my training set, but only 70% on the validation set, what should I look out for?
A. Overfitting
B. Underfitting
C. Best fitting
D. None of the above
Answer: A. Overfitting

Explanation:

Detailed explanation-1: If we are achieving 100% training accuracy very easily, we need to check whether we are overfitting our data.

Detailed explanation-2: Does this mean our model is 100% accurate and no one could do better? The answer is no: a high accuracy measured only on the training set is typically the result of overfitting.

Detailed explanation-3: As a general rule of thumb for interpreting accuracy scores: over 90% is very good, between 70% and 90% is good, and between 60% and 70% is acceptable.

Detailed explanation-4: 100% accuracy can be achieved (even on validation or unseen data) for some problem settings, but such cases are rare.

Detailed explanation-5: A statistical model that is complex enough (that has enough capacity) can fit any training dataset perfectly and obtain 100% accuracy on it. But by fitting the training set perfectly, it will perform poorly on new data not seen during training (overfitting). Hence, training accuracy alone is not what matters.
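
To make the train/validation gap concrete, the following minimal Python sketch (not part of the original question; the synthetic dataset, unconstrained decision tree, and the 0.15 gap threshold are illustrative assumptions) shows how such a gap is typically measured with scikit-learn.

    # Minimal sketch: detect an overfitting-style gap between training and
    # validation accuracy. Dataset and model choices here are assumptions.
    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    # Small, noisy synthetic dataset that a deep tree can memorize.
    X, y = make_classification(n_samples=300, n_features=20, n_informative=5,
                               flip_y=0.1, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3,
                                                      random_state=0)

    # An unconstrained tree (no depth limit) can fit the training set perfectly.
    model = DecisionTreeClassifier(random_state=0)
    model.fit(X_train, y_train)

    train_acc = accuracy_score(y_train, model.predict(X_train))
    val_acc = accuracy_score(y_val, model.predict(X_val))
    print(f"train accuracy: {train_acc:.2f}, validation accuracy: {val_acc:.2f}")

    # A large gap (e.g. 1.00 train vs. ~0.70 validation) is the overfitting
    # signature the question describes.
    if train_acc - val_acc > 0.15:
        print("Large train/validation gap -> likely overfitting.")

In practice, the fix is to reduce model capacity or add regularization (for example, limiting tree depth) and to confirm the gap shrinks on the validation set.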
