MCQ IN COMPUTER SCIENCE & ENGINEERING

MACHINE LEARNING

Question
The margin of a linear classifier is the width that the boundary could be increased by
A
before hitting a datapoint.
B
nearer to a data point
C
after a data point
D
middle of data points
Correct answer: A
Explanation:

Detailed explanation-1: -The maximal margin classifier is the separating hyperplane that is farthest from the training observations. We compute the perpendicular distance from the hyperplane to each training observation; the smallest such distance is called the margin. The boundary can therefore be widened by exactly this amount before it hits a data point.
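The margin described above can be computed directly: the perpendicular distance from a point x to the hyperplane w·x + b = 0 is |w·x + b| / ||w||, and the margin is the minimum over all training points. A minimal sketch, with w, b, and the data chosen purely for illustration (not fitted by any solver):

```python
import numpy as np

# Illustrative hyperplane w.x + b = 0 and 2-D data points (assumed values).
w = np.array([1.0, 1.0])
b = -3.0
X = np.array([[1.0, 1.0],
              [3.0, 3.0],
              [1.0, 3.0]])

# Perpendicular distance from each point to the hyperplane.
dists = np.abs(X @ w + b) / np.linalg.norm(w)

# The margin is the smallest such distance: the width the boundary
# could be increased by before hitting a data point.
margin = dists.min()
print(margin)  # 1/sqrt(2) ~= 0.7071
```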

Detailed explanation-2: -In the SVM algorithm, we look to maximize the margin between the data points and the hyperplane. The loss function that helps maximize the margin is the hinge loss, max(0, 1 − y·f(x)). The cost is 0 when the predicted score and the actual label have the same sign and the point lies beyond the margin (y·f(x) ≥ 1); otherwise the loss grows linearly with the margin violation.
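The hinge loss formula above can be sketched in a few lines of NumPy; the labels and scores here are made-up values for illustration:

```python
import numpy as np

def hinge_loss(y_true, scores):
    # Hinge loss: max(0, 1 - y * f(x)).
    # Zero only when the point is correctly classified with
    # functional margin at least 1; grows linearly otherwise.
    return np.maximum(0.0, 1.0 - y_true * scores)

y = np.array([1, -1, 1])       # true labels in {-1, +1}
f = np.array([2.0, -0.5, 0.3])  # classifier scores f(x)
print(hinge_loss(y, f))  # [0.  0.5 0.7]
```

Note that the second point is classified correctly (same sign) but still incurs a loss of 0.5, because it sits inside the margin.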

Detailed explanation-3: -The line segments from the nearest points to the hyperplane meet it at 90°, i.e., the distance is measured perpendicularly. These nearest points are referred to as "support vectors". The classifier chooses the hyperplane with the maximum margin, which is why it is known as the maximal-margin classifier.

Detailed explanation-4: -SVMs construct a maximum-margin separator: a decision boundary with the largest possible distance to the example points, which helps them generalize well. SVMs create a linear separating hyperplane, but they also have the ability to embed the data into a higher-dimensional space using the so-called kernel trick, which makes nonlinear decision boundaries possible in the original space.
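The kernel trick mentioned above works because a kernel computes the inner product in the embedded space without ever constructing the embedding. A minimal sketch for the degree-2 polynomial kernel in 2-D, where the explicit feature map phi is written out only to verify the identity (the names phi and poly_kernel are illustrative):

```python
import numpy as np

def phi(x):
    # Explicit feature map for the degree-2 polynomial kernel in 2-D:
    # (x.z + 1)^2 = phi(x) . phi(z) with this 6-dimensional embedding.
    x1, x2 = x
    return np.array([x1 * x1, x2 * x2,
                     np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     1.0])

def poly_kernel(x, z):
    # Kernel trick: the same inner product, computed in the original
    # 2-D space without ever building phi.
    return (x @ z + 1.0) ** 2

x = np.array([1.0, 2.0])
z = np.array([3.0, 0.5])
print(phi(x) @ phi(z))    # 25.0
print(poly_kernel(x, z))  # 25.0
```

An SVM trained with this kernel finds a linear separator in the 6-dimensional embedded space, which corresponds to a quadratic boundary in the original 2-D space.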
