MACHINE LEARNING

APPLICATION OF SUPERVISED LEARNING

NEURAL NETWORK

Question
L1 regularisation favours a few non-zero weights, whereas L2 regularisation favours small values close to zero.
A
True
B
False
C
Either A or B
D
None of the above
Explanation: The correct answer is A (True).

Detailed explanation-1: -Regularization limits the magnitude of model weights by adding a penalty for weights to the model error function. L1 regularization uses the sum of the absolute values of the weights. L2 regularization uses the sum of the squared values of the weights.
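The two penalty terms described above can be sketched in a few lines of NumPy. The weight vector and the regularisation strength `alpha` here are arbitrary illustrative values, not taken from any particular model:

```python
import numpy as np

# Hypothetical weight vector and regularisation strength, chosen for illustration.
w = np.array([0.5, -1.2, 0.0, 3.0])
alpha = 0.1

# L1 penalty: sum of the absolute values of the weights.
l1_penalty = alpha * np.sum(np.abs(w))

# L2 penalty: sum of the squared values of the weights.
l2_penalty = alpha * np.sum(w ** 2)

print(l1_penalty)  # 0.1 * (0.5 + 1.2 + 0.0 + 3.0) = 0.47
print(l2_penalty)  # 0.1 * (0.25 + 1.44 + 0.0 + 9.0) = 1.069
```

Either penalty would be added to the model's loss; note how the squaring in L2 punishes the large weight (3.0) far more heavily than L1 does.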

Detailed explanation-2: -You can think of the derivative of L1 as a force that subtracts a constant from the weight at every update. Because of the absolute value, however, L1 is non-differentiable at 0, and updates that would push a weight across 0 instead clip it to exactly 0. This is why L1 produces sparse solutions with few non-zero weights.

Detailed explanation-3: -The regularization term is defined as the Euclidean Norm (or L2 norm) of the weight matrices, which is the sum over all squared weight values of a weight matrix. The regularization term is weighted by the scalar alpha divided by two and added to the regular loss function that is chosen for the current task.
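The contrast between the two update rules in explanations 2 and 3 can be sketched as a single gradient step applied to the penalty term alone. The learning rate `lr` and penalty weight `lam` are assumed values for illustration; the L1 step uses the standard soft-thresholding form that zeroes weights whose update crosses 0:

```python
import numpy as np

w = np.array([0.05, -0.5, 2.0])  # hypothetical weights
lr, lam = 0.1, 1.0               # assumed learning rate and penalty weight

# L2 step on the penalty alone: subtract lr * lam * w,
# a multiplicative shrink that never reaches exactly zero.
w_l2 = w - lr * lam * w

# L1 step (soft-thresholding): subtract a constant lr * lam,
# clipping any weight that would cross zero to exactly zero.
step = lr * lam
w_l1 = np.sign(w) * np.maximum(np.abs(w) - step, 0.0)

print(w_l2)  # [0.045, -0.45, 1.8]: every weight shrunk, none exactly zero
print(w_l1)  # [0.0, -0.4, 1.9]: the small weight is zeroed out
```

One such step already shows the answer to the question: L1 drives small weights to exactly zero (sparsity), while L2 merely keeps all weights small.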
