APPLICATION OF SUPERVISED LEARNING
DEEP LEARNING
Question
(A) Binary cross entropy
(B) Categorical cross entropy
(C) Root mean squared error
(D) Sum of squared error
Detailed explanation-1: Low log-loss values correspond to high accuracy. For a single sample, binary cross entropy equals -log(likelihood), i.e. -[y_i * log(p_i) + (1 - y_i) * log(1 - p_i)], where y_i is the actual class (0 or 1) and p_i is the predicted probability that the sample belongs to class 1.
Detailed explanation-2: Binary cross-entropy is a special case of cross-entropy, used when the target is either 0 or 1. In a neural network, this prediction typically comes from a sigmoid activation on a single output neuron. The target is a single scalar, not a probability vector.
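The sigmoid activation mentioned above can be sketched as follows; the logit value here is a hypothetical raw score from a network's single output neuron:

```python
import math

def sigmoid(z):
    # Squash a raw score into (0, 1) so it can be read as P(y = 1)
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical logit from the output neuron
logit = 2.0
p = sigmoid(logit)
print(round(p, 4))  # → 0.8808, the predicted probability of class 1
```

Because the target is a single 0/1 value, P(y = 0) is simply 1 - p; no probability vector (as in softmax for categorical cross entropy) is needed.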
Detailed explanation-3: What is binary cross entropy, or log loss? Binary cross entropy compares each predicted probability to the actual class output, which can be either 0 or 1. It then computes a score that penalizes the probabilities according to their distance from the expected value, i.e. how close or far each prediction is from the actual label.
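A minimal sketch of the loss described above, assuming the standard mean binary cross entropy over a batch (the labels and probabilities below are illustrative values, not from the source):

```python
import math

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Mean of -[y*log(p) + (1-y)*log(1-p)] over all samples;
    # eps guards against log(0) when a prediction is exactly 0 or 1.
    total = 0.0
    for y, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)
        total += -(y * math.log(p) + (1 - y) * math.log(1 - p))
    return total / len(y_true)

# Confident predictions close to the true labels give a low loss ...
print(binary_cross_entropy([1, 0, 1], [0.9, 0.1, 0.8]))
# ... while predictions far from the labels are penalized heavily.
print(binary_cross_entropy([1, 0, 1], [0.1, 0.9, 0.2]))
```

The second call returns a much larger loss than the first, illustrating how the penalty grows with the distance between the predicted probability and the actual 0/1 value.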