MCQ IN COMPUTER SCIENCE & ENGINEERING

MACHINE LEARNING

Question
When updating your weights using the loss function, what dictates how much change the weights should have?
A. Batch size
B. Learning rate (correct answer)
C. Initial weights
D. Bias term
Explanation: 

Detailed explanation-1: -The learning rate is a hyperparameter that controls how much we adjust the weights of our network with respect to the loss gradient.
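A minimal sketch of this idea (illustrative, not part of the original quiz): a single gradient-descent step, where the learning rate scales how much each weight changes in the direction opposite to its gradient.

```python
# Sketch of one gradient-descent weight update (assumed textbook form):
#   w_new = w_old - learning_rate * dLoss/dw
def update_weights(weights, gradients, learning_rate):
    """Return new weights after one gradient-descent step."""
    return [w - learning_rate * g for w, g in zip(weights, gradients)]

weights = [0.5, -0.3]
gradients = [0.2, -0.4]  # dLoss/dw for each weight

small_step = update_weights(weights, gradients, learning_rate=0.01)
large_step = update_weights(weights, gradients, learning_rate=1.0)
print(small_step)  # a small learning rate nudges the weights slightly
print(large_step)  # a large learning rate moves them much further
```

With the same gradients, only the learning rate decides the size of the update, which is exactly what the question is testing.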

Detailed explanation-2: -Recall that for a neural network to learn, the weights associated with neuron connections must be updated after forward passes of data through the network. These weights are adjusted to reduce the difference between the actual and predicted outcomes on subsequent forward passes.

Detailed explanation-3: -This is done by making small adjustments in the weights to reduce the difference between the actual and desired outputs of the perceptron. The initial weights are randomly assigned, usually in the range [-0.5, 0.5], and then updated to obtain the output consistent with the training examples.

Detailed explanation-4: -In ADALINE, the weights are adjusted according to the raw weighted sum of the inputs (the net), whereas in the perceptron only the sign of the weighted sum determines the output, with the threshold set at 0, -1, or +1. This is what makes ADALINE different from the standard perceptron.
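The contrast above can be sketched in code (an assumption based on the standard textbook update rules, not taken from the quiz itself): the perceptron computes its error from the thresholded output, while ADALINE computes it from the raw weighted sum.

```python
# Sketch of the two update rules (assumed textbook forms):
# perceptron uses the thresholded output, ADALINE uses the raw net.
def net(weights, x):
    return sum(w * xi for w, xi in zip(weights, x))

def perceptron_update(weights, x, target, lr):
    # Error is based on the *sign* of the weighted sum (threshold at 0).
    output = 1 if net(weights, x) >= 0 else -1
    return [w + lr * (target - output) * xi for w, xi in zip(weights, x)]

def adaline_update(weights, x, target, lr):
    # Error is based on the raw weighted sum itself.
    output = net(weights, x)
    return [w + lr * (target - output) * xi for w, xi in zip(weights, x)]

w = [0.2, 0.1]
x = [1.0, 1.0]
print(perceptron_update(w, x, target=1, lr=0.1))  # correctly classified: no change
print(adaline_update(w, x, target=1, lr=0.1))     # still adjusts toward target=1
```

Note the consequence: once the perceptron classifies an example correctly, its weights stop moving, while ADALINE keeps shrinking the difference between the net and the target.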
