COMPUTER FUNDAMENTALS

EMERGING TRENDS IN COMPUTING

ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING

Question
What is the most important difference between batch gradient descent, mini-batch gradient descent, and stochastic gradient descent?
(A) Gradient size
(B) Gradient direction
(C) Learning rate
(D) Number of samples used
Explanation: The correct answer is (D) Number of samples used.

Detailed explanation-1: Batch gradient descent computes the gradient over the entire training set, so each step moves steadily toward the minimum, but a single step is expensive when the dataset is large. SGD updates the parameters using one example at a time, which makes it practical for large datasets and lets it make progress faster per pass; however, because each update uses only one sample, the update itself cannot be vectorized over a batch and the path toward the minimum is noisy.

Detailed explanation-2: When the batch size is one sample, the learning algorithm is called stochastic gradient descent. When the batch size is more than one sample but less than the size of the training dataset, the learning algorithm is called mini-batch gradient descent. When the batch size equals the size of the training dataset, it is called batch gradient descent.
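
To make this concrete, here is a minimal NumPy sketch (the function name, learning rate, and toy data are illustrative assumptions, not part of the question): the update rule is identical in all three variants, and only the batch_size argument, i.e. the number of samples used per update, distinguishes batch, mini-batch, and stochastic gradient descent.

import numpy as np

def gradient_descent(X, y, batch_size, lr=0.01, epochs=100, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n)                         # shuffle once per epoch
        for start in range(0, n, batch_size):
            batch = idx[start:start + batch_size]        # samples used for this update
            Xb, yb = X[batch], y[batch]
            grad = 2 * Xb.T @ (Xb @ w - yb) / len(batch) # mean-squared-error gradient over the batch
            w -= lr * grad
    return w

X = np.random.default_rng(1).normal(size=(200, 3))       # toy data (assumed)
y = X @ np.array([1.5, -2.0, 0.5])

w_full = gradient_descent(X, y, batch_size=len(X))       # batch gradient descent
w_mini = gradient_descent(X, y, batch_size=32)           # mini-batch gradient descent
w_sgd  = gradient_descent(X, y, batch_size=1)            # stochastic gradient descent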

Detailed explanation-3: Gradient descent is also related to gradient boosting: both algorithms descend the gradient of a differentiable loss function. Gradient descent "descends" the gradient by introducing changes to the parameters of a single model, whereas gradient boosting descends the gradient by introducing new models into an ensemble.
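
As a rough illustration of this contrast (a sketch only; the toy data and the use of scikit-learn's DecisionTreeRegressor as a weak learner are assumptions, not something the explanation specifies), gradient descent adjusts a parameter in place, while gradient boosting grows an ensemble by fitting each new model to the current negative gradient, which for squared loss is simply the residual.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(300, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=300)

# Gradient descent: descend the loss by changing the PARAMETER of one fixed model y = w*x.
w = 0.0
for _ in range(100):
    grad = 2 * np.mean((w * X[:, 0] - y) * X[:, 0])      # d/dw of the mean squared error
    w -= 0.1 * grad

# Gradient boosting (squared loss): descend the loss by adding NEW MODELS,
# each fit to the current negative gradient (the residual for squared loss).
prediction = np.zeros_like(y)
for _ in range(50):
    residual = y - prediction
    tree = DecisionTreeRegressor(max_depth=2).fit(X, residual)
    prediction += 0.1 * tree.predict(X)                  # 0.1 acts as a shrinkage rate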
