MACHINE LEARNING

APPLICATION OF SUPERVISED LEARNING

DEEP LEARNING

Question
Which of the following are commonly used gradient descent optimization algorithms?
A. Stochastic gradient descent
B. Adadelta
C. Adagrad
D. Momentum
E. RMSProp
Explanation: 

Detailed explanation-1: Adam, or Adaptive Moment Estimation, combines the heuristics of both Momentum and RMSProp.
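
The combination is visible in the update rule itself. Below is a minimal NumPy sketch of one Adam step (the function name adam_step and the toy problem are illustrative, not from any particular library): the first-moment estimate m plays the role of Momentum, while the second-moment estimate v plays the role of RMSProp.

```python
import numpy as np

def adam_step(w, grad, m, v, t, lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    # First moment: exponential moving average of gradients (the Momentum heuristic).
    m = beta1 * m + (1 - beta1) * grad
    # Second moment: exponential moving average of squared gradients (the RMSProp heuristic).
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias correction for the zero-initialized moment estimates.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    # Update: step in the momentum direction, scaled per-parameter by the RMS magnitude.
    w = w - lr * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v

# Toy usage: minimize f(w) = (w - 3)^2 starting from w = 0.
w = np.array(0.0)
m = np.zeros_like(w)
v = np.zeros_like(w)
for t in range(1, 501):
    grad = 2 * (w - 3)
    w, m, v = adam_step(w, grad, m, v, t)
print(w)  # converges toward 3.0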

Detailed explanation-2: Adam is slower to change its direction, and then much slower to get back to the minimum. RMSProp with momentum, however, reaches much further before it changes direction (when both use the same learning rate).
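
For reference, "RMSProp with momentum" in the comparison above usually means applying a classical momentum buffer to the RMS-scaled gradient. The sketch below follows one common formulation (e.g., PyTorch's RMSprop when its momentum option is enabled); the function name and hyperparameter defaults are illustrative assumptions.

```python
import numpy as np

def rmsprop_momentum_step(w, grad, v, buf, lr=0.01, alpha=0.99, mu=0.9, eps=1e-8):
    # Running average of squared gradients (the RMSProp part).
    v = alpha * v + (1 - alpha) * grad ** 2
    # Scale the raw gradient by its running RMS magnitude.
    scaled = grad / (np.sqrt(v) + eps)
    # Classical momentum buffer applied to the scaled gradient (the momentum part).
    buf = mu * buf + scaled
    w = w - lr * buf
    return w, v, buf
```

Because the momentum buffer keeps accumulating the already-RMS-scaled steps, this variant keeps moving in its old direction longer, which is consistent with the overshoot-then-turn behavior described above.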
