MACHINE LEARNING

APPLICATION OF SUPERVISED LEARNING

MACHINE LEARNING PIPELINE

Question
Which inference type should you use if you have requirements for latency and predictions that are based on dynamic features?
A
Online inference
B
Batch inference
C
Either A or B
D
None of the above
Answer: A (Online inference)

Explanation: 

Detailed explanation-1: -Batch inference offers data scientists several benefits. Because latency requirements are typically on the order of hours or days, latency is rarely a concern, which allows data scientists to use tools such as Spark to generate predictions over large batches of observations.

Detailed explanation-2: -Online predictions: using a REST API, deploy your model to make it available for prediction requests. Online prediction is synchronous (real-time), meaning it immediately returns a prediction; however, each API call accepts only one prediction request.

Detailed explanation-3: -Batch inference: an asynchronous process that bases its predictions on a batch of observations. The predictions are stored as files or in a database for end users or business applications. Real-time (or interactive) inference: allows the model to make a prediction at any time and return an immediate response.
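The contrast between the two inference styles above can be sketched in a few lines. This is a minimal illustration, not a production setup: the model (a hand-coded weighted sum), its weights, and the sample feature values are all hypothetical stand-ins.

```python
def predict(features):
    # Hypothetical stand-in for a trained model: a simple weighted sum.
    weights = [0.4, 0.6]
    return sum(w * x for w, x in zip(weights, features))

def online_predict(request_features):
    # Online inference: one synchronous request, one immediate prediction,
    # typically served behind a REST API for dynamic (real-time) features.
    return predict(request_features)

def batch_predict(observations):
    # Batch inference: an asynchronous job scores many stored observations;
    # in practice the results would be written to files or a database.
    return [predict(obs) for obs in observations]

single = online_predict([1.0, 2.0])               # real-time, per request
scores = batch_predict([[1.0, 2.0], [3.0, 1.0]])  # scheduled, in bulk
```

The key operational difference is who waits: the online caller blocks for an immediate answer, while batch consumers read precomputed predictions later.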
