APPLICATION OF SUPERVISED LEARNING
MACHINE LEARNING PIPELINE
Question
A. Online inference
B. Batch inference
C. Either A or B
D. None of the above
Detailed explanation-1: Batch inference affords data scientists several benefits. Because latency requirements are typically on the order of hours or days, latency is rarely a constraint, which lets data scientists use tools such as Spark to generate predictions over large batches of data.
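The batch pattern above can be sketched in plain Python. This is a minimal illustration, not a production pipeline: the `predict` function is a hypothetical stand-in for a trained model, and the file path and record layout are assumptions for this example. In practice the scoring step would often run on Spark over much larger data.

```python
import json
import os
import tempfile

def predict(features):
    # Hypothetical stand-in for a trained model:
    # classify as 1 when the feature sum is positive.
    return 1 if sum(features) > 0 else 0

def batch_inference(rows, out_path):
    # Score the entire batch in one offline pass; with latency
    # requirements of hours or days, heavyweight tooling is fine here.
    predictions = [{"id": row["id"], "prediction": predict(row["features"])}
                   for row in rows]
    # Persist results so end users or applications can look them up later.
    with open(out_path, "w") as f:
        json.dump(predictions, f)
    return predictions

rows = [{"id": 1, "features": [0.5, 0.2]},
        {"id": 2, "features": [-1.0, 0.1]}]
out_path = os.path.join(tempfile.gettempdir(), "predictions.json")
preds = batch_inference(rows, out_path)
print(preds)
```

Note that the caller never waits on a live model: predictions are computed ahead of time and read from storage when needed, which is exactly why batch inference tolerates high latency.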
Detailed explanation-2: For online predictions, you deploy your model behind a REST API to make it available for prediction requests. Online prediction is synchronous (real-time): each call immediately returns a prediction, but each API call accepts only one prediction request.
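A minimal sketch of this synchronous request/response pattern, using only the Python standard library. A real deployment would use a proper model-serving framework; the payload shape and the threshold-rule `predict` function are assumptions made for illustration.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(features):
    # Hypothetical stand-in for a trained model:
    # classify as 1 when the feature sum is positive.
    return 1 if sum(features) > 0 else 0

class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # One prediction request per API call, answered immediately.
        length = int(self.headers["Content-Length"])
        payload = json.loads(self.rfile.read(length))
        body = json.dumps({"prediction": predict(payload["features"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the demo quiet

# Start the server on an ephemeral port and issue one synchronous request.
server = HTTPServer(("localhost", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
req = urllib.request.Request(
    f"http://localhost:{server.server_address[1]}/predict",
    data=json.dumps({"features": [0.5, 0.2]}).encode(),
    method="POST",
)
response = json.loads(urllib.request.urlopen(req).read())
print(response)
server.shutdown()
```

The client blocks until the response arrives, which is what "synchronous" means here: low latency per request, but one observation scored per call.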
Detailed explanation-3: Batch inference is an asynchronous process that generates predictions from a batch of observations; the predictions are stored as files or in a database for end users or business applications. Real-time (or interactive) inference allows the model to make predictions at any time and triggers an immediate response.