MCQ IN COMPUTER SCIENCE & ENGINEERING


MACHINE LEARNING

Question
Which inference type should you use if you have requirements for latency and predictions that are based on dynamic features?
A. Online inference
B. Batch inference
C. Either A or B
D. None of the above
Explanation: The correct answer is (A) Online inference.

Detailed explanation-1: -If predictions need to be served on an individual basis and within the time of a single web request, online inference is the way to go.

Detailed explanation-2: -Offline inference means making all possible predictions in a batch, using MapReduce or something similar. You then write the predictions to an SSTable or Bigtable and feed them to a cache/lookup table. Online inference means predicting on demand, using a server.

Detailed explanation-3: -What is latency in machine learning (ML)? Latency is a measurement used to compare the performance of models for a specific application. It refers to the time taken to process one unit of data, assuming only one unit is processed at a time.

Detailed explanation-4: -Batch inference: an asynchronous process that bases its predictions on a batch of observations; the predictions are stored as files or in a database for end users or business applications. Real-time (or interactive) inference: allows the model to make predictions at any time and trigger an immediate response.

Detailed explanation-5: -Dynamic (online) inference means making predictions on demand. That is, in online inference, we put the trained model on a server and issue inference requests as needed.
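The contrast above can be sketched in code. This is a minimal illustration, not a real serving framework: the `model` function and the names `serve_batch`/`serve_online` are stand-ins invented for this example. It shows why batch inference, which precomputes predictions into a lookup table, cannot handle inputs (dynamic features) that were not known at batch time, while online inference can.

```python
# Toy "model": a stand-in for a trained model's predict step
# (illustrative only; not a real ML library API).
def model(features: float) -> float:
    return 2.0 * features + 1.0

# --- Batch (offline) inference ---
# Precompute predictions for all known inputs ahead of time and
# serve them from a cache/lookup table (cf. SSTable/Bigtable above).
known_inputs = [0.0, 1.0, 2.0, 3.0]
prediction_cache = {x: model(x) for x in known_inputs}

def serve_batch(x: float) -> float:
    # Fast lookup, but raises KeyError for inputs not precomputed.
    return prediction_cache[x]

# --- Online inference ---
# Run the model on demand for each request, so freshly computed
# (dynamic) features can be used at serving time.
def serve_online(dynamic_features: float) -> float:
    return model(dynamic_features)

print(serve_batch(2.0))    # served from the precomputed cache
print(serve_online(2.5))   # computed on demand; 2.5 was never precomputed
```

Calling `serve_batch(2.5)` would fail, since 2.5 is not in the cache; this is exactly why requirements for predictions based on dynamic features point to online inference.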
