MCQ IN COMPUTER SCIENCE & ENGINEERING


MACHINE LEARNING

Question
A team of Data Scientists wants to use Amazon SageMaker to run two different versions of the same model in parallel, to compare the long-term effectiveness of each version at reaching the related business outcome. How should the team deploy these two model versions with minimum management?
A
Create a Lambda function that preprocesses the incoming data, calls a single Amazon SageMaker endpoint for the two models, and finally returns the prediction.
B
Create an endpoint configuration with production variants for the two models with equal weights.
C
Create an endpoint configuration with production variants for the two models with a weight ratio of 90:10.
D
Create a Lambda function that downloads the models from Amazon S3 and calculates and returns the predictions of the two models.
Explanation: The correct answer is B. A single SageMaker endpoint can host multiple production variants, and the endpoint configuration's variant weights control how traffic is split between them. Assigning equal weights routes roughly 50% of requests to each model version, giving a fair comparison with no extra infrastructure to manage; a 90:10 split (option C) would bias the comparison, and the Lambda-based options (A and D) add custom code and management overhead that SageMaker already handles.

Detailed explanation: Amazon SageMaker is a fully managed service to prepare data and build, train, and deploy machine learning (ML) models for any use case, with fully managed infrastructure, tools, and workflows.
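The equal-weight setup described in option B can be sketched as a boto3 endpoint configuration. This is a minimal sketch, not the team's actual setup: the model names, variant names, instance type, and config name below are hypothetical placeholders.

```python
# Hypothetical two-variant endpoint configuration for A/B comparison.
# Assumes two SageMaker models ("model-version-a", "model-version-b")
# have already been created; all names here are illustrative.
variants = [
    {
        "VariantName": "VersionA",
        "ModelName": "model-version-a",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
        "InitialVariantWeight": 1.0,  # equal weights -> ~50% of traffic
    },
    {
        "VariantName": "VersionB",
        "ModelName": "model-version-b",
        "InstanceType": "ml.m5.large",
        "InitialInstanceCount": 1,
        "InitialVariantWeight": 1.0,  # equal weights -> ~50% of traffic
    },
]

# The configuration would then be registered with SageMaker, e.g.:
# import boto3
# sm = boto3.client("sagemaker")
# sm.create_endpoint_config(
#     EndpointConfigName="two-variant-ab-config",
#     ProductionVariants=variants,
# )
```

Because both variants sit behind one endpoint, clients keep calling a single endpoint name while SageMaker splits traffic according to the weights, which is what makes this the minimum-management choice.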
