Question 1
Which of the following descriptions accurately describes Azure Machine Learning?
Answers
- A Python library that you can use as an alternative to common machine learning frameworks such as Scikit-Learn, PyTorch, and TensorFlow.
- An application for Microsoft Windows that enables you to create machine learning models by using a drag-and-drop interface.
- A cloud-based platform for operating machine learning solutions at scale.
Question 2
Which edition of Azure Machine Learning workspace should you provision if you only plan to use the graphical Designer tool to train machine learning models?
Question 3
You need a cloud-based development environment that you can use to run Jupyter notebooks that are stored in your workspace. The notebooks must remain in your workspace at all times.
What should you do?
Answers
- Install Visual Studio Code on your local computer.
- Create a Compute Instance compute target in your workspace.
- Create a Training Cluster compute target in your workspace.
Question 4
You plan to use the Workspace.from_config() method to connect to your Azure Machine Learning workspace from a Python environment on your local workstation. You have already used pip to install the azureml-sdk package.
What else should you do?
Answers
- Run pip install azureml-sdk['notebooks'] to install the notebooks extra.
- Download the config.json file for your workspace to the folder containing your local Python code files.
- Create a Compute Instance compute target in your workspace.
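For context, here is a minimal sketch of the connection pattern this question describes, assuming the workspace's config.json file has already been downloaded to the working directory:
--
from azureml.core import Workspace

# from_config() looks for config.json in the current directory
# (or a .azureml subfolder) and connects to the workspace it describes
ws = Workspace.from_config()
print(ws.name)
--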
Question 5
You need to ingest data from a CSV file into a pipeline in Designer. What should you do?
Answers
- Create a Dataset by uploading the file, and drag the dataset to the canvas.
- Add a Convert to CSV module to the canvas.
- Add an Enter Data Manually module to the canvas.
Question 6
You have created a pipeline that includes multiple modules to define a dataflow and train a model. Now you want to run the pipeline.
What must you do first?
Answers
- Add comments to each of the modules on the pipeline canvas.
- Rename the pipeline to include the date and time.
- Create a Training Cluster in your workspace, and select it as the compute target for the pipeline.
Question 7
You have created and run a pipeline to train a model using the Designer tool. Now you want to publish it as a real-time service.
What must you do first?
Answers
- Create an inference pipeline from your training pipeline.
- Clone the training pipeline with a different name.
- Change the compute target of the training pipeline to an Azure Kubernetes Service (AKS) cluster.
Question 8
You have published a pipeline as a real-time service on an Azure Kubernetes Service (AKS) cluster. An application developer plans to call the service from a REST-based client.
What information does the application developer require?
Answers
- The name of the inference pipeline in Designer.
- The endpoint URL and key for the published service.
- The name of the AKS compute target in the workspace.
Question 9
You are using the Azure Machine Learning Python SDK to write code for an experiment. You need to record metrics from each run of the experiment, and be able to retrieve them easily from each run.
What should you do?
Answers
- Add print statements to the experiment code to print the metrics.
- Use the log methods of the Run class to record named metrics.
- Save the experiment data in the outputs folder.
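For reference, a minimal sketch of the logging pattern the Run class supports inside an experiment script (the metric name and value are illustrative):
--
from azureml.core import Run

# Get the context of the current experiment run
run = Run.get_context()

# Record a named metric; it can later be retrieved with run.get_metrics()
run.log('Accuracy', 0.95)
--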
Question 10
You want to run a script as an experiment. You have already created a RunConfig object to define the Python runtime context for the experiment.
What other object should you create to associate the script with the runtime context?
Question 11
You have written a script that uses the Scikit-Learn framework to train a model.
Which framework-specific estimator should you use to run the script as an experiment?
Answers
- PyTorch
- TensorFlow
- SKLearn
Question 12
You have run an experiment to train a model. You want the model to be stored in the workspace, and available to other experiments and published services.
What should you do?
Answers
- Register the model in the workspace.
- Save the model as a file in a Compute Instance.
- Save the experiment script as a notebook.
Question 13
You are using the Azure Machine Learning Python SDK, and you need to create a reference to the default datastore in your workspace. You have written the following code:
--
from azureml.core import Workspace
# Get the workspace
ws = Workspace.from_config()
# Get the default datastore
--
Which line of code should you add?
Answers
- default_ds = ws.get_default_datastore()
- default_ds = ws.set_default_datastore()
- default_ds = ws.datastores.get('default')
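To show how the resulting reference is typically used, a hedged sketch that uploads a local file to the default datastore (the file and target paths are illustrative):
--
from azureml.core import Workspace

ws = Workspace.from_config()
default_ds = ws.get_default_datastore()

# Upload a local file to the storage behind the default datastore
default_ds.upload_files(files=['./data/training.csv'],
                        target_path='data/',
                        overwrite=True)
--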
Question 14
You have uploaded some data files to a folder in a blob container, and registered the blob container as a datastore in your Azure Machine Learning workspace. You want to run a script as an experiment that loads the data files and trains a model.
What should you do?
Answers
- Save the experiment script in the same blob folder as the data files.
- Create a data reference for the datastore location and pass it to the script as a parameter.
- Create global variables for the Azure Storage account name and key in the experiment script.
Question 15
You have a CSV file containing structured data that you want to use to train a model. You upload the file to a folder in an Azure Storage blob container, for which a datastore is defined in your workspace. Now you want to create a dataset for the data so that it can be easily used as a Pandas dataframe.
Which kind of dataset should you create?
Answers
- A file dataset
- A tabular dataset
Question 16
You have registered a dataset in your workspace. You want to use the dataset in an experiment script that is run using an estimator.
What should you do?
Answers
- Pass the dataset as a named input to the estimator.
- Create a data reference for the datastore location where the dataset data is stored, and pass it to the script as a parameter.
- Use the dataset to save the data as a CSV file in the experiment script folder before running the experiment.
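A hedged sketch of the named-input pattern, assuming a registered dataset and an existing compute target (the dataset, folder, script, and compute names are illustrative):
--
from azureml.core import Workspace, Dataset
from azureml.train.estimator import Estimator

ws = Workspace.from_config()
dataset = Dataset.get_by_name(ws, name='training-data')

# The named input makes the dataset available to the script at run time
estimator = Estimator(source_directory='experiment_folder',
                      entry_script='training_script.py',
                      compute_target='aml-cluster',
                      inputs=[dataset.as_named_input('training_data')])
--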
Question 17
You are using the Azure Machine Learning Python SDK to run experiments. You need to create an environment from a Conda configuration (.yml) file.
Which method of the Environment class should you use?
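As a study aid, a minimal sketch of the pattern this question targets, assuming a Conda specification file named conda.yml (both names shown are illustrative):
--
from azureml.core import Environment

# Create an environment from a Conda configuration (.yml) file
env = Environment.from_conda_specification(name='training_environment',
                                           file_path='conda.yml')
--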
Question 18
You have registered an environment in your workspace, and retrieved it using the following code:
--
from azureml.core import Environment
from azureml.train.estimator import Estimator

env = Environment.get(workspace=ws, name='my_environment')
estimator = Estimator(entry_script='training_script.py', ...)
--
You want to use the environment as the Python context for an experiment script that is run using an estimator.
Which property of the estimator should you set to assign the environment?
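A hedged sketch of assigning the retrieved environment, keeping the names from the question and adding illustrative folder and compute names:
--
from azureml.core import Workspace, Environment
from azureml.train.estimator import Estimator

ws = Workspace.from_config()
env = Environment.get(workspace=ws, name='my_environment')

# The environment_definition parameter assigns the Python context
estimator = Estimator(source_directory='experiment_folder',
                      entry_script='training_script.py',
                      compute_target='aml-cluster',
                      environment_definition=env)
--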
Question 19
You need to create a compute target for training experiments that require a graphics processing unit (GPU). You want to be able to scale the compute so that multiple nodes are started automatically as required.
Which kind of compute target should you create?
Answers
- Compute Instance
- Training Cluster
- Inference Cluster
Question 20
You are using an estimator to run an experiment, and you want to run it on a compute target named training-cluster-1.
Which property of the estimator should you set to run the experiment on training-cluster-1?
Answers
- compute_target = 'training-cluster-1'
- environment_definition = 'training-cluster-1'
- source_directory = 'training-cluster-1'
Question 21
You are creating a pipeline that includes a step to train a model using an estimator.
Which kind of step should you define in the pipeline for this task?
Answers
- DatabricksStep
- PythonScriptStep
- EstimatorStep
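A hedged sketch of wrapping an estimator in a pipeline step (the folder, script, and compute names are illustrative):
--
from azureml.train.estimator import Estimator
from azureml.pipeline.steps import EstimatorStep

# An estimator that defines the training script to run
estimator = Estimator(source_directory='experiment_folder',
                      entry_script='training_script.py')

# The pipeline step that runs the estimator on the given compute target
train_step = EstimatorStep(name='train model',
                           estimator=estimator,
                           compute_target='aml-cluster',
                           estimator_entry_script_arguments=[])
--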
Question 22
You are creating a pipeline that includes two steps. Step 1 preprocesses some data, and step 2 uses the preprocessed data to train a model.
What type of object should you use to pass data from step 1 to step 2 and create a dependency between these steps?
Answers
- Datastore
- PipelineData
- Data Reference
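A hedged sketch of passing intermediate data between two steps with a PipelineData object (the folder, script, and compute names are illustrative):
--
from azureml.core import Workspace
from azureml.pipeline.core import PipelineData
from azureml.pipeline.steps import PythonScriptStep

ws = Workspace.from_config()

# Intermediate data written by step 1 and read by step 2; referencing it
# in both steps also creates the dependency between them
prepped_data = PipelineData('prepped', datastore=ws.get_default_datastore())

step1 = PythonScriptStep(name='prepare data',
                         source_directory='scripts',
                         script_name='prep.py',
                         compute_target='aml-cluster',
                         arguments=['--out_folder', prepped_data],
                         outputs=[prepped_data])

step2 = PythonScriptStep(name='train model',
                         source_directory='scripts',
                         script_name='train.py',
                         compute_target='aml-cluster',
                         arguments=['--in_folder', prepped_data],
                         inputs=[prepped_data])
--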
Question 23
You have used the Python SDK for Azure Machine Learning to create a pipeline that trains a model.
What do you need to do before publishing the pipeline?
Answers
- Rename the pipeline to pipeline_name-production.
- Run the pipeline as an experiment.
- Create an inference cluster compute target.
Question 24
You have published a pipeline that you want to run every week. You plan to use the Schedule.create method to create the schedule.
What kind of object must you create first to configure how frequently the pipeline runs?
Answers
- Datastore
- PipelineParameter
- ScheduleRecurrence
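A hedged sketch of a weekly schedule, assuming a pipeline has already been published (the names and the placeholder pipeline id are illustrative):
--
from azureml.core import Workspace
from azureml.pipeline.core import ScheduleRecurrence, Schedule

ws = Workspace.from_config()

# The recurrence object configures how frequently the pipeline runs
weekly = ScheduleRecurrence(frequency='Week', interval=1)

schedule = Schedule.create(ws,
                           name='weekly-training',
                           pipeline_id='<published-pipeline-id>',  # placeholder
                           experiment_name='training-schedule',
                           recurrence=weekly)
--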
Question 25
You have trained a model using the Python SDK for Azure Machine Learning. You want to deploy the model as a containerized real-time service with high scalability and security.
What kind of compute should you create to host the service?
Answers
- An Azure Kubernetes Service (AKS) inferencing cluster.
- A compute instance with GPUs.
- A training cluster with multiple nodes.
Question 26
You are deploying a model as a real-time inferencing service.
What functions must the entry script for the service include?
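As a study aid, a hedged sketch of the standard entry-script structure (the model name, file format, and input handling are illustrative):
--
import json
import joblib
from azureml.core.model import Model

def init():
    # Runs once when the service starts: load the registered model
    global model
    model_path = Model.get_model_path('my_model')  # illustrative model name
    model = joblib.load(model_path)

def run(raw_data):
    # Runs for each request: score the input data and return the results
    data = json.loads(raw_data)['data']
    predictions = model.predict(data)
    return json.dumps(predictions.tolist())
--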
Question 27
You are creating a batch inferencing pipeline that you want to use to predict new values for a large volume of data files. You want the pipeline to run the scoring script on multiple nodes and collate the results.
What kind of step should you include in the pipeline?
Answers
- PythonScriptStep
- ParallelRunStep
- AdlaStep
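A hedged sketch of a ParallelRunStep configuration, assuming a registered environment and dataset (all names and sizing values are illustrative):
--
from azureml.core import Workspace, Environment, Dataset
from azureml.pipeline.core import PipelineData
from azureml.pipeline.steps import ParallelRunConfig, ParallelRunStep

ws = Workspace.from_config()
batch_env = Environment.get(ws, 'batch_environment')  # illustrative
batch_data = Dataset.get_by_name(ws, 'batch-data')    # illustrative
output_dir = PipelineData('inferences', datastore=ws.get_default_datastore())

# Configuration for fanning the scoring script out across multiple nodes
parallel_run_config = ParallelRunConfig(source_directory='scripts',
                                        entry_script='batch_score.py',
                                        mini_batch_size='5',
                                        error_threshold=10,
                                        output_action='append_row',
                                        environment=batch_env,
                                        compute_target='aml-cluster',
                                        node_count=4)

step = ParallelRunStep(name='batch-score',
                       parallel_run_config=parallel_run_config,
                       inputs=[batch_data.as_named_input('batch_data')],
                       output=output_dir)
--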
Question 28
You have configured the step in your batch inferencing pipeline with an output_action="append_row" property.
In which file should you look for the batch inferencing results?
Answers
- parallel_run_step.txt
- output.txt
- stdoutlogs.txt
Question 29
You plan to use hyperparameter tuning to find optimal discrete values for a set of hyperparameters. You want to try every possible combination of a set of specified discrete values.
Which kind of sampling should you use?
Answers
- Grid Sampling
- Random Sampling
- Bayesian Sampling
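A hedged sketch of a grid sampling space (the hyperparameter names and values are illustrative):
--
from azureml.train.hyperdrive import GridParameterSampling, choice

# Grid sampling tries every combination of the discrete values below
param_sampling = GridParameterSampling({
    '--batch_size': choice(16, 32, 64),
    '--learning_rate': choice(0.01, 0.1, 1.0)
})
--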
Question 30
You are using hyperparameter tuning to train an optimal model. Your training script calculates the area under the curve (AUC) metric for the trained model like this:
--
y_scores = model.predict_proba(X_test)
auc = roc_auc_score(y_test, y_scores[:,1])
--
You define the hyperdrive configuration like this:
--
hyperdrive = HyperDriveConfig(estimator=sklearn_estimator,
                              hyperparameter_sampling=grid_sampling,
                              policy=None,
                              primary_metric_name='AUC',
                              primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
                              max_total_runs=6,
                              max_concurrent_runs=4)
--
Which code should you add to the training script?
Answers
- run.log('Accuracy', np.float(auc))
- print(auc)
- run.log('AUC', np.float(auc))
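Note that the log call only works once the script has a run context; a minimal hedged sketch (plain float is used here in place of np.float, which is deprecated in recent NumPy versions):
--
from azureml.core import Run

# Hyperdrive reads the primary metric from each child run, so the script
# must log it under the configured name ('AUC')
run = Run.get_context()
run.log('AUC', float(auc))  # auc computed earlier in the script
--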
Question 31
You are using automated machine learning to train a model that predicts the species of an iris based on its petal and sepal measurements.
Which kind of task should you specify for automated machine learning?
Answers
- Regression
- Classification
- Forecasting
Question 32
You have submitted an automated machine learning run using the Python SDK for Azure Machine Learning.
When the run completes, which method of the run object should you use to retrieve the best model?
Answers
- load_model()
- get_output()
- get_metrics()
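For reference, a hedged sketch of retrieving the best model, assuming a completed AutoML run (the experiment, dataset, and compute names are illustrative):
--
from azureml.core import Workspace, Experiment, Dataset
from azureml.train.automl import AutoMLConfig

ws = Workspace.from_config()
train_ds = Dataset.get_by_name(ws, 'iris-data')  # illustrative dataset

automl_config = AutoMLConfig(task='classification',
                             training_data=train_ds,
                             label_column_name='species',
                             primary_metric='accuracy',
                             compute_target='aml-cluster')

automl_run = Experiment(ws, 'automl-iris').submit(automl_config)
automl_run.wait_for_completion()

# get_output() returns the best child run and its fitted model
best_run, fitted_model = automl_run.get_output()
--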
Question 33
You have trained a model, and you want to quantify the influence of each feature on a specific individual prediction.
What kind of feature importance should you examine?
Question 34
You are using automated machine learning, and you want to determine the influence of features on the predictions made by the best model produced by the automated machine learning experiment.
What must you do when configuring the automated machine learning experiment?
Question 35
You want to create an explainer that applies the most appropriate SHAP model explanation algorithm based on the type of model.
What kind of explainer should you create?
Question 36
You want to include model explanations in the logged details of your training experiment.
What must you do in your training script?
Answers
- Use the Run.log_table method to log feature importance for each feature.
- Use the ExplanationClient.upload_model_explanation method to upload the explanation created by an Explainer.
- Save the explanation created by an Explainer in the ./outputs folder.
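A hedged sketch of the upload pattern inside a training script; the model, training data, feature names, and class names are assumed to be defined earlier in the script:
--
from azureml.core import Run
from azureml.interpret import ExplanationClient
from interpret.ext.blackbox import TabularExplainer

run = Run.get_context()

# model, X_train, feature_names, and class_names are assumed to exist
# earlier in the training script (illustrative)
explainer = TabularExplainer(model, X_train,
                             features=feature_names,
                             classes=class_names)
explanation = explainer.explain_global(X_train)

# Upload the explanation so it appears in the run's logged details
client = ExplanationClient.from_run(run)
client.upload_model_explanation(explanation, comment='Tabular explanation')
--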
Question 37
You have deployed a model as a real-time inferencing service in an Azure Kubernetes Service (AKS) cluster.
What must you do to capture and analyze telemetry for this service?
Answers
- Enable Application Insights.
- Implement inference-time model interpretability.
- Move the AKS cluster to the same region as the Azure Machine Learning workspace.
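A hedged sketch of enabling Application Insights on an existing deployed service (the service name is illustrative):
--
from azureml.core import Workspace
from azureml.core.webservice import AksWebservice

ws = Workspace.from_config()
service = AksWebservice(ws, 'my-inference-service')  # illustrative name

# Turn on Application Insights telemetry for the deployed service
service.update(enable_app_insights=True)
--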
Question 38
You want to include custom information in the telemetry for your inferencing service, and analyze it using Application Insights.
What must you do in your service's entry script?
Answers
- Use the Run.log method to log the custom metrics.
- Save the custom metrics in the ./outputs folder.
- Use a print statement to write the metrics in the STDOUT log.
Question 39
You have trained a model using a dataset containing data that was collected last year. As this year progresses, you will collect new data.
You want to track any changing data trends that might affect the performance of the model.
What should you do?
Answers
- Collect the new data in a new version of the existing training dataset, and profile both datasets.
- Collect the new data in a separate dataset and create a Data Drift Monitor with the training dataset as a baseline and the new dataset as a target.
- Replace the training dataset with a new dataset that contains both the original training data and the new data.
Question 40
You are creating a data drift monitor. You want to automatically notify the data science team if a significant change in data distribution is detected.
What must you do?
Answers
- Define an AlertConfiguration and set a drift_threshold value.
- Set the latency of the data drift monitor to allow time for data scientists to review the new data.
- Register the training dataset with the model, including the email address of the data science team as a tag.
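A hedged sketch of a drift monitor with an email alert (the dataset names, email address, and threshold are illustrative):
--
from azureml.core import Workspace, Dataset
from azureml.datadrift import DataDriftDetector, AlertConfiguration

ws = Workspace.from_config()
baseline = Dataset.get_by_name(ws, 'training-data')  # illustrative names
target = Dataset.get_by_name(ws, 'new-data')

# Email the data science team when measured drift exceeds the threshold
alert = AlertConfiguration(email_addresses=['datascience@example.com'])

monitor = DataDriftDetector.create_from_datasets(ws, 'drift-monitor',
                                                 baseline, target,
                                                 compute_target='aml-cluster',
                                                 frequency='Week',
                                                 drift_threshold=0.3,
                                                 alert_config=alert)
--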