Knowledge Check - Post

Description

Quiz on Knowledge Check - Post, created by Mohammed Arif Mazumder on 29/04/2020.

Resource Summary

Question 1

Question
Which of the following descriptions accurately describes Azure Machine Learning?
Answer
  • A Python library that you can use as an alternative to common machine learning frameworks like Scikit-Learn, PyTorch, and TensorFlow.
  • An application for Microsoft Windows that enables you to create machine learning models by using a drag and drop interface
  • A cloud-based platform for operating machine learning solutions at scale.

Question 2

Question
Which edition of Azure Machine Learning workspace should you provision if you only plan to use the graphical Designer tool to train machine learning models?
Answer
  • Enterprise
  • Basic

Question 3

Question
You need a cloud-based development environment that you can use to run Jupyter notebooks that are stored in your workspace. The notebooks must remain in your workspace at all times. What should you do?
Answer
  • Install Visual Studio Code on your local computer.
  • Create a Compute Instance compute target in your workspace.
  • Create a Training Cluster compute target in your workspace.

Question 4

Question
You plan to use the Workspace.from_config() method to connect to your Azure Machine Learning workspace from a Python environment on your local workstation. You have already used pip to install the azureml-sdk package. What else should you do?
Answer
  • Run pip install azureml-sdk['notebooks'] to install the notebooks extra
  • Download the config.json file for your workspace to the folder containing your local Python code files
  • Create a Compute Instance compute target in your workspace.
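
For reference, a minimal sketch of connecting after the workspace's config.json file has been downloaded into the code folder:

    from azureml.core import Workspace

    # from_config() reads config.json from the current folder
    # (or a parent folder) and connects to the workspace it describes.
    ws = Workspace.from_config()
    print(ws.name)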

Question 5

Question
You need to ingest data from a CSV file into a pipeline in Designer. What should you do?
Answer
  • Create a Dataset by uploading the file, and drag the dataset to the canvas
  • Add a Convert to CSV module to the canvas.
  • Add an Enter Data Manually module to the canvas.

Question 6

Question
You have created a pipeline that includes multiple modules to define a dataflow and train a model. Now you want to run the pipeline. What must you do first?
Answer
  • Add comments to each of the modules on the pipeline canvas.
  • Rename the pipeline to include the date and time.
  • Create a Training Cluster in your workspace, and select it as the compute target for the pipeline.

Question 7

Question
You have created and run a pipeline to train a model using the Designer tool. Now you want to publish it as a real-time service. What must you do first?
Answer
  • Create an inference pipeline from your training pipeline.
  • Clone the training pipeline with a different name.
  • Change the compute target of the training pipeline to an Azure Kubernetes Services (AKS) cluster.

Question 8

Question
You have published a pipeline as a real-time service on an Azure Kubernetes Services (AKS) cluster. An application developer plans to call the service from a REST-based client. What information does the application developer require?
Answer
  • The name of the inference pipeline in Designer.
  • The endpoint URL and key for the published service.
  • The name of the AKS compute target in the workspace.

Question 9

Question
You are using the Azure Machine Learning Python SDK to write code for an experiment. You need to record metrics from each run of the experiment, and be able to retrieve them easily from each run. What should you do?
Answer
  • Add print statements to the experiment code to print the metrics.
  • Use the log methods of the Run class to record named metrics.
  • Save the experiment data in the outputs folder
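
A minimal sketch of logging a named metric from an experiment script with the Run class (the accuracy value is a placeholder):

    from azureml.core import Run

    # Get the context of the current experiment run.
    run = Run.get_context()

    accuracy = 0.95  # placeholder value for illustration
    run.log('Accuracy', accuracy)  # retrievable later via run.get_metrics()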

Question 10

Question
You want to run a script as an experiment. You have already created a RunConfig object to define the Python runtime context for the experiment. What other object should you create to associate the script with the runtime context?
Answer
  • A ScriptRunConfig object.
  • A Pipeline object.
  • A ComputeTarget object.
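
A minimal sketch of associating a script with a runtime context via ScriptRunConfig (the script name and experiment name are placeholders):

    from azureml.core import Experiment, ScriptRunConfig, Workspace
    from azureml.core.runconfig import RunConfiguration

    ws = Workspace.from_config()
    run_config = RunConfiguration()  # the runtime context from the question

    # ScriptRunConfig ties the script to the runtime context.
    src = ScriptRunConfig(source_directory='.',
                          script='train.py',
                          run_config=run_config)

    run = Experiment(workspace=ws, name='my-experiment').submit(config=src)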

Question 11

Question
You have written a script that uses the Scikit-Learn framework to train a model. Which framework-specific estimator should you use to run the script as an experiment?
Answer
  • PyTorch
  • TensorFlow
  • SKLearn
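
A minimal sketch using the framework-specific SKLearn estimator (the script name and compute target are placeholders):

    from azureml.train.sklearn import SKLearn

    # The SKLearn estimator pre-configures a Scikit-Learn runtime
    # for the training script.
    estimator = SKLearn(source_directory='.',
                        entry_script='train.py',
                        compute_target='local')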

Question 12

Question
You have run an experiment to train a model. You want the model to be stored in the workspace, and available to other experiments and published services. What should you do?
Answer
  • Register the model in the workspace.
  • Save the model as a file in a Compute Instance.
  • Save the experiment script as a notebook.
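
A minimal sketch of registering a trained model in the workspace (the model name and path are placeholders):

    from azureml.core import Model, Workspace

    ws = Workspace.from_config()

    # Registration stores a versioned copy of the model in the workspace,
    # where other experiments and published services can retrieve it.
    model = Model.register(workspace=ws,
                           model_name='my_model',
                           model_path='outputs/model.pkl')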

Question 13

Question
You are using the Azure Machine Learning Python SDK, and you need to create a reference to the default datastore in your workspace. You have written the following code:

    from azureml.core import Workspace

    # Get the workspace
    ws = Workspace.from_config()

    # Get the default datastore

Which line of code should you add?
Answer
  • default_ds = ws.get_default_datastore()
  • default_ds = ws.set_default_datastore()
  • default_ds = ws.datastores.get('default')

Question 14

Question
You have uploaded some data files to a folder in a blob container, and registered the blob container as a datastore in your Azure Machine Learning workspace. You want to run a script as an experiment that loads the data files and trains a model. What should you do?
Answer
  • Save the experiment script in the same blob folder as the data files.
  • Create a data reference for the datastore location and pass it to the script as a parameter.
  • Create global variables for the Azure Storage account name and key in the experiment script.

Question 15

Question
You have a CSV file containing structured data that you want to use to train a model. You upload the file to a folder in an Azure Storage blob container, for which a datastore is defined in your workspace. Now you want to create a dataset for the data so that it can be easily used as a Pandas dataframe. Which kind of dataset should you create?
Answer
  • A file dataset
  • A tabular dataset

Question 16

Question
You have registered a dataset in your workspace. You want to use the dataset in an experiment script that is run using an estimator. What should you do?
Answer
  • Pass the dataset as a named input to the estimator.
  • Create a data reference for the datastore location where the dataset data is stored, and pass it to the script as a parameter.
  • Use the dataset to save the data as a CSV file in the experiment script folder before running the experiment.
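
A minimal sketch of passing a registered dataset as a named input to an estimator (the dataset, script, and input names are placeholders):

    from azureml.core import Dataset, Workspace
    from azureml.train.sklearn import SKLearn

    ws = Workspace.from_config()
    ds = Dataset.get_by_name(ws, 'my_dataset')

    # Inside the training script, the dataset is available as
    # run.input_datasets['training_data'].
    estimator = SKLearn(source_directory='.',
                        entry_script='train.py',
                        compute_target='local',
                        inputs=[ds.as_named_input('training_data')])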

Question 17

Question
You are using the Azure Machine Learning Python SDK to run experiments. You need to create an environment from a Conda configuration (.yml) file. Which method of the Environment class should you use?
Answer
  • create
  • from_conda_specification
  • from_existing_conda_environment
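
A minimal sketch of building an environment from a Conda specification file (the environment name and file path are placeholders):

    from azureml.core import Environment

    # Build an Environment from the dependencies listed in a .yml file.
    env = Environment.from_conda_specification(name='training-env',
                                               file_path='conda.yml')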

Question 18

Question
You have registered an environment in your workspace, and retrieved it using the following code:

    from azureml.core import Environment
    from azureml.train.estimator import Estimator

    env = Environment.get(workspace=ws, name='my_environment')
    estimator = Estimator(entry_script='training_script.py', ...)

You want to use the environment as the Python context for an experiment script that is run using an estimator. Which property of the estimator should you set to assign the environment?
Answer
  • compute_target = env
  • environment_definition = env
  • source_directory = env
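
Completing the question's code, a minimal sketch of assigning the environment (the source directory and compute target are placeholders):

    from azureml.core import Environment, Workspace
    from azureml.train.estimator import Estimator

    ws = Workspace.from_config()
    env = Environment.get(workspace=ws, name='my_environment')

    # Assign the registered environment as the Python context.
    estimator = Estimator(source_directory='.',
                          entry_script='training_script.py',
                          compute_target='local',
                          environment_definition=env)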

Question 19

Question
You need to create a compute target for training experiments that require a graphical processing unit (GPU). You want to be able to scale the compute so that multiple nodes are started automatically as required. Which kind of compute target should you create?
Answer
  • Compute Instance
  • Training Cluster
  • Inference Cluster
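
A minimal sketch of provisioning an auto-scaling GPU training cluster (the VM size, node counts, and cluster name are placeholders):

    from azureml.core import Workspace
    from azureml.core.compute import AmlCompute, ComputeTarget

    ws = Workspace.from_config()

    # A training cluster with a GPU VM size that scales between
    # 0 and 4 nodes as demand requires.
    compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_NC6',
                                                           min_nodes=0,
                                                           max_nodes=4)
    gpu_cluster = ComputeTarget.create(ws, 'gpu-cluster', compute_config)
    gpu_cluster.wait_for_completion(show_output=True)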

Question 20

Question
You are using an estimator to run an experiment, and you want to run it on a compute instance named training-cluster-1. Which property of the estimator should you set to run the experiment on training-cluster-1?
Answer
  • compute_target = 'training-cluster-1'
  • environment_definition = 'training-cluster-1'
  • source_directory = 'training-cluster-1'

Question 21

Question
You are creating a pipeline that includes a step to train a model using an estimator. Which kind of step should you define in the pipeline for this task?
Answer
  • DatabricksStep
  • PythonScriptStep
  • EstimatorStep

Question 22

Question
You are creating a pipeline that includes two steps. Step 1 preprocesses some data, and step 2 uses the preprocessed data to train a model. What type of object should you use to pass data from step 1 to step 2 and create a dependency between these steps?
Answer
  • Datastore
  • PipelineData
  • Data Reference
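
A minimal sketch of a two-step pipeline in which a PipelineData object passes data from a preprocessing step to an EstimatorStep, as in the previous question (script names, argument names, and the compute target are placeholders; estimator is assumed to exist):

    from azureml.core import Workspace
    from azureml.pipeline.core import Pipeline, PipelineData
    from azureml.pipeline.steps import EstimatorStep, PythonScriptStep

    ws = Workspace.from_config()

    # PipelineData is an output of step 1 and an input of step 2,
    # which both passes the data and creates the dependency.
    prepped_data = PipelineData('prepped_data', datastore=ws.get_default_datastore())

    step1 = PythonScriptStep(name='prep data',
                             source_directory='.',
                             script_name='prep.py',
                             arguments=['--out_folder', prepped_data],
                             outputs=[prepped_data],
                             compute_target='cpu-cluster')

    step2 = EstimatorStep(name='train model',
                          estimator=estimator,  # assumed to be defined earlier
                          estimator_entry_script_arguments=['--in_folder', prepped_data],
                          inputs=[prepped_data],
                          compute_target='cpu-cluster')

    pipeline = Pipeline(workspace=ws, steps=[step1, step2])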

Question 23

Question
You have used the Python SDK for Azure Machine Learning to create a pipeline that trains a model. What do you need to do before publishing the pipeline?
Answer
  • Rename the pipeline to pipeline_name-production.
  • Run the pipeline as an experiment.
  • Create an inference cluster compute target.
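
A minimal sketch of running the pipeline as an experiment and then publishing it (the experiment and pipeline names are placeholders; pipeline is assumed to exist):

    from azureml.core import Experiment, Workspace

    ws = Workspace.from_config()

    # A pipeline must complete a run before it can be published.
    pipeline_run = Experiment(ws, 'training-pipeline').submit(pipeline)  # pipeline assumed
    pipeline_run.wait_for_completion()

    published_pipeline = pipeline_run.publish_pipeline(name='training-pipeline',
                                                       description='Model training pipeline',
                                                       version='1.0')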

Question 24

Question
You have published a pipeline that you want to run every week. You plan to use the Schedule.create method to create the schedule. What kind of object must you create first to configure how frequently the pipeline runs?
Answer
  • Datastore
  • PipelineParameter
  • ScheduleRecurrence
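
A minimal sketch of scheduling a weekly run (the names are placeholders; ws and published_pipeline are assumed to exist):

    from azureml.pipeline.core import Schedule, ScheduleRecurrence

    # The recurrence object defines how frequently the pipeline runs.
    recurrence = ScheduleRecurrence(frequency='Week', interval=1)

    schedule = Schedule.create(ws,  # assumed workspace
                               name='weekly-training',
                               pipeline_id=published_pipeline.id,  # assumed
                               experiment_name='scheduled-training',
                               recurrence=recurrence)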

Question 25

Question
You have trained a model using the Python SDK for Azure Machine Learning. You want to deploy the model as a containerized real-time service with high scalability and security. What kind of compute should you create to host the service?
Answer
  • An Azure Kubernetes Services (AKS) inferencing cluster.
  • A compute instance with GPUs.
  • A training cluster with multiple nodes.
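
A minimal sketch of provisioning an AKS inference cluster (the VM size and cluster name are placeholders):

    from azureml.core import Workspace
    from azureml.core.compute import AksCompute, ComputeTarget

    ws = Workspace.from_config()

    # An AKS cluster provides scalable, secure hosting for
    # containerized real-time inferencing services.
    prov_config = AksCompute.provisioning_configuration(vm_size='Standard_D3_v2')
    aks_target = ComputeTarget.create(ws, 'aks-cluster', prov_config)
    aks_target.wait_for_completion(show_output=True)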

Question 26

Question
You are deploying a model as a real-time inferencing service. What functions must the entry script for the service include?
Answer
  • main() and predict(raw_data)
  • load() and score(raw_data)
  • init() and run(raw_data)
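
A minimal sketch of an entry script with the two required functions (the model name and JSON payload shape are placeholders):

    import json
    import joblib
    import numpy as np
    from azureml.core.model import Model

    def init():
        # Called once when the service starts: load the registered model.
        global model
        model_path = Model.get_model_path('my_model')
        model = joblib.load(model_path)

    def run(raw_data):
        # Called for each request: deserialize, predict, return results.
        data = np.array(json.loads(raw_data)['data'])
        predictions = model.predict(data)
        return json.dumps(predictions.tolist())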

Question 27

Question
You are creating a batch inferencing pipeline that you want to use to predict new values for a large volume of data files. You want the pipeline to run the scoring script on multiple nodes and collate the results. What kind of step should you include in the pipeline?
Answer
  • PythonScriptStep
  • ParallelRunStep
  • AdlaStep
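
A minimal sketch of a ParallelRunStep (env, cluster, batch_dataset, and output_dir are assumed to exist; other values are placeholders):

    from azureml.pipeline.steps import ParallelRunConfig, ParallelRunStep

    # The scoring script runs on multiple nodes; with
    # output_action='append_row', results are collated into one file.
    parallel_run_config = ParallelRunConfig(source_directory='.',
                                            entry_script='batch_score.py',
                                            mini_batch_size='5',
                                            error_threshold=10,
                                            output_action='append_row',
                                            environment=env,          # assumed
                                            compute_target=cluster,   # assumed
                                            node_count=2)

    batch_step = ParallelRunStep(name='batch-score',
                                 parallel_run_config=parallel_run_config,
                                 inputs=[batch_dataset.as_named_input('batch_data')],  # assumed
                                 output=output_dir)  # a PipelineData object, assumed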

Question 28

Question
You have configured the step in your batch inferencing pipeline with an output_action="append_row" property. In which file should you look for the batch inferencing results?
Answer
  • parallel_run_step.txt
  • output.txt
  • stdoutlogs.txt

Question 29

Question
You plan to use hyperparameter tuning to find optimal discrete values for a set of hyperparameters. You want to try every possible combination of a set of specified discrete values. Which kind of sampling should you use?
Answer
  • Grid Sampling
  • Random Sampling
  • Bayesian Sampling
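
A minimal sketch of a grid sampling definition over discrete values (the hyperparameter names and values are placeholders):

    from azureml.train.hyperdrive import GridParameterSampling, choice

    # Grid sampling tries every combination of the listed discrete values.
    param_sampling = GridParameterSampling({
        '--learning_rate': choice(0.01, 0.1, 1.0),
        '--batch_size': choice(16, 32)
    })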

Question 30

Question
You are using hyperparameter tuning to train an optimal model. Your training script calculates the area under the curve (AUC) metric for the trained model like this:

    y_scores = model.predict_proba(X_test)
    auc = roc_auc_score(y_test, y_scores[:, 1])

You define the hyperdrive configuration like this:

    hyperdrive = HyperDriveConfig(estimator=sklearn_estimator,
                                  hyperparameter_sampling=grid_sampling,
                                  policy=None,
                                  primary_metric_name='AUC',
                                  primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,
                                  max_total_runs=6,
                                  max_concurrent_runs=4)

Which code should you add to the training script?
Answer
  • run.log('Accuracy', np.float(auc))
  • print(auc)
  • run.log('AUC', np.float(auc))

Question 31

Question
You are using automated machine learning to train a model that predicts the species of an iris based on its petal and sepal measurements. Which kind of task should you specify for automated machine learning?
Answer
  • Regression
  • Classification
  • Forecasting

Question 32

Question
You have submitted an automated machine learning run using the Python SDK for Azure Machine Learning. When the run completes, which method of the run object should you use to retrieve the best model?
Answer
  • load_model()
  • get_output()
  • get_metrics()
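
A minimal sketch, assuming automl_run is the completed automated machine learning run:

    # get_output() returns the best run and the fitted model it produced.
    best_run, fitted_model = automl_run.get_output()
    print(best_run.get_metrics())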

Question 33

Question
You have trained a model, and you want to quantify the influence of each feature on a specific individual prediction. What kind of feature importance should you examine?
Answer
  • Global feature importance
  • Local feature importance

Question 34

Question
You are using automated machine learning, and you want to determine the influence of features on the predictions made by the best model produced by the automated machine learning experiment. What must you do when configuring the automated machine learning experiment?
Answer
  • Whitelist only tree-based algorithms.
  • Enable featurization.
  • Enable model explainability.

Question 35

Question
You want to create an explainer that applies the most appropriate SHAP model explanation algorithm based on the type of model. What kind of explainer should you create?
Answer
  • Mimic
  • Tabular
  • Permutation Feature Importance

Question 36

Question
You want to include model explanations in the logged details of your training experiment. What must you do in your training script?
Answer
  • Use the Run.log_table method to log feature importance for each feature.
  • Use the ExplanationClient.upload_model_explanation method to upload the explanation created by an Explainer.
  • Save the explanation created by an Explainer in the ./outputs folder.
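
A minimal sketch of uploading an explanation from a training script (model, X_train, X_test, and feature_names are assumed to exist):

    from azureml.core import Run
    from azureml.interpret import ExplanationClient
    from interpret.ext.blackbox import TabularExplainer

    run = Run.get_context()

    # Create a global explanation for the trained model.
    explainer = TabularExplainer(model, X_train, features=feature_names)  # assumed names
    explanation = explainer.explain_global(X_test)  # assumed evaluation data

    # Upload it so it appears in the run details in the workspace.
    client = ExplanationClient.from_run(run)
    client.upload_model_explanation(explanation, comment='Tabular explanation')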

Question 37

Question
You have deployed a model as a real-time inferencing service in an Azure Kubernetes Service (AKS) cluster. What must you do to capture and analyze telemetry for this service?
Answer
  • Enable application insights.
  • Implement inference-time model interpretability.
  • Move the AKS cluster to the same region as the Azure Machine Learning workspace.
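
A minimal sketch of enabling Application Insights on an existing deployed service (the service name is a placeholder):

    from azureml.core import Workspace
    from azureml.core.webservice import AksWebservice

    ws = Workspace.from_config()

    # Retrieve the deployed service and switch on telemetry capture.
    service = AksWebservice(ws, 'my-service')
    service.update(enable_app_insights=True)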

Question 38

Question
You want to include custom information in the telemetry for your inferencing service, and analyze it using Application Insights. What must you do in your service's entry script?
Answer
  • Use the Run.log method to log the custom metrics
  • Save the custom metrics in the ./outputs folder.
  • Use a print statement to write the metrics in the STDOUT log.

Question 39

Question
You have trained a model using a dataset containing data that was collected last year. As this year progresses, you will collect new data. You want to track any changing data trends that might affect the performance of the model. What should you do?
Answer
  • Collect the new data in a new version of the existing training dataset, and profile both datasets.
  • Collect the new data in a separate dataset and create a Data Drift Monitor with the training dataset as a baseline and the new dataset as a target.
  • Replace the training dataset with a new dataset that contains both the original training data and the new data.

Question 40

Question
You are creating a data drift monitor. You want to automatically notify the data science team if a significant change in data distribution is detected. What must you do?
Answer
  • Define an AlertConfiguration and set a drift_threshold value
  • Set the latency of the data drift monitor to allow time for data scientists to review the new data.
  • Register the training dataset with the model, including the email address of the data science team as a tag
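
A minimal sketch of a data drift monitor with alerting (ws, baseline_dataset, and target_dataset are assumed to exist; other values are placeholders):

    from azureml.datadrift import AlertConfiguration, DataDriftDetector

    # AlertConfiguration holds the notification addresses; an email is
    # sent when measured drift exceeds drift_threshold.
    alert_config = AlertConfiguration(email_addresses=['datascience@contoso.com'])

    monitor = DataDriftDetector.create_from_datasets(ws,  # assumed workspace
                                                     'data-drift-monitor',
                                                     baseline_dataset,  # assumed
                                                     target_dataset,    # assumed
                                                     compute_target='cpu-cluster',
                                                     frequency='Week',
                                                     drift_threshold=0.3,
                                                     alert_config=alert_config)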

Similar

Chino Mandarín Básico
Diego Santos
Literatura del siglo XVIII
Nerea Bermudez
Los Derechos Humanos
crisferroeldeluna
Esquema resumen de la Prehistoria
Francisco Ayén
Teoría de la Sintaxis
maya velasquez
Cultura Organizacional
Valeria Fernande
Divisas y Tipos de Cambio
Virginia Vera
EL UNIVERSO Y EL SISTEMA SOLAR
ROSA MARIA ARRIAGA
AUTORES-LIBROS
ROSA MARIA ARRIAGA
8 claves para el éxito personal y profesional
ke logz
Sistemas nervioso y reproductivo
JORGE LEYVA RIVERA