Set 3 - Part 1 - AZ-204

Description

Developing Solutions for Microsoft Azure (AZ-204) quiz on Set 3 - Part 1 - AZ-204, created by Eduardo Zelaya on 08/11/2024.

Resource summary

Question 1

Question
You are developing a solution that uses the Azure Storage Client library for .NET. You have the following code: (Line numbers are included for reference only.) 01 CloudBlockBlob src = null; 02 try 03 { 04 src = container.ListBlobs().OfType<CloudBlockBlob>().FirstOrDefault(); 05 var id = await src.AcquireLeaseAsync(null); 06 var dst = container.GetBlockBlobReference(src.Name); 07 string cpid = await dst.StartCopyAsync(src); 08 await dst.FetchAttributesAsync(); 09 return id; 10 } 11 catch (Exception e) 12 { 13 throw; 14 } 15 finally 16 { 17 if (src != null) 18 await src.FetchAttributesAsync(); 19 if (src.Properties.LeaseState != LeaseState.Available) 20 await src.BreakLeaseAsync(new TimeSpan(0)); 21 } For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Answer Area Statement || Bool The code creates an infinite lease || [blank_start][Option 1][blank_end] The code at line 06 always creates a new blob || [blank_start][Option 2][blank_end] The finally block releases the lease || [blank_start][Option 3][blank_end]
Answers
  • Yes
  • No
  • Yes
  • No
  • Yes
  • No
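
Note: for context, a minimal sketch of the lease operations this question exercises, using the same legacy storage client library (the Microsoft.Azure.Storage.Blob namespace is assumed; older projects use Microsoft.WindowsAzure.Storage.Blob). Passing null to AcquireLeaseAsync requests an infinite lease; releasing a lease requires the lease ID, while breaking it does not.

  using System;
  using System.Threading.Tasks;
  using Microsoft.Azure.Storage;       // assumed package; older code uses Microsoft.WindowsAzure.Storage
  using Microsoft.Azure.Storage.Blob;

  public static class LeaseSketch
  {
      public static async Task AcquireAndEndLeaseAsync(CloudBlockBlob blob)
      {
          // null lease time = infinite lease, as in the question; a finite lease must be 15-60 seconds.
          string leaseId = await blob.AcquireLeaseAsync(null);

          // Releasing frees the lease immediately but requires the lease ID.
          await blob.ReleaseLeaseAsync(AccessCondition.GenerateLeaseCondition(leaseId));

          // Breaking ends a lease without knowing its ID; TimeSpan.Zero breaks it at once,
          // which is what the finally block in the question does.
          // await blob.BreakLeaseAsync(TimeSpan.Zero);
      }
  }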

Question 2

Question
You are building a website that uses Azure Blob storage for data storage. You configure Azure Blob storage lifecycle to move all blobs to the archive tier after 30 days. Customers have requested a service-level agreement (SLA) for viewing data older than 30 days. You need to document the minimum SLA for data recovery. Which SLA should you use?
Answers
  • at least two days
  • between one and 15 hours
  • at least one day
  • between zero and 60 minutes
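
Note: once a blob sits in the archive tier it must be rehydrated to an online tier before it can be read, and standard-priority rehydration can take up to roughly 15 hours, which is what this SLA question is probing. A minimal sketch with the current Azure.Storage.Blobs SDK (the blobClient variable is an assumption):

  using System.Threading.Tasks;
  using Azure.Storage.Blobs;
  using Azure.Storage.Blobs.Models;

  public static class RehydrateSketch
  {
      public static Task RehydrateAsync(BlobClient blobClient)
      {
          // Moving an archived blob to an online tier starts rehydration; the blob stays
          // unreadable until rehydration completes (up to ~15 hours at Standard priority).
          return blobClient.SetAccessTierAsync(AccessTier.Hot, rehydratePriority: RehydratePriority.Standard);
      }
  }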

Question 3

Question
You are developing a ticket reservation system for an airline. The storage solution for the application must meet the following requirements: ✑ Ensure at least 99.99% availability and provide low latency. ✑ Accept reservations even when localized network outages or other unforeseen failures occur. ✑ Process reservations in the exact sequence as reservations are submitted to minimize overbooking or selling the same seat to multiple travelers. ✑ Allow simultaneous and out-of-order reservations with a maximum five-second tolerance window. You provision a resource group named airlineResourceGroup in the Azure South-Central US region. You need to provision a SQL API Cosmos DB account to support the app. How should you complete the Azure CLI commands? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer Area resourceGroupName='airlineResourceGroup' name='docdb-airline-reservations' databaseName='docdb-tickets-database' collectionName='docdb-tickets-collection' consistencyLevel=[blank_start][Option 1][blank_end] az cosmosdb create \ --name $name \ [blank_start][Option 2][blank_end] --resource-group $resourceGroupName \ --max-interval 5 \ [blank_start][Option 3][blank_end] --default-consistency-level=$consistencyLevel
Answers
  • Strong
  • Eventual
  • ConsistentPrefix
  • BoundedStaleness
  • --enable-virtual-network true \
  • --enable-automatic-failover true \
  • --kind 'GlobalDocumentDB' \
  • --kind 'MongoDB' \
  • --locations 'southcentralus'
  • --locations 'eastus'
  • --locations 'southcentralus=0 eastus=0'
  • --locations 'southcentralus=0'
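
Note: the account's default consistency level is fixed by the az cosmosdb create command in the question; application code can only relax it per client or per request, never strengthen it. A hedged .NET sketch under that assumption (endpoint and key are placeholders):

  using Microsoft.Azure.Cosmos;

  public static class ConsistencySketch
  {
      public static CosmosClient CreateClient(string endpoint, string key)
      {
          // BoundedStaleness pairs with --max-interval / --max-staleness-prefix on the account;
          // the client-level setting may only weaken the account default.
          return new CosmosClient(endpoint, key, new CosmosClientOptions
          {
              ConsistencyLevel = ConsistencyLevel.BoundedStaleness
          });
      }
  }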

Question 4

Question
You are preparing to deploy a Python website to an Azure Web App using a container. The solution will use multiple containers in the same container group. The Dockerfile that builds the container is as follows: FROM python:3 ADD website.py CMD ["python", "./website.py"] You build a container by using the following command. The Azure Container Registry instance named images is a private registry. docker build -t images.azurecr.io/website:v1.0.0 The user name and password for the registry are admin. The Web App must always run the same version of the website regardless of future builds. You need to create an Azure Web App to run the website. How should you complete the commands? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer Area az configure --defaults web=website az configure --defaults group=website az appservice plan create --name websitePlan [blank_start][Option 1][blank_end] a) --sku SHARED b) --tags container c) --sku B1 --hyper-v d) --sku B1 --is-linux az webapp create --plan websitePlan [blank_start][Option 2][blank_end] a) --deployment-source-url images.azurecr.io/website:v1.0.0 b) --deployment-source-url images.azurecr.io/website:latest c) --deployment-container-image-name images.azurecr.io/website:v1.0.0 d) --deployment-container-image-name images.azurecr.io/website:latest az webapp config [blank_start][Option 3][blank_end] a) set --python-version 2.7 --generic-configurations user=admin password=admin b) set --python-version 3.6 --generic-configurations user=admin password=admin c) container set --docker-registry-server-url https://images.azurecr.io -u admin -p admin d) container set --docker-registry-server-url https://images.azurecr.io/website -u admin -p admin
Answers
  • a
  • b
  • c
  • d
  • a
  • b
  • c
  • d
  • a
  • b
  • c
  • d

Question 5

Question
You are developing a back-end Azure App Service that scales based on the number of messages contained in a Service Bus queue. A rule already exists to scale up the App Service when the average queue length of unprocessed and valid queue messages is greater than 1000. You need to add a new rule that will continuously scale down the App Service as long as the scale up condition is not met. How should you configure the Scale rule? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer Area: Scale Rule > Metric source [blank_start][Option 1][blank_end] Criteria > Metric name [blank_start][Option 2][blank_end] Criteria > Time grain statistic [blank_start][Option 3][blank_end]
Answers
  • Storage queue
  • Service Bus queue
  • Current resource
  • Storage queue (classic)
  • Message Count
  • Active Message Count
  • Total
  • Maximum
  • Average
  • Count

Question 6

Question
You have an application that uses Azure Blob storage. You need to update the metadata of the blobs. Which three methods should you use to develop the solution? To answer, move the appropriate methods from the list of methods to the answer area and arrange them in the correct order. Select and Place: Answer Area [blank_start][Option 1][blank_end] [blank_start][Option 2][blank_end] [blank_start][Option 3][blank_end]
Answers
  • FetchAttributesAsync
  • Metadata.Add
  • SetMetadataAsync
  • SetPropertiesAsync
  • UploadFileStream
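
Note: as a reference for the pattern this question tests, a minimal read-modify-write sketch using the legacy CloudBlockBlob type that appears elsewhere in this set (the metadata key and value are made-up examples):

  using System.Threading.Tasks;
  using Microsoft.Azure.Storage.Blob;

  public static class MetadataSketch
  {
      public static async Task AddMetadataAsync(CloudBlockBlob blob)
      {
          // Load the current properties and metadata from the service so nothing is overwritten blindly.
          await blob.FetchAttributesAsync();

          // Modify the in-memory metadata dictionary.
          blob.Metadata.Add("department", "sales"); // example key/value

          // Persist the metadata back to the blob.
          await blob.SetMetadataAsync();
      }
  }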

Question 7

Question
You are developing an Azure solution to collect point-of-sale (POS) device data from 2,000 stores located throughout the world. A single device can produce 2 megabytes (MB) of data every 24 hours. Each store location has one to five devices that send data. You must store the device data in Azure Blob storage. Device data must be correlated based on a device identifier. Additional stores are expected to open in the future. You need to implement a solution to receive the device data. Solution: Provision an Azure Event Grid. Configure the machine identifier as the partition key and enable capture. Does the solution meet the goal?
Answers
  • True
  • False

Question 8

Question
You develop Azure solutions. A .NET application needs to receive a message each time an Azure virtual machine finishes processing data. The messages must NOT persist after being processed by the receiving application. You need to implement the .NET object that will receive the messages. Which object should you use?
Answers
  • QueueClient
  • SubscriptionClient
  • TopicClient
  • CloudQueueClient
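
Note: for reference, a hedged sketch of receiving Service Bus queue messages in ReceiveAndDelete mode with the older Microsoft.Azure.ServiceBus package, which is where the QueueClient, SubscriptionClient and TopicClient types in these options live (connection string and queue name are placeholders). In ReceiveAndDelete mode a message is removed from the queue as soon as it is handed to the receiver, so it does not persist after processing.

  using System;
  using System.Text;
  using System.Threading.Tasks;
  using Microsoft.Azure.ServiceBus;

  public static class QueueReceiverSketch
  {
      public static QueueClient Start(string connectionString, string queueName)
      {
          var queueClient = new QueueClient(connectionString, queueName, ReceiveMode.ReceiveAndDelete);

          queueClient.RegisterMessageHandler(
              (message, cancellationToken) =>
              {
                  // The message is already gone from the queue at this point.
                  Console.WriteLine(Encoding.UTF8.GetString(message.Body));
                  return Task.CompletedTask;
              },
              exceptionArgs => Task.CompletedTask); // minimal exception handler

          return queueClient;
      }
  }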

Question 9

Question
You are maintaining an existing application that uses an Azure Blob GPv1 Premium storage account. Data older than three months is rarely used. Data newer than three months must be available immediately. Data older than a year must be saved but does not need to be available immediately. You need to configure the account to support a lifecycle management rule that moves blob data to archive storage for data not modified in the last year. Which three actions should you perform in sequence? To answer, move the appropriate actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: Actions [Action 1] Upgrade the storage account to GPv2 [Action 2] Create a new GPv2 Standard account and set its default access tier to cool [Action 3] Change the storage account access tier from hot to cool [Action 4] Copy the data to be archived to a Standard GPv2 storage account and then delete the data from the original storage account Answer Area: [blank_start][Option 1][blank_end] [blank_start][Option 2][blank_end] [blank_start][Option 3][blank_end]
Answers
  • Action 1
  • Action 2
  • Action 3
  • Action 4

Question 10

Question
You develop Azure solutions. You must connect to a NoSQL globally distributed database by using the .NET API. You need to create an object to configure and execute requests in the database. Which code segment should you use?
Answers
  • new Container(EndpointUri, PrimaryKey);
  • new Database(EndpointUri, PrimaryKey);
  • new CosmosClient(EndpointUri, PrimaryKey);
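
Note: a minimal sketch of how the SDK object model fits together, assuming the endpoint/key values come from configuration and using made-up database and container names. CosmosClient is the only type constructed directly; Database and Container handles are obtained from it.

  using Microsoft.Azure.Cosmos;

  public static class CosmosClientSketch
  {
      public static Container GetContainer(string endpointUri, string primaryKey)
      {
          // CosmosClient configures and executes requests; it is intended to be reused as a singleton.
          CosmosClient client = new CosmosClient(endpointUri, primaryKey);

          Database database = client.GetDatabase("appdb"); // assumed database name
          return database.GetContainer("items");           // assumed container name
      }
  }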

Question 11

Question
You have an existing Azure storage account that stores large volumes of data across multiple containers. You need to copy all data from the existing storage account to a new storage account. The copy process must meet the following requirements: ✑ Automate data movement. ✑ Minimize user input required to perform the operation. ✑ Ensure that the data movement process is recoverable. What should you use?
Answers
  • AzCopy
  • Azure Storage Explorer
  • Azure portal
  • .NET Storage Client Library

Question 12

Question
You are developing a web service that will run on Azure virtual machines that use Azure Storage. You configure all virtual machines to use managed identities. You have the following requirements: ✑ Secret-based authentication mechanisms are not permitted for accessing an Azure Storage account. ✑ Must use only Azure Instance Metadata Service endpoints. You need to write code to retrieve an access token to access Azure Storage. To answer, drag the appropriate code segments to the correct locations. Each code segment may be used once or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Code segment 1 [Option 1] http://localhost:50342/oauth2/token [Option 2] http://169.254.169.254:50432/oauth2/token [Option 3] http://localhost/metadata/identity/oauth2/token [Option 4] http://169.254.169.254/metadata/identity/oauth2/token Code segment 2 [Option 1] XDocument.Parse(payload); [Option 2] new MultipartContent(payload); [Option 3] new NetworkCredential("Azure", payload); [Option 4] JsonConvert.DeserializeObject<Dictionary<string, string>>(payload); Answer Area var url = "[blank_start][Segment 1 Option][blank_end]"; var queryString = "..."; var client = new HttpClient(); var response = await client.GetAsync(url + queryString); var payload = await response.Content.ReadAsStringAsync(); return [blank_start][Segment 2 Option][blank_end]
Answers
  • Segment 1 - Option 1
  • Segment 1 - Option 2
  • Segment 1 - Option 3
  • Segment 1 - Option 4
  • Segment 2 - Option 1
  • Segment 2 - Option 2
  • Segment 2 - Option 3
  • Segment 2 - Option 4
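
Note: for orientation, a hedged end-to-end version of the pattern the code fragment sketches: call the Azure Instance Metadata Service (IMDS) token endpoint with the required Metadata header and deserialize the flat JSON response. The api-version and resource values shown are the documented ones for Azure Storage; the query-string line stands in for the "..." elided in the question and is an assumption of this sketch.

  using System.Collections.Generic;
  using System.Net.Http;
  using System.Threading.Tasks;
  using Newtonsoft.Json;

  public static class ImdsTokenSketch
  {
      private static readonly HttpClient client = new HttpClient();

      public static async Task<string> GetStorageTokenAsync()
      {
          // Azure Instance Metadata Service (IMDS) endpoint; reachable only from inside the VM.
          var url = "http://169.254.169.254/metadata/identity/oauth2/token";
          var queryString = "?api-version=2018-02-01&resource=https://storage.azure.com/";

          using var request = new HttpRequestMessage(HttpMethod.Get, url + queryString);
          request.Headers.Add("Metadata", "true"); // IMDS rejects requests without this header

          var response = await client.SendAsync(request);
          var payload = await response.Content.ReadAsStringAsync();

          // The token response is a flat JSON object; access_token carries the bearer token.
          var values = JsonConvert.DeserializeObject<Dictionary<string, string>>(payload);
          return values["access_token"];
      }
  }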

Question 13

Question
You are developing a new page for a website that uses Azure Cosmos DB for data storage. The feature uses documents that have the following format: { "name": "John", "city": "Seattle" } You must display data for the new page in a specific order. You create the following query for the page: SELECT * FROM People p ORDER BY p.name, p.city DESC You need to configure a Cosmos DB policy to support the query. How should you configure the policy? To answer, drag the appropriate JSON segments to the correct locations. Each JSON segment may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Select and Place: Answer Area: { "automatic": true, "indexingMode": "Consistent", "includedPaths": [ { "path": "/*" } ], "excludedPaths": [], "[blank_start][Option 1][blank_end]": [ [ { "path": "/name", "order": "descending" }, { "path": "/city", "order": "[blank_start][Option 2][blank_end]" } ] ] }
Answers
  • compositeIndexes
  • descending
  • orderBy
  • sortOrder
  • ascending
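
Note: for reference, the same kind of composite index can also be declared through the .NET SDK when a container is created. A hedged sketch follows; the container name, partition key path and sort directions are illustrative only, since the directions in a composite index must match the query's ORDER BY clause (or be its exact reverse) for the query to be served.

  using System.Collections.ObjectModel;
  using System.Threading.Tasks;
  using Microsoft.Azure.Cosmos;

  public static class CompositeIndexSketch
  {
      public static Task CreateContainerAsync(Database database)
      {
          var properties = new ContainerProperties(id: "People", partitionKeyPath: "/city"); // assumed values

          // One composite index entry per ORDER BY combination the application needs.
          properties.IndexingPolicy.CompositeIndexes.Add(new Collection<CompositePath>
          {
              new CompositePath { Path = "/name", Order = CompositePathSortOrder.Ascending },
              new CompositePath { Path = "/city", Order = CompositePathSortOrder.Descending }
          });

          return database.CreateContainerIfNotExistsAsync(properties);
      }
  }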

Question 14

Question
You are building a traffic monitoring system that monitors traffic along six highways. The system produces time series analysis-based reports for each highway. Data from traffic sensors are stored in Azure Event Hub. Traffic data is consumed by four departments. Each department has an Azure Web App that displays the time series-based reports and contains a WebJob that processes the incoming data from Event Hub. All Web Apps run on App Service Plans with three instances. Data throughput must be maximized. Latency must be minimized. You need to implement the Azure Event Hub. Which settings should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer Area: Setting || Value Number of partitions || [blank_start][Option 1][blank_end] Partition Key || [blank_start][Option 2][blank_end]
Answers
  • 3
  • 4
  • 6
  • 12
  • Highway
  • Department
  • Timestamp
  • VM name

Question 15

Question
You are developing a microservices solution. You plan to deploy the solution to a multinode Azure Kubernetes Service (AKS) cluster. You need to deploy a solution that includes the following features: ✑ reverse proxy capabilities ✑ configurable traffic routing ✑ TLS termination with a custom certificate Which components should you use? To answer, drag the appropriate components to the correct requirements. Each component may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Answer Area: Action || Component Deploy solution. || [blank_start][Option 1][blank_end] View Cluster and external IP addressing. || [blank_start][Option 2][blank_end] Implement a single, public IP endpoint that is routed to multiple microservices. || [blank_start][Option 3][blank_end]
Answers
  • Helm
  • KubeCtl
  • Ingress Controller
  • Draft
  • Brigade
  • CoreDNS
  • Virtual Kubelet

Question 16

Question
You are implementing an order processing system. A point of sale application publishes orders to topics in an Azure Service Bus queue. The Label property for the topic includes the following data: Property || Description ShipLocation || the country/region where the order will be shipped CorrelationId || a priority value for the order Quantity || a user-defined field that stores the quantity of items in an order AuditedAt || a user-defined field that records the date an order is audited The system has the following requirements for subscriptions: Subscription type || Comments FutureOrders || This subscription is reserved for future use and must not receive any orders HighPriorityOrders || Handle all high priority orders and international orders InternationalOrders || Handle orders where the country/region is not United States HighQuantityOrders || Handle only orders with quantities greater than 100 units AllOrders || This subscription is used for auditing purposes. This subscription must receive every single order. AllOrders has an Action defined that updates the AuditedAt property to include the date and time it was received by the subscription. You need to implement filtering and maximize throughput while evaluating filters. Which filter types should you implement? To answer, drag the appropriate filter types to the correct subscriptions. Each filter type may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Answer Area: Subscription || Filter type FutureOrders || [blank_start][Option 1][blank_end] HighPriorityOrders || [blank_start][Option 2][blank_end] InternationalOrders || [blank_start][Option 3][blank_end] HighQuantityOrders || [blank_start][Option 4][blank_end] AllOrders || [blank_start][Option 5][blank_end]
Answers
  • SQLFilter
  • CorrelationFilter
  • No Filter
  • SQLFilter
  • CorrelationFilter
  • No Filter
  • SQLFilter
  • CorrelationFilter
  • No Filter
  • SQLFilter
  • CorrelationFilter
  • No Filter
  • SQLFilter
  • CorrelationFilter
  • No Filter
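
Note: for reference, a hedged sketch of how the filter kinds are attached using the current Azure.Messaging.ServiceBus.Administration client (topic, subscription and rule names are assumptions; the older Microsoft.Azure.ServiceBus library exposes the equivalent SqlFilter, CorrelationFilter, TrueFilter and FalseFilter types):

  using System.Threading.Tasks;
  using Azure.Messaging.ServiceBus.Administration;

  public static class SubscriptionFilterSketch
  {
      public static async Task CreateRulesAsync(ServiceBusAdministrationClient admin)
      {
          // SQL filter: evaluates an expression over user-defined and system properties.
          await admin.CreateRuleAsync("orders", "HighQuantityOrders",
              new CreateRuleOptions("QuantityOver100", new SqlRuleFilter("Quantity > 100")));

          // Correlation filter: cheap equality match on properties such as CorrelationId or Label,
          // so it evaluates faster than a SQL filter.
          await admin.CreateRuleAsync("orders", "HighPriorityOrders",
              new CreateRuleOptions("HighPriority", new CorrelationRuleFilter { CorrelationId = "high" }));

          // A false filter matches nothing, e.g. for a subscription reserved for future use;
          // a TrueRuleFilter (the default rule) matches every message.
          await admin.CreateRuleAsync("orders", "FutureOrders",
              new CreateRuleOptions("MatchNothing", new FalseRuleFilter()));
      }
  }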

Question 17

Question
Your company has several websites that use a company logo image. You use Azure Content Delivery Network (CDN) to store the static image. You need to determine the correct process of how the CDN and the Point of Presence (POP) server will distribute the image and list the items in the correct order. In which order do the actions occur? To answer, move all actions from the list of actions to the answer area and arrange them in the correct order. Select and Place: Actions: [Action 1] If no edge servers in the POP have the image in cache, the POP requests the file from the origin server. [Action 2] A user requests the image from the CDN URL. The DNS routes the request to the best performing POP location. [Action 3] Subsequent requests for the file may be directed to the same POP using the CDN logo image URL. The POP edge server returns the file from cache if the TTL has not expired. [Action 4] The origin server returns the logo image to an edge server in the POP. An edge server in the POP caches the logo image and returns the image to the client. Answer Area: [blank_start][Option 1][blank_end] [blank_start][Option 2][blank_end] [blank_start][Option 3][blank_end] [blank_start][Option 4][blank_end]
Answers
  • Action 2
  • Action 1
  • Action 4
  • Action 3

Question 18

Question
You are developing an Azure Cosmos DB solution by using the Azure Cosmos DB SQL API. The data includes millions of documents. Each document may contain hundreds of properties. The properties of the documents do not contain distinct values for partitioning. Azure Cosmos DB must scale individual containers in the database to meet the performance needs of the application by spreading the workload evenly across all partitions over time. You need to select a partition key. Which two partition keys can you use? Each correct answer presents a complete solution. NOTE: Each correct selection is worth one point.
Answers
  • a single property value that does not appear frequently in the documents
  • a value containing the collection name
  • a single property value that appears frequently in the documents
  • a concatenation of multiple property values with a random suffix appended
  • a hash suffix appended to a property value
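
Note: as a hedged illustration of a synthetic partition key (concatenated properties plus a random or hashed suffix), a small sketch; the property names and the suffix range are made up for this example.

  using System;

  public static class SyntheticPartitionKeySketch
  {
      private static readonly Random random = new Random();

      public static string BuildKey(string documentType, string regionCode)
      {
          // Concatenating properties plus a random suffix spreads writes across many
          // logical partitions instead of concentrating them on a few hot values.
          int suffix = random.Next(0, 10); // e.g. 10 buckets per base key
          return $"{documentType}-{regionCode}-{suffix}";
      }
  }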

Question 19

Question
You are developing an Azure-hosted e-commerce web application. The application will use Azure Cosmos DB to store sales orders. You are using the latest SDK to manage the sales orders in the database. You create a new Azure Cosmos DB instance. You include a valid endpoint and valid authorization key to an appSettings.json file in the code project. You are evaluating the following application code: (Line number are included for reference only.) 01 using System; 02 using System.Threading.Tasks; 03 using Microsoft.Azure.Cosmos; 04 using Microsoft.Extensions.Configuration; 05 using Newtonsoft.Json; 06 namespace SalesOrders 07 { 08 public class SalesOrder 09 { 10 . . . 11 } 12 internal class ManageSalesOrders 13 { 14 private static async Task GenerateSalesOrders() 15 { 16 IConfigurationRoot configuration = new ConfigurationBuilder().AddJsonFile("appSettings.json").Build(); 17 string endpoint = configuration["EndPointUrl"]; 18 string authKey = configuration["AuthorizationKey"]; 19 using CosmosClient client = new CosmosClient(endpoint, authKey); 20 Database database = null; 21 using (await client.GetDatabase("SalesOrders").DeleteStreamAsync()) { } 22 database = await client.CreateDatabaseIfNotExistsAsync("SalesOrders"); 23 Container container1 = await database.CreateContainerAsync(id: "Container1", partitionKeyPath: "/AccountNumber"); 24 Container container2 = await database.CreateContainerAsync(id: "Container2", partitionKeyPath: "/AccountNumber"); 25 SalesOrder salesOrder1 = new SalesOrder() { AccountNumber = "123456" }; 26 await container1.CreateItemAsync(salesOrder1, new PartitionKey(salesOrder1.AccountNumber)); 27 SalesOrder salesOrder2 = new SalesOrder() { AccountNumber = "654321" }; 28 await container1.CreateItemAsync(salesOrder2, new PartitionKey(salesOrder2.AccountNumber)); 29 SalesOrder salesOrder3 = new SalesOrder() { AccountNumber = "109876" }; 30 await container2.CreateItemAsync(salesOrder3, new PartitionKey(salesOrder3.AccountNumber)); 31 _ = await database.CreateUserAsync("User1"); 32 User user1 = database.GetUser("User1"); 33 _ = await user1.ReadAsync(); 34 } 35 } 36 } For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Answer Area: Statements || Bool A database named SalesOrders is created. The database will include two containers || [blank_start][Option 1][blank_end] Container1 will contain two items. || [blank_start][Option 2][blank_end] Container2 will contain one item. || [blank_start][Option 3][blank_end]
Answers
  • Yes
  • No
  • Yes
  • No
  • Yes
  • No

Question 20

Question
You develop an Azure solution that uses Cosmos DB. The current Cosmos DB container must be replicated and must use a partition key that is optimized for queries. You need to implement a change feed processor solution. Which change feed processor components should you use? To answer, drag the appropriate components to the correct requirements. Each component may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view the content. NOTE: Each correct selection is worth one point. Select and Place: Answer Area: Requirement || component Store the data from which the change feed is generated. || [blank_start][Option 1][blank_end] Coordinate processing of the change feed across multiple workers. || [blank_start][Option 2][blank_end] Use the change feed processor to listen for changes. || [blank_start][Option 3][blank_end] Handle each batch of changes. || [blank_start][Option 4][blank_end]
Answers
  • Monitored container
  • Lease container
  • Host
  • Delegate
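
Note: a minimal sketch wiring the four components together with the .NET SDK (processor and instance names are placeholders): the monitored container is the change source, the lease container coordinates workers, WithInstanceName identifies the host, and the handler delegate receives each batch of changes.

  using System.Collections.Generic;
  using System.Threading;
  using System.Threading.Tasks;
  using Microsoft.Azure.Cosmos;

  public static class ChangeFeedProcessorSketch
  {
      public static ChangeFeedProcessor Build(Container monitoredContainer, Container leaseContainer)
      {
          return monitoredContainer
              .GetChangeFeedProcessorBuilder<dynamic>("ticketProcessor", HandleChangesAsync) // delegate
              .WithInstanceName("host-1")           // host (one per compute instance)
              .WithLeaseContainer(leaseContainer)   // coordinates the feed across workers
              .Build();
      }

      private static Task HandleChangesAsync(IReadOnlyCollection<dynamic> changes, CancellationToken token)
      {
          // Handle each batch of changes here, e.g. write to a container with a better partition key.
          return Task.CompletedTask;
      }
  }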

Question 21

Question
You are developing a web application that will use Azure Storage. Older data will be less frequently used than more recent data. You need to configure data storage for the application. You have the following requirements: ✑ Retain copies of data for five years. ✑ Minimize costs associated with storing data that is over one year old. ✑ Implement Zone Redundant Storage for application data. What should you do? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer Area: Requirement || Solution Configure an Azure Storage account || [blank_start][Option 1][blank_end] a) Implement Blob Storage b) Implement Azure Cosmos DB c) Implement Storage (general purpose v1) d) Implement StorageV2 (general purpose v2) Configure data retention || [blank_start][Option 2][blank_end] a) Snapshot blobs and move them to the archive tier b) Set a lifecycle management policy to move blobs to the cool tier c) Use AzCopy to copy the data to an on-premises device for backup d) Set a lifecycle management policy to move blobs to the archive tier
Answers
  • a
  • b
  • c
  • d
  • a
  • b
  • c
  • d

Question 22

Question
A company develops a series of mobile games. All games use a single leaderboard service. You have the following requirements: ✑ Code must be scalable and allow for growth. ✑ Each record must consist of a playerId, gameId, score, and time played. ✑ When users reach a new high score, the system will save the new score using the SaveScore function below. Each game is assigned an Id based on the series title. You plan to store customer information in Azure Cosmos DB. The following data already exists in the database: PartitionKey || RowKey || Email Harp || Walter || wharp@contoso.com Smith || Steve || ssmith@contoso.com Smith || Jeff || jsmith@contoso.com You develop the following code to save scores in the data store. (Line numbers are included for reference only.) 01 public void SaveScore(string gameId, string playerId, int score, long timePlayed) 02 { 03 CloudStorageAccount storageAccount = CloudStorageAccount.Parse(connectionString); 04 CloudTableClient tableClient = storageAccount.CreateCloudTableClient(); 05 CloudTable table = tableClient.GetTableReference("scoreTable"); 06 table.CreateIfNotExists(); 07 var scoreRecord = new PlayerScore(gameId, playerId, score, timePlayed); 08 TableOperation insertOperation = TableOperation.Insert(scoreRecord); 09 table.Execute(insertOperation); 10 } You develop the following code to query the database. (Line numbers are included for reference only.) 01 CloudTableClient tableClient = account.CreateCloudTableClient(); 02 CloudTable table = tableClient.GetTableReference("people"); 03 TableQuery<CustomerEntity> query = new TableQuery<CustomerEntity>() 04 .Where(TableQuery.CombineFilters( 05 TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "Smith"), 06 TableOperators.And, TableQuery.GenerateFilterCondition("Email", QueryComparisons.Equal, "ssmith@contoso.com") 07 )); 08 await table.ExecuteQuerySegmentedAsync<CustomerEntity>(query, null); For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Answer Area: Statements || Bool SaveScore will work with Cosmos DB. || [blank_start][Option 1][blank_end] SaveScore will update and replace a record if one already exists with the same playerId and gameId || [blank_start][Option 2][blank_end] Leaderboard data for the game will be automatically partitioned using gameId. || [blank_start][Option 3][blank_end] SaveScore will store the values for the gameId and playerId parameters in the database. || [blank_start][Option 4][blank_end]
Answers
  • Yes
  • No
  • Yes
  • No
  • Yes
  • No
  • Yes
  • No

Question 23

Question
You develop and deploy a web application to Azure App Service. The application accesses data stored in an Azure Storage account. The account contains several containers with several blobs with large amounts of data. You deploy all Azure resources to a single region. You need to move the Azure Storage account to the new region. You must copy all data to the new region. What should you do first?
Answers
  • Export the Azure Storage account Azure Resource Manager template
  • Initiate a storage account failover
  • Configure object replication for all blobs
  • Use the AzCopy command line tool
  • Create a new Azure Storage account in the current region
  • Create a new subscription in the current region

Question 24

Question
You are developing an application to collect the following telemetry data for delivery drivers: first name, last name, package count, item id, and current location coordinates. The app will store the data in Azure Cosmos DB. You need to configure Azure Cosmos DB to query the data. Which values should you use? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer Area: Configuration Parameter || Value Azure Cosmos DB API || [blank_start][Option 1][blank_end] Azure Cosmos DB partition key || [blank_start][Option 2][blank_end]
Answers
  • Gremlin
  • Table API
  • Core (SQL)
  • first name
  • last name
  • package count
  • item id

Question 25

Question
You are implementing an Azure solution that uses Azure Cosmos DB and the latest Azure Cosmos DB SDK. You add a change feed processor to a new container instance. You attempt to read a batch of 100 documents. The process fails when reading one of the documents. The solution must monitor the progress of the change feed processor instance on the new container as the change feed is read. You must prevent the change feed processor from retrying the entire batch when one document cannot be read. You need to implement the change feed processor to read the documents. Which features should you use? To answer, drag the appropriate features to the correct requirements. Each feature may be used once, more than once, or not at all. You may need to drag the split bar between panes or scroll to view content. NOTE: Each correct selection is worth one point. Answer Area: Requirement || Feature Monitor the progress of the change feed processor || [blank_start][Option 1][blank_end] Prevent the change feed processor from retrying the entire batch when one document cannot be read || [blank_start][Option 2][blank_end]
Answers
  • Change feed estimator
  • Dead-letter queue
  • Deployment unit
  • Lease container
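
Note: a hedged sketch of the change feed estimator in push mode (processor and variable names are placeholders); the estimator reports how many changes the processor still has to read, which is how progress is typically monitored.

  using System;
  using System.Threading;
  using System.Threading.Tasks;
  using Microsoft.Azure.Cosmos;

  public static class ChangeFeedEstimatorSketch
  {
      public static ChangeFeedProcessor BuildEstimator(Container monitoredContainer, Container leaseContainer)
      {
          return monitoredContainer
              .GetChangeFeedEstimatorBuilder("ticketProcessor", ReportEstimationAsync, TimeSpan.FromMinutes(1))
              .WithLeaseContainer(leaseContainer)
              .Build();
      }

      private static Task ReportEstimationAsync(long estimatedPendingChanges, CancellationToken token)
      {
          Console.WriteLine($"Changes not yet processed: {estimatedPendingChanges}");
          return Task.CompletedTask;
      }
  }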

Question 26

Question
You are developing an application that uses a premium block blob storage account. The application will process a large volume of transactions daily. You enable Blob storage versioning. You are optimizing costs by automating Azure Blob Storage access tiers. You apply the following policy rules to the storage account. (Line numbers are included for reference only.) 01 { 02 "rules": [ 03 { 04 "name": "versionRule", 05 "enabled": true, 06 "type": "Lifecycle", 07 "definition": { 08 "actions": { 09 "version": { 10 "tierToCool": { 11 "daysAfterCreationGreaterThan": 60 12 } 13 }, 14 "delete": { 15 "daysAfterCreationGreaterThan": 365 16 } 17 } 18 }, 19 "filters": { 20 "blobTypes": ["blockBlob"], 21 "prefixMatch": ["transactions"] 22 } 23 } 24 ] 25 } For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Answer Area: Statements || Bool Block blobs prefixed with transactions will transition blobs that have not been modified in over 60 days to cool storage, and delete blobs not modified in 365 days || [blank_start][Option 1][blank_end] Blobs are moved to cool storage if they have not been accessed for 60 days || [blank_start][Option 2][blank_end] The policy rule tiers previous versions within a container named transactions that are 60 days or older to the cool tier and deletes previous versions that are 365 days or older || [blank_start][Option 3][blank_end] Blobs will automatically be tiered from cool back to hot if accessed again after being tiered to cool || [blank_start][Option 4][blank_end]
Answers
  • Yes
  • No
  • Yes
  • No
  • Yes
  • No
  • Yes
  • No

Question 27

Question
An organization deploys Azure Cosmos DB. You need to ensure that the index is updated as items are created, updated, or deleted. What should you do?
Answers
  • Set the indexing mode to Lazy.
  • Set the value of the automatic property of the indexing policy to False.
  • Set the value of the EnableScanInQuery option to True.
  • Set the indexing mode to Consistent.

Question 28

Question
You are developing a .Net web application that stores data in Azure Cosmos DB. The application must use the Core API and allow millions of reads and writes. The Azure Cosmos DB account has been created with multiple write regions enabled. The application has been deployed to the East US2 and Central US regions. You need to update the application to support multi-region writes. What are two possible ways to achieve this goal? Each correct answer presents part of the solution. NOTE: Each correct selection is worth one point.
Answers
  • Update the ConnectionPolicy class for the Cosmos client and populate the PreferredLocations property based on the geo-proximity of the application.
  • Update Azure Cosmos DB to use the Strong consistency level. Add indexed properties to the container to indicate region.
  • Update the ConnectionPolicy class for the Cosmos client and set the UseMultipleWriteLocations property to true.
  • Create and deploy a custom conflict resolution policy.
  • Update Azure Cosmos DB to use the Session consistency level. Send the SessionToken property value from the FeedResponse object of the write action to the end-user by using a cookie.
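
Note: the options above reference the v2 SDK's ConnectionPolicy. As a hedged aside, the v3 SDK used elsewhere in this set expresses a similar intent through CosmosClientOptions; the region below is simply one of the deployment regions mentioned in the question.

  using Microsoft.Azure.Cosmos;

  public static class MultiRegionWriteSketch
  {
      public static CosmosClient CreateClient(string endpoint, string key)
      {
          // With a multi-region-write account, setting the application region lets the SDK
          // route reads and writes to the nearest write region.
          return new CosmosClient(endpoint, key, new CosmosClientOptions
          {
              ApplicationRegion = Regions.EastUS2
          });
      }
  }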

Question 29

Question
You are developing a solution to store documents in Azure Blob storage. Customers upload documents to multiple containers. Documents consist of PDF, CSV, Microsoft Office format and plain text files. The solution must process millions of documents across hundreds of containers. The solution must meet the following requirements: ✑ Documents must be categorized by a customer identifier as they are uploaded to the storage account. ✑ Allow filtering by the customer identifier. ✑ Allow searching of information contained within a document ✑ Minimize costs. You create and configure a standard general-purpose v2 storage account to support the solution. You need to implement the solution. What should you implement? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer Area: Requirement || Solution Search and filter by customer identifier || [blank_start][Option 1][blank_end] Search information inside documents ||[blank_start][Option 2][blank_end]
Answers
  • Azure Cognitive Search
  • Azure Blob index tags
  • Azure Blob inventory policy
  • Azure Blob metadata
  • Azure Cognitive Search
  • Azure Blob index tags
  • Azure Blob inventory policy
  • Azure Blob metadata
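
Note: for reference, a hedged sketch of writing a blob index tag with Azure.Storage.Blobs (the tag name is made up); index tags are indexed by the storage service itself, which is what allows filtering blobs by a customer identifier across containers without a separate index.

  using System.Collections.Generic;
  using System.Threading.Tasks;
  using Azure.Storage.Blobs;

  public static class BlobIndexTagSketch
  {
      public static Task TagBlobAsync(BlobClient blobClient, string customerId)
      {
          // Index tags can be used in "find blobs by tags" queries across the whole account.
          var tags = new Dictionary<string, string> { { "customerId", customerId } };
          return blobClient.SetTagsAsync(tags);
      }
  }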

Question 30

Question
You are developing a web application by using the Azure SDK. The web application accesses data in a zone-redundant BlockBlobStorage storage account. The application must determine whether the data has changed since the application last read the data. Update operations must use the latest data changes when writing data to the storage account. You need to implement the update operations. Which values should you use? To answer, select the appropriate option in the answer area. NOTE: Each correct selection is worth one point. Answer Area: Code evaluation || Value HTTP Header value || [blank_start][Option 1][blank_end] Conditional header || [blank_start][Option 2][blank_end]
Answers
  • ETag
  • Last Modified
  • VersionId
  • If-Match
  • If-Modified-Since
  • If-None-Match
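
Note: a hedged sketch of the optimistic concurrency pattern this question describes, using Azure.Storage.Blobs (variable names are assumptions): read the blob's ETag, then send it back as an If-Match condition so the update fails if someone else changed the blob in between.

  using System.IO;
  using System.Threading.Tasks;
  using Azure;
  using Azure.Storage.Blobs;
  using Azure.Storage.Blobs.Models;

  public static class EtagUpdateSketch
  {
      public static async Task UpdateAsync(BlobClient blobClient, Stream newContent)
      {
          // Capture the ETag of the version this update is based on.
          BlobProperties properties = await blobClient.GetPropertiesAsync();
          ETag etag = properties.ETag;

          // The upload succeeds only if the blob still has that ETag (If-Match);
          // otherwise the service returns 412 Precondition Failed.
          await blobClient.UploadAsync(newContent, new BlobUploadOptions
          {
              Conditions = new BlobRequestConditions { IfMatch = etag }
          });
      }
  }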

Question 31

Question
An organization deploys a blob storage account. Users take multiple snapshots of the blob storage account over time. You need to delete all snapshots of the blob storage account. You must not delete the blob storage account itself. How should you complete the code segment? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer Area: Delete (Azure.Storage.Blobs.Models.DeleteSnapshotsOption snapshotsOption = Azure.Storage.Blobs.Models.[blank_start][Option 1][blank_end].[blank_start][Option 2][blank_end])
Answers
  • DeleteIfExists
  • DeleteSnapshotsOption
  • WithSnapshot
  • WithSnapshotCore
  • IncludeSnapshots
  • None
  • OnlySnapshots
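
Note: for reference, a hedged usage sketch of the enum this question asks about in the .NET Azure.Storage.Blobs SDK: DeleteSnapshotsOption controls whether a delete call touches the base blob, its snapshots, or both.

  using Azure.Storage.Blobs;
  using Azure.Storage.Blobs.Models;

  public static class SnapshotDeleteSketch
  {
      public static void DeleteOnlySnapshots(BlobClient blobClient)
      {
          // OnlySnapshots removes every snapshot while leaving the base blob in place;
          // IncludeSnapshots would delete the base blob together with its snapshots.
          blobClient.Delete(DeleteSnapshotsOption.OnlySnapshots);
      }
  }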

Question 32

Question
An organization deploys a blob storage account. Users take multiple snapshots of the blob storage account over time. You need to delete all snapshots of the blob storage account. You must not delete the blob storage account itself. How should you complete the code segment? To answer, select the appropriate options in the answer area. NOTE: Each correct selection is worth one point. Answer Area: delete_blob([blank_start][Option 1][blank_end] = [blank_start][Option 2][blank_end])
Answers
  • delete_container
  • delete_snapshots
  • snapshot_blob
  • snapshots_present
  • False
  • Include
  • Only