SQL Server 2014 - Part 2

Description

Intermediate SQL Server 2014 quiz on SQL Server 2014 - Part 2, created by Paul Doman on 04/11/2015.

Resource summary

Frage 1

Frage
When receiving messages through the Service Broker, it's common practice to fetch the next [blank_start]message[blank_end] from a [blank_start]queue[blank_end], then parse the [blank_start]message type[blank_end] to determine what action to take.
Antworten
  • message
  • queue
  • message type

Frage 2

Frage
When receiving messages through the Service Broker, it's more efficient to select [blank_start]multiple[blank_end] messages from the queue into a [blank_start]table[blank_end] variable, containing the message [blank_start]body[blank_end] in [blank_start]binary[blank_end] format - and then to step through the table variable rows, converting each message body into its native format at that stage, ready for actioning.
Antworten
  • multiple
  • table
  • body
  • binary
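
For reference, a minimal T-SQL sketch of this pattern (the queue name TargetQueue and the 100-message batch size are illustrative, and the payload is assumed to be XML):

DECLARE @messages TABLE
(
    conversation_handle UNIQUEIDENTIFIER,
    message_type_name   NVARCHAR(256),
    message_body        VARBINARY(MAX)  -- keep the body as raw binary at this stage
);

-- Pull a batch of messages off the queue in a single operation
WAITFOR
(
    RECEIVE TOP (100)
        conversation_handle,
        message_type_name,
        message_body
    FROM TargetQueue
    INTO @messages
), TIMEOUT 5000;

-- Convert each body to its native format (XML here) only after it has left the queue
SELECT conversation_handle,
       message_type_name,
       CAST(message_body AS XML) AS message_xml
FROM @messages;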

Frage 3

Frage
SQL Server Service Broker can send messages to queues within the same database, different databases on the same instance, different instances or different servers. True or False?
Antworten
  • True
  • False

Frage 4

Frage
When using the Service Broker, you receive messages by using the [blank_start]RECEIVE[blank_end] command. This command is written much like a traditional [blank_start]SELECT[blank_end] statement. You can specify the [blank_start]columns[blank_end] that should be returned, and the [blank_start]queue[blank_end] is specified using a [blank_start]FROM[blank_end] statement.
Antworten
  • RECEIVE
  • SELECT
  • columns
  • queue
  • FROM

Frage 5

Frage
When receiving messages from the Service Broker, it is more efficient to receive multiple [blank_start]messages[blank_end] at once, and to receive the message [blank_start]body[blank_end] from the queue as the raw [blank_start]binary[blank_end], and then convert it to [blank_start]XML[blank_end] (or whatever data type it was sent as) after it has been removed from the queue.
Antworten
  • messages
  • body
  • binary
  • XML

Frage 6

Frage
The steps involved to enable the Service Broker to send messages between instances are... 1. Configure the database [blank_start]master[blank_end] key in the [blank_start]master[blank_end] database. 2. Configure the database [blank_start]master[blank_end] key in the application database. 3. Create a [blank_start]certificate[blank_end] in each database. 4. Exchange the [blank_start]certificates[blank_end] between the databases. 5. Create SQL Service Broker [blank_start]endpoints[blank_end] on each instance. 6. Configure [blank_start]routes[blank_end] to connect to the remote instance SQL Service Broker endpoint.
Antworten
  • master
  • master
  • master
  • certificate
  • certificates
  • endpoints
  • routes

Frage 7

Frage
When encrypting communications between servers using Service Broker, you create the database master key (if it has not already been created) using the following command... CREATE [blank_start]MASTER[blank_end] [blank_start]KEY[blank_end] ENCRYPTION BY [blank_start]PASSWORD[blank_end] = 'YourSecurePassword1!'
Antworten
  • MASTER
  • KEY
  • PASSWORD
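
Assembled, the statement from the question reads (the password is the placeholder used above):

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'YourSecurePassword1!';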

Frage 8

Frage
If a database master key has been created, it is imperative that it's backed up to a file and stored offsite. In order to do this you would use the following command... BACKUP [blank_start]MASTER[blank_end] [blank_start]KEY[blank_end] TO [blank_start]FILE[blank_end] = '<filename>' [blank_start]ENCRYPTION[blank_end] BY [blank_start]PASSWORD[blank_end] = '<secure-password>'
Antworten
  • MASTER
  • KEY
  • FILE
  • ENCRYPTION
  • PASSWORD
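
Assembled with illustrative values (the file path and password are placeholders):

BACKUP MASTER KEY TO FILE = 'C:\Backup\MyDatabase_MasterKey.key'
    ENCRYPTION BY PASSWORD = 'AnotherSecurePassword1!';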

Frage 9

Frage
When using certificate authentication between endpoints, you must create [blank_start]certificates[blank_end] in the [blank_start]master[blank_end] databases of the instances that exchange messages. You can create [blank_start]certificates[blank_end] using the [blank_start]CREATE CERTIFICATE[blank_end] statement within the [blank_start]master[blank_end] database. When creating the certificates on each instance, they must have a [blank_start]unique name[blank_end]. The recommended way to do this is to include the server and instance name.
Antworten
  • certificates
  • master
  • certificates
  • CREATE CERTIFICATE
  • master
  • unique name
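
A minimal sketch of such a certificate, created in the master database (the certificate name, subject, and expiry date are illustrative and follow the server/instance naming recommendation):

USE master;
CREATE CERTIFICATE [SERVERA_INSTANCE1_SBCert]
    WITH SUBJECT = 'Service Broker certificate for SERVERA\INSTANCE1',
         EXPIRY_DATE = '2030-12-31';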

Frage 10

Frage
When creating certificates for securing transmissions between server instances in the Service Broker, you must exchange certificates... you do this using which of the following combinations of commands?
Antworten
  • BACKUP CERTIFICATE / RESTORE CERTIFICATE
  • BACKUP CERTIFICATE / CREATE CERTIFICATE
  • EXPORT CERTIFICATE / IMPORT CERTIFICATE
  • EXPORT CERTIFICATE / CREATE CERTIFICATE
  • BACKUP CERTIFICATE TO REMOTE SERVER
  • CREATE CERTIFICATE FROM REMOTE SERVER
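
A hedged sketch of the exchange (file paths and certificate names are illustrative; copying the exported .cer file between the servers is assumed):

-- On instance A: export the public key of the local certificate
BACKUP CERTIFICATE [SERVERA_INSTANCE1_SBCert]
    TO FILE = 'C:\Certs\SERVERA_INSTANCE1_SBCert.cer';

-- On instance B: import instance A's public key as a new certificate
CREATE CERTIFICATE [SERVERA_INSTANCE1_SBCert]
    FROM FILE = 'C:\Certs\SERVERA_INSTANCE1_SBCert.cer';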

Frage 11

Frage
Within the Service Broker, for SQL Server instances to send messages to another instance, you must create [blank_start]endpoints[blank_end] on each SQL Server instance.
Antworten
  • endpoints

Frage 12

Frage
When configuring a target SQL server within the Service Broker, as a central server for receiving messages from multiple satellite SQL instances - which of the following is true?
Antworten
  • You must create an Endpoint on the Target Server for every connecting satellite instance
  • You must create an Endpoint on the Target Server to support any connecting Service Brokers
  • You add the target server as a linked server on each satellite server
  • You add each satellite server as a linked server on the target - and GRANT them service access
  • This topology is not supported within the Service Broker

Frage 13

Frage
When creating Endpoints on a SQL Server that can support cross-instance communication, which of the following are valid Authentication Mechanisms?
Antworten
  • WINDOWS NTLM
  • WINDOWS KERBEROS
  • WINDOWS NEGOTIATE
  • CERTIFICATE
  • ANY
  • ANONYMOUS
  • WINDOWS TRUST

Frage 14

Frage
If creating an Endpoint to support cross instance communications, how does the following clause affect the method of authentication? WINDOWS NEGOTIATE CERTIFICATE <MyCertificate>
Antworten
  • Attempt to authenticate by either NTLM or KERBEROS authentication (using the Windows Negotiation Protocol) - failing that, use the certificate MyCertificate to authenticate.
  • Attempt to authenticate using NTLM authentication - failing that, use the certificate MyCertificate to authenticate.
  • Attempt to authenticate using KERBEROS authentication - failing that, use the certificate MyCertificate to authenticate.
  • Attempt to authenticate by either NTLM or KERBEROS authentication (using the Windows Negotiation Protocol) using the certificate MyCertificate to secure the negotiation
  • Attempt to authenticate using NTLM authentication using the certificate MyCertificate to secure the negotiation
  • Attempt to authenticate using KERBEROS authentication using the certificate MyCertificate to secure the negotiation
  • It's not a valid Authentication clause

Frage 15

Frage
When creating a SQL Server Endpoint, which of the following are valid ENCRYPTION options?
Antworten
  • DISABLED
  • SUPPORTED
  • REQUIRED
  • OFF
  • MANDATORY
  • NONE
  • ANY

Frage 16

Frage
When creating an End Point for SQL Server Service Broker, what does the following clause mean? ENCRYPTION = REQUIRED ALGORITHM AES RC4
Antworten
  • Encryption is required to use the End Point, using either the AES or RC4 encryption algorithm
  • Encryption is required to use the End Point, using either the AES or RC4 encryption algorithm - preferring AES through negotiation.
  • Encryption is required to use the End Point, using the AES encryption algorithm for data transport, and the RC4 algorithm for hashing the encryption key.
  • Encryption is supported by the End Point, using either the AES or RC4 encryption algorithm
  • Encryption is supported by the End Point, using either the AES or RC4 encryption algorithm - preferring AES through negotiation.
  • Encryption is supported by the End Point, using the AES encryption algorithm for data transport, and the RC4 algorithm for hashing the encryption key.
  • It's not a valid command

Frage 17

Frage
When configuring an endpoint, the only encryption algorithms considered secure are AES and RC4 - true or false?
Antworten
  • True
  • False

Frage 18

Frage
In order to create an endpoint called 'ServiceBrokerEndpoint' - that listens on port 1234 on all IPv4 and IPv6 addresses hosted by the Server, using certificate 'MyServiceBrokerCertificate', requiring strong encryption - you would use the following command... CREATE ENDPOINT [blank_start]ServiceBrokerEndpoint[blank_end] STATE = STARTED AS TCP ([blank_start]LISTENER_PORT[blank_end] = 1234, LISTENER_[blank_start]IP[blank_end]=[blank_start]ALL[blank_end]) FOR SERVICE_BROKER (AUTHENTICATION = CERTIFICATE [blank_start]MyServiceBrokerCertificate[blank_end], [blank_start]ENCRYPTION[blank_end] = [blank_start]REQUIRED[blank_end] ALGORITHM [blank_start]AES[blank_end]);
Antworten
  • ServiceBrokerEndpoint
  • LISTENER_PORT
  • IP
  • ALL
  • MyServiceBrokerCertificate
  • ENCRYPTION
  • REQUIRED
  • AES
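
Assembled, the statement from the question reads:

CREATE ENDPOINT ServiceBrokerEndpoint
    STATE = STARTED
    AS TCP (LISTENER_PORT = 1234, LISTENER_IP = ALL)
    FOR SERVICE_BROKER (AUTHENTICATION = CERTIFICATE MyServiceBrokerCertificate,
                        ENCRYPTION = REQUIRED ALGORITHM AES);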

Frage 19

Frage
T-SQL is a data access and management language, whereas [blank_start]CLR[blank_end] managed code is better suited for complex application logic.
Antworten
  • CLR

Frage 20

Frage
One of the great benefits of SQLCLR is that any .NET code that SQL Server runs is completely isolated from SQL Server itself. .NET code runs within the SQL Server process space, but SQL Server uses a construct in .NET called the application [blank_start]domain[blank_end] to completely isolate and separate all resources that the .NET code uses from the resources that SQL Server uses.
Antworten
  • domain

Frage 21

Frage
The primary design goal of placing assemblies in application [blank_start]domains[blank_end] is to achieve scalability and security. Also, application domains have existed for quite a while to provide a form of [blank_start]isolation[blank_end] between applications. This is necessary to ensure that code running in one application cannot (and does not) affect other unrelated applications.
Antworten
  • domains
  • isolation

Frage 22

Frage
SQL Server isolates code between databases by using application [blank_start]domains[blank_end]. As such, application [blank_start]domains[blank_end] exist for each [blank_start]database[blank_end] that allows you to create, load, and register an assembly. This ensures that code can be executed independently of other assemblies registered in other [blank_start]databases[blank_end], in isolation.
Antworten
  • domains
  • domains
  • database
  • databases

Frage 23

Frage
You can have multiple .NET assemblies registered within a single database - True or False?
Antworten
  • True
  • False

Frage 24

Frage
Transact-SQL is preferred over .NET managed code for quickly accessing files in the file system, because of the additional overhead of dynamically loading an assembly at runtime. True or False?
Antworten
  • True
  • False

Frage 25

Frage
By default, CLR is disabled for users. This means that you cannot execute any .NET code until you purposefully enable CLR. Not everyone can enable CLR; only members of the [blank_start]sysadmin[blank_end] and [blank_start]serveradmin[blank_end] server roles can do so. In order to do this, they must issue the following commands... EXEC sp_[blank_start]configure[blank_end] '[blank_start]clr enabled[blank_end]', 1 GO [blank_start]RECONFIGURE[blank_end] GO
Antworten
  • sysadmin
  • serveradmin
  • configure
  • clr enabled
  • RECONFIGURE
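
Assembled, the commands from the question read:

EXEC sp_configure 'clr enabled', 1
GO
RECONFIGURE
GO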

Frage 26

Frage
If you wish to register a .NET CLR library within SQL Server, you should...
Antworten
  • run the CREATE ASSEMBLY command in the database that wishes to use the library
  • run the IMPORT ASSEMBLY command in the database that wishes to use the library
  • run the CREATE ASSEMBLY command in the msdb database
  • run the IMPORT ASSEMBLY command in the msdb database
  • run the INSTALL ASSEMBLY command in the database that wishes to use the library
  • run the INSTALL ASSEMBLY command in the msdb database
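
A minimal sketch of registering an assembly (the file path is illustrative; run it in the database that will use the library):

CREATE ASSEMBLY MySQLFunctions
    FROM 'C:\Assemblies\MySQLFunctions.dll'
    WITH PERMISSION_SET = SAFE;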

Frage 27

Frage
In order to reference a CLR written method, 'RunMe', that returns a string message, within the class 'SQLHelper' - built into the pre-registered assembly 'MySQLFunctions', you would use the following statement... CREATE [blank_start]PROCEDURE[blank_end] clrRunMe @retVal nchar(50) [blank_start]OUTPUT[blank_end] AS [blank_start]EXTERNAL[blank_end] NAME [blank_start]MySQLFunctions.SQLHelper.RunMe[blank_end]
Antworten
  • PROCEDURE
  • OUTPUT
  • EXTERNAL
  • MySQLFunctions.SQLHelper.RunMe
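
Assembled, the statement from the question reads:

CREATE PROCEDURE clrRunMe
    @retVal nchar(50) OUTPUT
AS EXTERNAL NAME MySQLFunctions.SQLHelper.RunMe;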

Frage 28

Frage
In order to provide strict security - CLR supports a security model called [blank_start]Code Access Security[blank_end] (CAS) for managed code. In this model, permissions are given to assemblies based on the identity of the code. The set of permissions that can be granted to an assembly by the SQL Server host policy level is determined by the permission set specified during the creation of the assembly in SQL Server. SQLCLR supports three permission sets: [blank_start]SAFE[blank_end]—Only local data access and internal computations are allowed. If no permission set is specified during the assembly creation, this is applied by default. No access to external system resources such as files or the registry exists. [blank_start]EXTERNAL_ACCESS[blank_end]—This is the same as SAFE, but with the added capability to access external resources such as the filesystem, registry, networks, and environment variables. [blank_start]UNSAFE[blank_end]—This means unrestricted access to all resources within SQL Server and outside SQL Server. This is the least secure and should rarely be used.
Antworten
  • Code Access Security
  • SAFE
  • EXTERNAL_ACCESS
  • UNSAFE

Frage 29

Frage
The following counters are extremely helpful in understanding the health and activity of .NET CLR programs running in a SQL-hosted environment: .NET CLR [blank_start]Memory[blank_end]—This provides detailed information about the types of CLR heap memory and garbage collection. These counters can be used to monitor CLR memory usage, and to flag alerts if the memory used gets too large. If the code is copying a lot of data into memory, you may have to check the code and take a different approach to reduce memory consumption, or add more memory. .NET CLR [blank_start]Loading[blank_end]—SQL Server isolates code between databases by using AppDomain. This set of counters enables monitoring of the number of AppDomains and the number of assemblies loaded in the system. You can use this counter to determine loaded CLR assemblies. .NET CLR [blank_start]Exceptions[blank_end]—This counter provides you with a good idea of how many errors the code generates. The values vary from application to application because sometimes developers use exceptions to test application functionality, so you should monitor over time to set the baseline and go from there. As this number increases, performance decreases.
Antworten
  • Memory
  • Loading
  • Exceptions

Frage 30

Frage
There are two key Extended Events for monitoring CLR objects in SQL Server. The [blank_start]assembly_load[blank_end] event occurs when a request to access an assembly occurs, and is used to monitor queries running CLR code. The clr_[blank_start]allocation[blank_end]_failure event occurs when managed code experiences a memory [blank_start]allocation[blank_end] failure.
Antworten
  • assembly_load
  • allocation
  • allocation
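
A hedged sketch of an Extended Events session capturing both events (the session name and file target path are illustrative, and both events are assumed to be exposed under the sqlserver package on your build):

CREATE EVENT SESSION TrackCLR ON SERVER
    ADD EVENT sqlserver.assembly_load,
    ADD EVENT sqlserver.clr_allocation_failure
    ADD TARGET package0.event_file (SET filename = 'C:\XE\TrackCLR.xel');
GO
ALTER EVENT SESSION TrackCLR ON SERVER STATE = START;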

Frage 31

Frage
Dynamic Management Views (DMVs) return server state information that you can use to monitor the health of a server pertaining to SQLCLR: sys.dm_clr_[blank_start]appdomains[blank_end]—Returns a row for each application domain in the server. sys.dm_clr_[blank_start]loaded[blank_end]_[blank_start]assemblies[blank_end]—Returns a row for each managed user assembly loaded into the server address space. sys.dm_clr_[blank_start]properties[blank_end]—Returns a row for each property related to SQL Server CLR integration, including the version and the state of the hosted CLR. sys.dm_clr_[blank_start]tasks[blank_end]—Returns a row for all CLR tasks currently running.
Antworten
  • appdomains
  • loaded
  • assemblies
  • properties
  • tasks
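
For example, a quick health check using these DMVs (requires VIEW SERVER STATE permission):

SELECT * FROM sys.dm_clr_properties;        -- version and state of the hosted CLR
SELECT * FROM sys.dm_clr_appdomains;        -- one row per application domain
SELECT * FROM sys.dm_clr_loaded_assemblies; -- one row per loaded user assembly
SELECT * FROM sys.dm_clr_tasks;             -- currently running CLR tasks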

Frage 32

Frage
To provide more insight into the operation and execution of CLR assemblies, you can use the following DMVs to give you that information for CLR assemblies: sys.dm_exec_[blank_start]cached[blank_end]_[blank_start]plans[blank_end]—You can use this to view a cached query plan for a CLR query. sys.dm_exec_[blank_start]query[blank_end]_[blank_start]stats[blank_end]—This contains a row per query statement within the cached plan.
Antworten
  • cached
  • plans
  • query
  • stats

Frage 33

Frage
Best practice states that a CLR assembly should not be used to [blank_start]query[blank_end] data. This is what [blank_start]T-SQL[blank_end] is meant to do. CLR assemblies should not spend their time accessing data: send the data you want worked on to the assembly, instead of having the assembly pull it from SQL Server.
Antworten
  • query
  • T-SQL

Frage 34

Frage
Securing data consists of two essential parts: [blank_start]authentication[blank_end] (which is proving you are who you say you are) and [blank_start]authorization[blank_end] (which defines the data you have access to, and what you can do to the data).
Antworten
  • authentication
  • authorization

Frage 35

Frage
You have two ways to authenticate to the Microsoft SQL Server instance: via [blank_start]SQL Server[blank_end] authentication and via [blank_start]Windows[blank_end] authentication. When you install SQL Server, you have an option to select whether the SQL Server instance should support [blank_start]Windows[blank_end] authentication only, or whether it should support both [blank_start]Windows[blank_end] and [blank_start]SQL Server[blank_end] authentication.
Antworten
  • SQL Server
  • Windows
  • Windows
  • Windows
  • SQL Server

Frage 36

Frage
With SQL Server authentication, the actual username and password are stored in the [blank_start]master[blank_end] database within the database instance.
Antworten
  • master

Frage 37

Frage
When you use SQL Server authentication, the account and password are passed to the database instance, which then [blank_start]hashes[blank_end] the password and compares the username and password [blank_start]hash[blank_end] against the list of SQL accounts stored within the master database.
Antworten
  • hashes
  • hash

Frage 38

Frage
You can configure SQL Server logins to follow the Windows password [blank_start]security policies[blank_end] to enforce password [blank_start]strength[blank_end] and password [blank_start]expiration[blank_end].
Antworten
  • security policies
  • strength
  • expiration

Frage 39

Frage
A best practice is to authenticate to SQL Server with a Windows [blank_start]group[blank_end]. That way, the SQL Server administrator only has to configure a small number of [blank_start]groups[blank_end] one time, and then the Windows security administrators can control access by assigning users to the Windows [blank_start]groups[blank_end] configured as [blank_start]logins[blank_end] in SQL Server.
Antworten
  • group
  • groups
  • groups
  • logins
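
A minimal sketch of this practice (the domain group and database names are hypothetical):

-- Create a login for a Windows group, then grant it access to a database
CREATE LOGIN [MYDOMAIN\SQLReportingUsers] FROM WINDOWS;
GO
USE MyDatabase;
CREATE USER SQLReportingUsers FOR LOGIN [MYDOMAIN\SQLReportingUsers];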

Frage 40

Frage
Because of the differences in SQL Server and Windows authentication, [blank_start]SQL Server[blank_end] authentication is considered to be much less secure, and should be disabled whenever possible.
Antworten
  • SQL Server

Frage 41

Frage
A SQL Server [blank_start]login[blank_end] is an identity that is configured in the SQL Server instance. The [blank_start]login[blank_end] is used to authenticate a client to SQL Server. When an administrator creates a [blank_start]login[blank_end], it is associated with credentials—a SQL Server [blank_start]password[blank_end] or Windows [blank_start]identity[blank_end]
Antworten
  • login
  • login
  • login
  • password
  • identity

Frage 42

Frage
Because the database [blank_start]user[blank_end] is associated with a [blank_start]login[blank_end] stored in the [blank_start]master[blank_end] database of the instance, if a database is moved to a different SQL Server instance, the [blank_start]logins[blank_end] associated with the database [blank_start]users[blank_end] must be created in the new instance to allow users to access the database.
Antworten
  • user
  • login
  • master
  • logins
  • users

Frage 43

Frage
SQL Server 2012 introduced a new concept called the [blank_start]contained[blank_end] user. These users exist only within a single database and are not associated with an instance-level [blank_start]login[blank_end].
Antworten
  • contained
  • login
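
A hedged sketch of creating a contained database user (the database and user names are illustrative; partial containment must be enabled at both the instance and the database level first):

EXEC sp_configure 'contained database authentication', 1;
RECONFIGURE;
GO
ALTER DATABASE MyDatabase SET CONTAINMENT = PARTIAL;
GO
USE MyDatabase;
CREATE USER ContainedAppUser WITH PASSWORD = 'StrongPassword1!';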

Frage 44

Frage
A [blank_start]contained[blank_end] [blank_start]Windows[blank_end] [blank_start]user[blank_end] is simply a Windows account that doesn't have a corresponding login at the server level.
Antworten
  • contained
  • Windows
  • user

Frage 45

Frage
Proper [blank_start]object[blank_end]-[blank_start]level[blank_end] security within the database is key to keeping data within the SQL Server instance safe from intruders
Antworten
  • object
  • level

Frage 46

Frage
The following three statements are used when changing permissions in SQL Server: [blank_start]GRANT[blank_end] is used to assign rights. [blank_start]DENY[blank_end] is used to prevent access. [blank_start]REVOKE[blank_end] is used to remove either of the above
Antworten
  • GRANT
  • DENY
  • REVOKE

Frage 47

Frage
When granting permissions in SQL Server, you must remember that [blank_start]DENY[blank_end] always overrides a [blank_start]GRANT[blank_end].
Antworten
  • DENY
  • GRANT

Frage 48

Frage
If we needed to assign rights for selecting and inserting data into the Users table to a user called MyUser - and to give them the ability to grant the same access to others... you would use the syntax... [blank_start]GRANT[blank_end] [blank_start]SELECT[blank_end], [blank_start]INSERT[blank_end] ON dbo.Users TO [blank_start]MyUser[blank_end] WITH [blank_start]GRANT[blank_end]
Antworten
  • GRANT
  • SELECT
  • INSERT
  • MyUser
  • GRANT
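
Assembled - note that the full T-SQL clause is WITH GRANT OPTION:

GRANT SELECT, INSERT ON dbo.Users TO MyUser WITH GRANT OPTION;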

Frage 49

Frage
The biggest difference between instance-wide privileges and database-wide privileges is that instance-wide privileges are granted directly to the [blank_start]login[blank_end], whereas database-wide privileges are granted to [blank_start]users[blank_end], and these [blank_start]users[blank_end] are mapped to [blank_start]logins[blank_end]
Antworten
  • login
  • users
  • users
  • logins

Frage 50

Frage
In-memory [blank_start]OLTP[blank_end] is an enterprise-level feature that brings to businesses and users of all sizes performance that was previously attainable only by organizations that could afford to spend several hundreds of thousands of dollars on hardware. It is available in SQL Server 2014 Developer, Evaluation, and Enterprise editions only. To perform the exercises that follow, you must be running one of those editions. In-memory [blank_start]OLTP[blank_end] is also referred to as [blank_start]memory[blank_end] [blank_start]optimized[blank_end] tables.
Antworten
  • OLTP
  • OLTP
  • memory
  • optimized

Frage 51

Frage
When using In Memory [blank_start]OLTP[blank_end], no [blank_start]schema[blank_end] changes are permitted once the memory-optimized table has been created. No ALTER [blank_start]TABLE[blank_end] statements are allowed, and all [blank_start]indexes[blank_end] must be created inline, as there is no support for [blank_start]CREATE[blank_end], [blank_start]DROP[blank_end], or [blank_start]ALTER[blank_end] INDEX statements.
Antworten
  • OLTP
  • schema
  • TABLE
  • indexes
  • CREATE
  • DROP
  • ALTER
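
A hedged sketch of a memory-optimized table with its indexes declared inline (the table, column, and index names plus the bucket count are illustrative; the database is assumed to already have a MEMORY_OPTIMIZED_DATA filegroup):

CREATE TABLE dbo.SessionState
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    UserId    INT NOT NULL,
    LastTouch DATETIME2 NOT NULL,
    INDEX ix_SessionState_UserId NONCLUSTERED (UserId)
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);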

Frage 52

Frage
A memory-optimized table can have between [blank_start]1[blank_end] and [blank_start]8[blank_end] indexes, and, except for the [blank_start]primary key[blank_end] index, they cannot be [blank_start]unique[blank_end].
Antworten
  • 1
  • 8
  • primary key
  • unique

Frage 53

Frage
When using In Memory OLTP, consider the scenario: you will be doing a lot of seek operations, and you have specific queries where you are getting only one row, or a very specific set of rows, returned. What type of index should you use?
Antworten
  • Unique Index
  • Range Index
  • Hash Index
  • Primary Key Index
  • Compound Index

Frage 54

Frage
When using In Memory OLTP, if you wanted to search on a range of values within a memory optimised table - e.g. orders that occurred after a given date - then a hash index is not the type of index you should use. You should use a [blank_start]range[blank_end] index.
Antworten
  • range

Frage 55

Frage
The process of monitoring a SQL Server installation, identifying bottlenecks, implementing corrective action, and monitoring the effects is called the [blank_start]Performance[blank_end] [blank_start]Tuning[blank_end] [blank_start]Cycle[blank_end].
Antworten
  • Performance
  • Tuning
  • Cycle

Frage 56

Frage
When making changes to a SQL Server configuration or schema, it's important to understand the following concerns... Will there be a significant increase in the [blank_start]user[blank_end] base? Will there be a significant increase in the [blank_start]processing[blank_end] [blank_start]load[blank_end] on the server? Will there be a significant increase in data [blank_start]volume[blank_end]?
Antworten
  • user
  • processing
  • load
  • volume

Frage 57

Frage
[blank_start]Power[blank_end] [blank_start]configuration[blank_end] describes the way the operating system behaves regarding power consumption. Each Windows operating system comes with three default options... [blank_start]Balanced[blank_end]—This setting matches capacity to demand. [blank_start]High[blank_end] [blank_start]Performance[blank_end]—This setting increases the performance of the CPU / hardware at the expense of energy consumption. [blank_start]Power[blank_end] [blank_start]Saver[blank_end]—This setting limits performance to save energy and reduce operating costs.
Antworten
  • Power
  • configuration
  • Balanced
  • High
  • Performance
  • Power
  • Saver

Frage 58

Frage
If your SQL Server is running on a Windows Server with the Power Saver power configuration option - what impact will that have on your server?
Antworten
  • The frequency of your CPUs is capped, so they will run slower
  • The frequency of your CPUs is capped when a low workload is detected
  • The fans will run slower, increasing CPU temperature and resulting in a small increase in speed
  • The CPUs in the server will become overclocked
  • There will be no effect

Frage 59

Frage
Leaving the Windows Server power configuration option set to [blank_start]Balanced[blank_end] can cause an issue called [blank_start]core[blank_end] [blank_start]parking[blank_end]. This is when certain [blank_start]cores[blank_end] of your CPU are put into a sleep state rather than being used.
Antworten
  • Balanced
  • core
  • parking
  • cores

Frage 60

Frage
In SQL Server, you can skip the [blank_start]zero[blank_end] [blank_start]initialization[blank_end] of data files, database backups, and log backups to drastically increase the performance of these operations.
Antworten
  • zero
  • initialization

Frage 61

Frage
By adding the SQL Server [blank_start]Service[blank_end] [blank_start]Account[blank_end] to the 'Perform volume maintenance tasks' right in the Windows Local [blank_start]Account[blank_end] [blank_start]Policy[blank_end], you prevent the [blank_start]zero[blank_end] initialising of files from occurring when they are created or extended (database files, backup files, log files, etc.) - and drastically increase the performance of those tasks.
Antworten
  • Service
  • Account
  • Account
  • Policy
  • zero

Frage 62

Frage
Planning, sizing, testing, and monitoring can provide you with the information you need to determine and monitor SQL Performance. You can break down this process into three steps: 1. Start by identifying your critical targets for [blank_start]CPU[blank_end] usage, [blank_start]memory[blank_end] allocation, and [blank_start]I/O[blank_end] throughput. 2. Create a [blank_start]baseline[blank_end] to measure against 3. Once deployed, monitor your critical measurements against your [blank_start]targets[blank_end].
Antworten
  • CPU
  • memory
  • I/O
  • baseline
  • targets

Frage 63

Frage
When planning the design and platform for your SQL Database you must consider the user load. Users usually fall into different groups based on either job function or feature usage and you need to know who is going to use the system. Particularly... 1. The [blank_start]number[blank_end] of users and their [blank_start]concurrency[blank_end] 2. Peak [blank_start]usage[blank_end] [blank_start]level[blank_end] rate 3. What [blank_start]tasks[blank_end] they are going to do resulting in procedure calls / data access.
Antworten
  • number
  • concurrency
  • usage
  • level
  • tasks

Frage 64

Frage
In order to monitor Server Performance, a Production DBA can use the following tools... Windows [blank_start]Task[blank_end] [blank_start]Manager[blank_end] —This gives a quick, high-level view of server performance and use of resources. System [blank_start]Performance[blank_end] [blank_start]Monitor[blank_end] —A detailed view of Server performance and per-instance SQL Server–specific counters. [blank_start]Management[blank_end] [blank_start]Data[blank_end] [blank_start]Warehouse[blank_end] (MDW) — A DB that collects Perfmon and Data Collector outputs for troubleshooting [blank_start]Dynamic[blank_end] [blank_start]Management[blank_end] [blank_start]Views[blank_end] (DMVs) — Sys objects that contain server state information for problem diagnosis
Antworten
  • Task
  • Manager
  • Performance
  • Monitor
  • Management
  • Data
  • Warehouse
  • Dynamic
  • Management
  • Views

Frage 65

Frage
The [blank_start]Virtual[blank_end] [blank_start]Memory[blank_end] [blank_start]Manager[blank_end] (VMM) is the part of the operating system that manages all the physical memory and shares it between all the processes that need memory on the system. Its job is to provide each process with memory when it needs it, although the physical memory is actually shared between all the processes running on the system at the same time. It uses a [blank_start]page[blank_end] [blank_start]file[blank_end] (one per hard drive partition) to store memory to disk when it's not directly needed. When a request is made for a piece of data in memory that cannot be read from actual memory, this is called a [blank_start]page[blank_end] [blank_start]fault[blank_end].
Antworten
  • Virtual
  • Memory
  • Manager
  • page
  • file
  • page
  • fault

Frage 66

Frage
Reducing VMM [blank_start]paging[blank_end] always results in better performance. If it occurs, for best performance, the [blank_start]page[blank_end] [blank_start]file[blank_end] should be on fast disks that have minimal disk usage activity, and the disks should be periodically [blank_start]defragmented[blank_end] to ensure that the [blank_start]page[blank_end] [blank_start]file[blank_end] is contiguous on the disks, reducing the disk head movement and increasing performance.
Antworten
  • paging
  • page
  • file
  • defragmented
  • page
  • file

Frage 67

Frage
The metric in the Windows System Monitor to measure paging and page file usage is... Paging file: % Usage - which, as a guide, should be less than [blank_start]70[blank_end] percent.
Antworten
  • 70

Frage 68

Frage
There are two kinds of [blank_start]page[blank_end] [blank_start]faults[blank_end] that can occur when SQL Server requests new memory - they are classified as [blank_start]soft[blank_end] (when a new piece of memory is required that's yet to be created) and [blank_start]hard[blank_end] (when a piece of memory has already been created, and has been offloaded to a [blank_start]page[blank_end] [blank_start]file[blank_end], and needs reloading). The [blank_start]hard[blank_end] type should be minimised as these cause poor performance.
Antworten
  • page
  • faults
  • soft
  • hard
  • page
  • file
  • hard

Frage 69

Frage
When building a SQL Server platform, it's generally considered that more disks are invariably faster than fewer disks. True or False?
Antworten
  • True
  • False

Frage 70

Frage
[blank_start]IOMeter[blank_end] is a commonly used tool for I/O subsystem measurement and characterization to baseline an I/O subsystem. It is both a workload generator and a measurement tool that can emulate a disk or network I/O load.
Antworten
  • IOMeter

Frage 71

Frage
When monitoring SQL Server performance, a commonly used disk performance metric is disk latency, which is measured by Windows System Performance Monitor using the Avg Sec/[blank_start]Read[blank_end], Avg Sec/[blank_start]Write[blank_end], and Avg Sec/[blank_start]Transfer[blank_end] counters. Target disk latencies are as follows: database transaction log - less than [blank_start]5[blank_end] ms, ideally [blank_start]0[blank_end] ms; OLTP data - less than [blank_start]10[blank_end] ms; Decision Support Systems (OLAP and Reporting) data - less than [blank_start]25[blank_end] ms.
Antworten
  • Read
  • Write
  • Transfer
  • 5
  • 0
  • 10
  • 25

Frage 72

Frage
A RAID [blank_start]0[blank_end] set contains two or more disks, and the data is striped across all the disks. This RAID level provides no redundancy or fault tolerance because a disk failure destroys the array. During a write operation, the data is broken up into blocks, and the blocks are written onto the disks simultaneously. This increases bandwidth during read operations because multiple sections of the entire chunk of data are able to be read in parallel. However this does not implement any error checking. This is not recommended for any SQL Server volume.
Antworten
  • 0

Frage 73

Frage
With RAID [blank_start]1[blank_end], one disk is mirrored onto another—meaning two disks are needed to be configured in the RAID set. This is fast because reads can (but not always) occur from both disks, and writes incur minimal performance reduction. It provides redundancy from a disk failure, but increases storage costs because usage capacity is 50 percent of the available disk drives.
Antworten
  • 1

Frage 74

Frage
RAID [blank_start]10[blank_end] is a mirrored set in a striped set with a minimum of [blank_start]four[blank_end] disks. There will always be an even number of disks in the set. This is normally the fastest arrangement available.
Antworten
  • 10
  • four

Frage 75

Frage
RAID [blank_start]5[blank_end] is striping with parity, with a minimum of three disks. During writes it must calculate the data parity—for example, for each write operation in a three-disk array, it writes data across two disks and parity across the third disk. The RAID firmware distributes the parity blocks across all the disks in the RAID set to avoid a write hot spot.
Antworten
  • 5

Frage 76

Frage
A RAID [blank_start]6[blank_end] set contains a minimum of four disks, and distributes two copies of the parity across the disks. This provides enhanced fault tolerance because two drive failures could occur without destroying the data on the array. This RAID implementation makes large RAID groups more practical because larger capacity drives extend the time needed to recover from a drive failure. However, this RAID level should not be used. It is actually Dual Parity and takes the parity region from RAID [blank_start]5[blank_end] and duplicates it so each disk has two parity regions. While this allows it to recover from the possible loss of two drives, it has a performance penalty as well.
Antworten
  • 6
  • 5

Frage 77

Frage
You should use fast, robust storage for SQL data files and SQL log files. In general, for most SQL Server implementations, the recommendation for both is to use [blank_start]striping[blank_end] with [blank_start]mirroring[blank_end] (RAID [blank_start]10[blank_end]). When unable to use this, the second preference would be RAID [blank_start]5[blank_end], especially if the nature of the database implementation is to service an application that has a high number of [blank_start]read[blank_end] operations compared to [blank_start]writes[blank_end], or the database is configured to be [blank_start]read[blank_end]-[blank_start]only[blank_end].
Antworten
  • striping
  • mirroring
  • 10
  • 5
  • read
  • writes
  • read
  • only

Frage 78

Frage
For critical systems, the operating system and SQL binary files should be on a [blank_start]mirrored[blank_end] disk array, but it needs to be only a single [blank_start]mirrored[blank_end] pair.
Antworten
  • mirrored
  • mirrored

Frage 79

Frage
The goal of performance tuning SQL Server 2014 is to minimize the response time for each SQL statement and increase system throughput. This can maximize the scalability of the entire database server by reducing [blank_start]network[blank_end]-[blank_start]traffic[blank_end] latency, as well as optimizing [blank_start]disk[blank_end] [blank_start]I/O[blank_end] throughput and [blank_start]CPU[blank_end] processing time.
Antworten
  • network
  • traffic
  • disk
  • I/O
  • CPU

Frage 80

Frage
Different demands are made by an [blank_start]online[blank_end] [blank_start]transaction[blank_end] [blank_start]processing[blank_end] (OLTP) environment than are made by a [blank_start]decision[blank_end] [blank_start]support[blank_end] (DSS) environment. A DSS environment often needs a heavily optimized [blank_start]I/O[blank_end] subsystem to keep up with the massive amounts of data retrieval (or reads) it performs. An OLTP transactional environment needs an [blank_start]I/O[blank_end] subsystem optimized for more of a balance between [blank_start]read[blank_end]-and-[blank_start]write[blank_end] operations.
Antworten
  • online
  • transaction
  • processing
  • decision
  • support
  • I/O
  • I/O
  • read
  • write

Frage 81

Frage
Each SQL Server database is made up of potentially three file types... a [blank_start]primary[blank_end] data file (.[blank_start]mdf[blank_end]), one or more [blank_start]secondary[blank_end] data files (.[blank_start]ndf[blank_end]), and [blank_start]transaction[blank_end] [blank_start]log[blank_end] files (.[blank_start]ldf[blank_end]).
Antworten
  • primary
  • mdf
  • secondary
  • ndf
  • transaction
  • log
  • ldf

Frage 82

Frage
To maximize SQL Server performance gain, make sure you place the individual [blank_start]data[blank_end] files and the [blank_start]log[blank_end] files all on separate physical [blank_start]Logical[blank_end] [blank_start]Unit[blank_end] [blank_start]Numbers[blank_end] (LUNs). You can place reference-archived data or data that is rarely updated in a [blank_start]read[blank_end]-[blank_start]only[blank_end] filegroup. This filegroup can then be placed on slower disk drives (LUNs) because it is not used very often. This frees up disk space and resources so that the rest of the database may perform better.
Antworten
  • data
  • log
  • Logical
  • Unit
  • Numbers
  • read
  • only

Frage 83

Frage
Because database file location is so important to I/O performance, you should consider functional changes to the [blank_start]tempdb[blank_end] database when you create your primary data-file placement strategy. The reason for this is that this database's performance has a rather large impact on system performance because it is the most dynamic database on the system, and needs to be the quickest.
Antworten
  • tempdb

Frage 84

Frage
To avoid timeouts, you should set the autogrow operation (especially for the [blank_start]tempdb[blank_end] database) to a growth rate that is appropriate for your environment. In general, you should set the growth rate to a number that will be large enough to allow normal query activity to continue without other growth iterations. This should be in a minimum of [blank_start]1[blank_end]GB increments. If you have instant [blank_start]database[blank_end] [blank_start]file[blank_end] initialization turned on, the typical blocking that occurs for data file growths should not occur.
Antworten
  • tempdb
  • 1
  • database
  • file

Frage 85

Frage
If you notice your [blank_start]tempdb[blank_end] files growing in between restarts, make the default [blank_start]file[blank_end] [blank_start]size[blank_end] equivalent to the highest level of growth. Be sure to make every file the same size.
Antworten
  • tempdb
  • file
  • size

Frage 86

Frage
When configuring the [blank_start]tempdb[blank_end] database, pre-allocate space for the database files based on the results of monitoring growth of normal operation, however to prevent SQL Server from stopping, leave [blank_start]autogrow[blank_end] enabled in case [blank_start]tempdb[blank_end] runs out of space
Antworten
  • tempdb
  • autogrow
  • tempdb

Frage 87

Frage
Per SQL Server instance, as a rule of thumb, create one [blank_start]tempdb[blank_end] data file per [blank_start]CPU[blank_end] or [blank_start]processor[blank_end] [blank_start]core[blank_end], all equal in size up to [blank_start]eight[blank_end] data files
Antworten
  • tempdb
  • CPU
  • processor
  • core
  • eight
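
A hedged sketch of applying this (the logical file names, path, sizes, and the four-core assumption are all illustrative):

-- Assuming a four-core server: four equally sized tempdb data files with a fixed 1GB autogrow
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 4GB, FILEGROWTH = 1GB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf', SIZE = 4GB, FILEGROWTH = 1GB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf', SIZE = 4GB, FILEGROWTH = 1GB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf', SIZE = 4GB, FILEGROWTH = 1GB);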

Frage 88

Frage
Best practices dictate that you must ensure that [blank_start]tempdb[blank_end] is in [blank_start]simple[blank_end] [blank_start]recovery[blank_end] mode, which enables space recovery
Antworten
  • tempdb
  • simple
  • recovery

Frage 89

Frage
In SQL Server terms, [blank_start]partitioning[blank_end] is the breaking up of a large object (such as a table) into smaller, manageable pieces. A [blank_start]row[blank_end] is the unit on which [blank_start]partitioning[blank_end] is based.
Antworten
  • partitioning
  • row
  • partitioning

Frage 90

Frage
As systems have become increasingly faster and more powerful, the preferred method of [blank_start]partitioning[blank_end] has become to use the SQL Server capability to [blank_start]partition[blank_end] database tables and their indexes over filegroups within a single database. This generally supersedes the use of [blank_start]Distributed[blank_end] [blank_start]Partitioned[blank_end] [blank_start]Views[blank_end] (DPVs). One important note is that [blank_start]partitioning[blank_end] requires the [blank_start]Enterprise[blank_end] Edition of SQL Server, whereas DPVs do not.
Antworten
  • partitioning
  • partition
  • Distributed
  • Partitioned
  • Views
  • partitioning
  • Enterprise

Frage 91

Frage
If you choose to use partitioning, data size matters. You should not partition a table that is less than [blank_start]25[blank_end]GB. You may not see performance benefits if the table is too small, and may even create additional overhead.
Antworten
  • 25

Frage 92

Frage
When implementing partitioning within SQL Server, you must create a partition [blank_start]function[blank_end] in order to provide the contiguous boundary ranges of each partition. It should be declared as either [blank_start]left[blank_end] or [blank_start]right[blank_end] - which determines whether each boundary value belongs to the partition on its left (as that partition's upper limit) or to the partition on its right (as its lower limit).
Antworten
  • function
  • left
  • right

Frage 93

Frage
If you wished to create a partition function called SplitYears that, based upon a Date & Time, would store data in the following partitions... 2005 - 2006 , 2007 - 2008, 2009 - 2010, 2011, 2012+ - you would use the following syntax... CREATE [blank_start]PARTITION[blank_end] [blank_start]FUNCTION[blank_end] [blank_start]SplitYears[blank_end] ([blank_start]datetime[blank_end]) AS [blank_start]RANGE[blank_end] [blank_start]LEFT[blank_end] FOR VALUES ( '20050101 00:00:00.000', '20070101 00:00:00.000','20090101 00:00:00.000', '20110101 00:00:00.000', '20120101 00:00:00.000')
Antworten
  • PARTITION
  • FUNCTION
  • SplitYears
  • datetime
  • RANGE
  • LEFT
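
Assembled, the statement from the question reads:

CREATE PARTITION FUNCTION SplitYears (datetime)
AS RANGE LEFT FOR VALUES
(
    '20050101 00:00:00.000',
    '20070101 00:00:00.000',
    '20090101 00:00:00.000',
    '20110101 00:00:00.000',
    '20120101 00:00:00.000'
);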

Frage 94

Frage
As a best practice, [blank_start]user[blank_end] objects should be created and mapped to a filegroup outside of the [blank_start]primary[blank_end] filegroup, leaving that filegroup for [blank_start]system[blank_end] objects only. This ensures database availability if an outage occurs that affects the availability of any filegroup outside of the [blank_start]primary[blank_end] filegroup.
Antworten
  • user
  • primary
  • system
  • primary

Frage 95

Frage
A partition [blank_start]scheme[blank_end] is what maps database objects such as a table to a physical entity such as a filegroup, and subsequently to a file. Creating one involves declaring a mapping for the partition [blank_start]function[blank_end] used, to individual filegroups.
Antworten
  • scheme
  • function
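
A hedged sketch of a scheme for the SplitYears function above (the filegroup names are illustrative; five boundary values produce six partitions, so six filegroups are mapped):

CREATE PARTITION SCHEME SplitYearsScheme
AS PARTITION SplitYears
TO (FG_Pre2005, FG_2005_2006, FG_2007_2008, FG_2009_2010, FG_2011, FG_2012Plus);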

Frage 96

Frage
When used in production, [blank_start]compression[blank_end] leads to better [blank_start]I/O[blank_end] utilization because fewer reads and read-ahead reads are required. Data is [blank_start]compressed[blank_end] on [blank_start]disk[blank_end] and in [blank_start]memory[blank_end] - and this allows for a more efficient use of buffer pool resources.
Antworten
  • compression
  • I/O
  • compressed
  • disk
  • memory

Frage 97

Frage
In SQL Server Enterprise Edition - Compression can be used on the following objects...
Antworten
  • User Tables
  • System Tables
  • Clustered Indexes
  • Non Clustered Indexes
  • Indexed Views
  • Partitioned tables and indexes where each partition can have a different compression setting

Frage 98

Frage
In SQL Server 2014 Enterprise Edition, if you wanted to modify a table to use Row Compression, you would execute the following statement... ALTER [blank_start]TABLE[blank_end] MyTable [blank_start]REBUILD[blank_end] WITH ( [blank_start]DATA_COMPRESSION[blank_end] = [blank_start]ROW[blank_end] )
Antworten
  • TABLE
  • REBUILD
  • DATA_COMPRESSION
  • ROW

Frage 99

Frage
In SQL Server 2014 Enterprise Edition, if you wanted to modify a table to use Page Compression, you would execute the following statement... ALTER [blank_start]TABLE[blank_end] MyTable [blank_start]REBUILD[blank_end] WITH ( [blank_start]DATA_COMPRESSION[blank_end] = [blank_start]PAGE[blank_end] )
Antworten
  • TABLE
  • REBUILD
  • DATA_COMPRESSION
  • PAGE
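
For reference, assembled from the two questions above, with an optional step to estimate the saving first (the dbo schema is assumed):

ALTER TABLE MyTable REBUILD WITH (DATA_COMPRESSION = ROW);
ALTER TABLE MyTable REBUILD WITH (DATA_COMPRESSION = PAGE);

-- Optionally, estimate the space saving before rebuilding
EXEC sp_estimate_data_compression_savings
    @schema_name = 'dbo', @object_name = 'MyTable',
    @index_id = NULL, @partition_number = NULL, @data_compression = 'PAGE';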

Frage 100

Frage
SQL Server 2014 has a special algorithm for scheduling user processes using the SQL Operating System. It manages one [blank_start]scheduler[blank_end] per one logical [blank_start]core[blank_end] for user processes. There are two types of these within SQL Server: [blank_start]Hidden[blank_end] [blank_start]schedulers[blank_end] are used by internal SQL Server processes. There is currently a 1:1 ratio to logical core plus four additional schedulers used to monitor internal processes for AlwaysOn, the Dedicated Administrator Connection, and In-Memory OLTP. [blank_start]Visible[blank_end] [blank_start]schedulers[blank_end] are available for user queries and user processes. The SQLOS is the internal operating system built in SQL Server. It provisions CPU utilization, thread scheduling, task execution, and memory distribution.
Antworten
  • scheduler
  • core
  • Hidden
  • schedulers
  • Visible
  • schedulers
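
For reference, the schedulers can be inspected with a DMV query such as this sketch (VISIBLE and HIDDEN show up in the status column):

SELECT scheduler_id, status, is_online, current_tasks_count
FROM sys.dm_os_schedulers;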

Frage 101

Frage
For reasons of data integrity, only one processor can update any piece of data at a time. Other processors that have copies in their caches can have their local copy “invalidated” and thus must be reloaded. This mechanism is referred to as [blank_start]cache[blank_end] [blank_start]coherency[blank_end], which requires that all the caches are in agreement regarding the location of all copies of the data, and which processor currently has permission to perform the update.
Antworten
  • cache
  • coherency

Frage 102

Frage
By default, the [blank_start]Max[blank_end] [blank_start]Degree[blank_end] of [blank_start]Parallelism[blank_end] ([blank_start]MAXDOP[blank_end]) value is set to [blank_start]0[blank_end], which enables SQL Server to consider all processors when creating an execution plan. In most systems, setting this to a value equivalent to the number of [blank_start]cores[blank_end] in one NUMA node is recommended, as this limits the overhead that parallelism introduces.
Antworten
  • Max
  • Degree
  • Parallelism
  • MAXDOP
  • 0
  • cores
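
A hedged sketch of setting this (the value 8 is illustrative and assumes eight cores per NUMA node):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 8;
RECONFIGURE;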

Frage 103

Frage
When using System Performance Monitor, the SQL Server [blank_start]Cache[blank_end] [blank_start]Hit[blank_end] [blank_start]Ratio[blank_end] counter signifies the balance between servicing user requests from data in the data cache (memory) and having to request data from the I/O subsystem (disk). Ideally, this value should be well over [blank_start]90[blank_end] percent.
Antworten
  • Cache
  • Hit
  • Ratio
  • 90

Frage 104

Frage
When monitoring SQL Server using the System Performance Monitor - another reliable indicator of instance memory pressure is the SQL Server : Buffer Manager : [blank_start]Page[blank_end] [blank_start]life[blank_end] [blank_start]expectancy[blank_end] (PLE) counter. This counter indicates the amount of time (in seconds) that a buffer page remains in memory - it should not fall below a calculated threshold. The rule of thumb is to calculate that threshold using the following formula... MaxSQLServerMemory(GB) x [blank_start]75[blank_end].
Antworten
  • Page
  • life
  • expectancy
  • 75

Frage 105

Frage
Be careful not to under-allocate total system memory, because it forces the operating system to start moving pages out to physical disk, which is called [blank_start]paging[blank_end]. When excessive [blank_start]paging[blank_end] takes place, it uses disk [blank_start]I/O[blank_end] and [blank_start]CPU[blank_end] resources, which can introduce [blank_start]latency[blank_end] in the overall server, resulting in slower database performance. You can identify a lack of adequate system memory by monitoring the Memory: [blank_start]Pages[blank_end] / sec performance counter. It should be as close to [blank_start]0[blank_end] as possible, because a higher value indicates that more hard paging is taking place.
Antworten
  • paging
  • paging
  • I/O
  • CPU
  • latency
  • Pages
  • 0
