Quiz on IBM Datastage v11.5, created by Ricardo Camilo on 07/03/2018.
IBM Datastage v11.5

Question 1 of 50

In your ETL application design you have found several areas of common processing requirements in the mapping specifications. These common logic areas include code validation lookups and name formatting. The logic is the same in each case, but the jobs involved use different column layouts. How can you reuse these common logic areas in your ETL application?

Select one of the following possible answers:

  • A. Create parallel routines for each of the common logic areas and for each of the unique column metadata formats.

  • B. Create separate jobs for each layout and choose the appropriate job to run within a job sequencer.

  • C. Create parallel shared containers and define columns combining all data formats.

  • D. Create parallel shared containers with Runtime Column Propagation (RCP) ON and define only necessary common columns needed for the logic.

Explanation

Question 2 of 50

When optimizing a job, Balanced Optimization will NOT search the job for what pattern?

Select one of the following possible answers:

  • A. Links

  • B. Stages

  • C. Sequencers

  • D. Property Settings

Explanation

Question 3 of 50

Your job sequence must be restartable. It runs Job1, Job2, and Job3 serially. It has been compiled with "Add checkpoints so sequence is restartable". Job1 must execute every run even after a failure. Which two properties must be selected to ensure that Job1 is run each time, even after a failure? (Choose two.)

Select one or more of the following possible answers:

  • A. Set the Job1 Activity stage to "Do not checkpoint run".

  • B. Set trigger on the Job1 Activity stage to "Unconditional".

  • C. In the Job1 Activity stage set the Execution action to "Run".

  • D. In the Job1 Activity stage set the Execution action to "Reset if required, then run".

  • E. Use the Nested Condition Activity with a trigger leading to Job1; set the trigger expression type to "Unconditional".

Explanation

Question 4 of 50

You would like to pass values into parameters that will be used in a variety of downstream activity stages within a job sequence. What are two valid ways to do this? (Choose two.)

Select one or more of the following possible answers:

  • A. Use local parameters.

  • B. Place a parameter set stage on the job sequence.

  • C. Add a Transformer stage variable to the job sequence canvas.

  • D. Check the "Propagate Parameters" checkbox in the Sequence Job properties.

  • E. Use the UserVariablesActivity stage to populate the local parameters from an outside source such as a file.

Explanation

Question 5 of 50

On the DataStage development server, you have been making enhancements to a copy of a DataStage job running on the production server. You have been asked to document the changes you have made to the job. What tool in DataStage Designer would you use?

Select one of the following possible answers:

  • A. Compare Against

  • B. diffapicmdline.exe

  • C. DSMakeJobReport

  • D. Cross Project Compare

Explanation

Question 6 of 50

Your customer is using Source Code Control Integration for Information Server and has tagged artifacts for version 1. You must create a deployment package from version 1. Before you create the package, you must ensure the project is up to date with version 1. What two things must you do to update the metadata repository with the artifacts tagged as version 1? (Choose two.)

Select one or more of the following possible answers:

  • A. Right-click the asset and click the Deploy command.

  • B. Right-click the asset and click the Team Import command.

  • C. Right-click the asset and click Update From Source Control Workspace.

  • D. Right-click the asset and click Replace From Source Control Workspace.

  • E. Right-click the asset and click the Team command to update the Source Control Workspace with the asset.

Explanation

Question 7 of 50

What two features distinguish the Operations Console from the Director job log? (Choose two.)

Select one or more of the following possible answers:

  • A. Jobs can be started and stopped in Director, but not in the Operations Console.

  • B. The Operations Console can monitor jobs running on only one DataStage engine.

  • C. Workload management is supported within Director, but not in the Operations Console.

  • D. The Operations Console can monitor jobs running on more than one DataStage engine.

  • E. The Operations Console can run on systems where the DataStage clients are not installed.

Explanation

Question 8 of 50

The Score is divided into which two sections? (Choose two.)

Select one or more of the following possible answers:

  • A. Stages

  • B. File sets

  • C. Schemas

  • D. Data sets

  • E. Operators

Explanation

Question 9 of 50

A job validates account numbers with a reference file using a Join stage, which is hash partitioned by account number. Runtime monitoring reveals that some partitions process many more rows than others. Assuming adequate hardware resources, which action can be used to improve the performance of the job?

Select one of the following possible answers:

  • A. Replace the Join with a Merge stage.

  • B. Change the number of nodes in the configuration file.

  • C. Add a Sort stage in front of the Join stage. Sort by account number.

  • D. Use Round Robin partitioning on the stream and Entire partitioning on the reference.
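As background for the partitioning options above: hash partitioning sends every row with the same key to the same partition, so one "hot" key skews the load, while round robin deals rows out evenly regardless of key. A minimal Python sketch of that difference (account values and node count are invented for illustration; this is not DataStage code):

```python
from collections import Counter

# Simulated input stream: one "hot" account number dominates, as in skewed data.
rows = ["ACME"] * 90 + [f"ACCT{i}" for i in range(10)]
nodes = 4

# Hash partitioning by account number: every "ACME" row lands on one node.
hash_counts = Counter(hash(acct) % nodes for acct in rows)

# Round-robin partitioning: rows are dealt out evenly regardless of key.
rr_counts = Counter(i % nodes for i, _ in enumerate(rows))

print(max(hash_counts.values()))  # one node carries at least the 90 ACME rows
print(max(rr_counts.values()) - min(rr_counts.values()))  # spread is at most 1
```

With Entire partitioning on the small reference side, every node holds the full lookup data, so the evenly dealt stream rows can still be matched anywhere.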

Explanation

Question 10 of 50

Which option is required to identify a particular job's player processes?

Select one of the following possible answers:

  • A. Set $APT_DUMP_SCORE to true.

  • B. Set $APT_PM_SHOW_PIDS to true.

  • C. Log onto the server and issue the command "ps -ef | grep ds".

  • D. Use the DataStage Director Job administration screen to display active player processes.

Explanation

Question 11 of 50

Which two parallel job stages allow you to use partial schemas? (Choose two.)

Select one or more of the following possible answers:

  • A. Peek stage

  • B. File Set stage

  • C. Data Set stage

  • D. Column Export stage

  • E. External Target stage

Explanation

Question 12 of 50

What are the two Transfer Protocol Transfer Mode property options for the FTP Enterprise stage? (Choose two.)

Select one or more of the following possible answers:

  • A. FTP

  • B. EFTP

  • C. TFTP

  • D. SFTP

  • E. RFTP

Explanation

Question 13 of 50

Identify the two statements that are true about the functionality of the XML Pack 3.0. (Choose two.)

Select one or more of the following possible answers:

  • A. XML Stages are Plug-in stages.

  • B. XML Stage can be found in the Database folder on the palette.

  • C. Uses a unique custom GUI interface called the Assembly Editor.

  • D. It includes the XML Input, XML Output, and XML Transformer stages.

  • E. A single XML Stage, which can be used as a source, target, or transformation.

Explanation

Question 14 of 50

When using a Sequential File stage as a source, what are the two reject mode property options? (Choose two.)

Select one or more of the following possible answers:

  • A. Set

  • B. Fail

  • C. Save

  • D. Convert

  • E. Continue

Explanation

Question 15 of 50

Which two statements are true about Data Sets? (Choose two.)

Select one or more of the following possible answers:

  • A. Data Sets contain ASCII data.

  • B. Data Sets preserve partitioning.

  • C. Data Sets require repartitioning.

  • D. Data Sets represent persistent data.

  • E. Data Sets require import/export conversions.

Explanation

Question 16 of 50

What is the correct method to process a file containing multiple record types using a Complex Flat File stage?

Select one of the following possible answers:

  • A. Flatten the record types into a single record type.

  • B. Manually break the file into multiple files by record type.

  • C. Define record definitions on the Constraints tab of the Complex Flat File stage.

  • D. Load a table definition for each record type on the Records tab of the Complex Flat File stage.

Explanation

Question 17 of 50

Which two file stages allow you to configure rejecting data to a reject link? (Choose two.)

Select one or more of the following possible answers:

  • A. Data Set Stage

  • B. Compare Stage

  • C. Big Data File Stage

  • D. Lookup File Set Stage

  • E. Complex Flat File Stage

Explanation

Question 18 of 50

A customer must compare a date column with a job parameter date to determine which output links the row belongs on. What stage should be used for this requirement?

Select one of the following possible answers:

  • A. Filter stage

  • B. Switch stage

  • C. Compare stage

  • D. Transformer stage

Explanation

Question 19 of 50

Rows of data going into a Transformer stage are sorted and hash partitioned by the Input.Product column. Using stage variables, how can you determine when a new row is the first of a new group of Product rows?

Select one of the following possible answers:

  • A. Create a stage variable named sv_IsNewProduct and follow it by a second stage variable named sv_Product. Map the Input.Product column to sv_Product. The derivation for sv_IsNewProduct is: IF Input.Product = sv_Product THEN "YES" ELSE "NO".

  • B. Create a stage variable named sv_IsNewProduct and follow it by a second stage variable named sv_Product. Map the Input.Product column to sv_Product. The derivation for sv_IsNewProduct is: IF Input.Product <> sv_Product THEN "YES" ELSE "NO".

  • C. Create a stage variable named sv_Product and follow it by a second stage variable named sv_IsNewProduct. Map the Input.Product column to sv_Product. The derivation for sv_IsNewProduct is: IF Input.Product = sv_Product THEN "YES" ELSE "NO".

  • D. Create a stage variable named sv_Product and follow it by a second stage variable named sv_IsNewProduct. Map the Input.Product column to sv_Product. The derivation for sv_IsNewProduct is: IF Input.Product <> sv_Product THEN "YES" ELSE "NO".
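The mechanic behind these options: Transformer stage variables are evaluated top to bottom for each row and retain their values between rows, so the declaration order decides whether the comparison sees the previous row's product or the current one. A Python sketch of that behavior when the comparison variable precedes the one holding the product (illustrative only, not DataStage derivation syntax):

```python
# Stage variables evaluate top to bottom each row and keep their values
# between rows. Here sv_IsNewProduct is computed BEFORE sv_Product is
# updated, so the comparison sees the PREVIOUS row's product value.
rows = ["A", "A", "B", "B", "B", "C"]   # sorted Product column, invented data

sv_Product = None                       # holds the previous row's product
flags = []
for product in rows:
    sv_IsNewProduct = "YES" if product != sv_Product else "NO"
    sv_Product = product                # updated only after the comparison
    flags.append(sv_IsNewProduct)

print(flags)  # ['YES', 'NO', 'YES', 'NO', 'NO', 'YES']
```

Reversing the two assignments inside the loop would make the comparison see the current row's own value, which always yields "NO" for the inequality test on the first row of a group.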

Explanation

Question 20 of 50

Which statement describes what happens when Runtime Column Propagation is disabled for a parallel job?

Select one of the following possible answers:

  • A. An input column value flows into a target column only if it matches it by name.

  • B. An input column value flows into a target column only if it is explicitly mapped to it.

  • C. You must set the APT_AUTO_MAP project environment variable to true to allow output link mapping to occur.

  • D. An input column value flows into a target column based on its position in the input row. For example, the first column in the input row goes into the first target column.

Explanation

Question 21 of 50

Which statement is true when using the SaveInputRecord() function in a Transformer stage?

Select one of the following possible answers:

  • A. You can only use the SaveInputRecord() function in Loop variable derivations.

  • B. You can access the saved queue records using Vector referencing in Stage variable derivations.

  • C. You must retrieve all saved queue records using the GetSavedInputRecord() function within Loop variable derivations.

  • D. You must retrieve all saved queue records using the GetSavedInputRecord() function within Stage variable derivations.
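For context, SaveInputRecord() queues a copy of the current input row and GetSavedInputRecord() later dequeues one saved row. The sketch below mimics that queue in plain Python; the function names mirror the DataStage ones for readability, but this is a hypothetical model, not the real API:

```python
from collections import deque

# A FIFO queue standing in for the Transformer's internal record cache.
queue = deque()

def save_input_record(record):
    """Mimics SaveInputRecord(): queue the row, return the queue depth."""
    queue.append(record)
    return len(queue)

def get_saved_input_record():
    """Mimics GetSavedInputRecord(): dequeue the oldest saved row."""
    return queue.popleft()

# "Stage-variable phase": save the rows of one group.
for rec in ({"id": 1}, {"id": 2}):
    save_input_record(rec)

# "Loop-variable phase": every saved record must be retrieved,
# otherwise the queue is left in an inconsistent state for the next row.
out = [get_saved_input_record() for _ in range(len(queue))]
print(out)      # [{'id': 1}, {'id': 2}]
print(queue)    # empty: the queue has been fully drained
```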

Explanation

Question 22 of 50

Which derivations are executed first in the Transformer stage?

Select one of the following possible answers:

  • A. Input column derivations

  • B. Loop variable derivations

  • C. Stage variable derivations

  • D. Output column derivations

Explanation

Question 23 of 50

In a Transformer, which two mappings can be handled by default type conversions? (Choose two.)

Select one or more of the following possible answers:

  • A. Integer input column mapped to raw output column.

  • B. Date input column mapped to a string output column.

  • C. String input column mapped to a date output column.

  • D. String input column mapped to integer output column.

  • E. Integer input column mapped to string output column.

Explanation
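As a rough analogy for default versus explicit conversion (Python semantics, not DataStage's conversion table): rendering an integer as a string is unambiguous, while turning a string into a date needs an explicit format. The values below are invented:

```python
from datetime import datetime

# Integer -> string is unambiguous, so it is a safe implicit-style conversion.
as_string = str(12345)

# String -> date is ambiguous without a format; an ISO-only parser rejects it.
try:
    parsed = datetime.fromisoformat("03/07/2018")  # not ISO 8601: ValueError
except ValueError:
    parsed = None

# Supplying the format explicitly resolves the ambiguity.
explicit = datetime.strptime("03/07/2018", "%d/%m/%Y")
print(as_string, parsed, explicit.date())
```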

Explanation

Question 24 of 50

Identify two different types of custom stages you can create to extend the Parallel job syntax. (Choose two.)

Select one or more of the following possible answers:

  • A. Input stage

  • B. Basic stage

  • C. Group stage

  • D. Custom stage

  • E. Wrapped stage

Explanation

Question 25 of 50

What is the purpose of the APT_DUMP_SCORE environment variable?

Select one of the following possible answers:

  • A. There is no such environment variable.

  • B. It is an environment variable that turns on the job monitor.

  • C. It is an environment variable that enables the collection of runtime performance statistics.

  • D. It is a reporting environment variable that adds additional runtime information in the job log.

Explanation

Question 26 of 50

Which two data repositories can be used for user authentication within the Information Server Suite? (Choose two.)

Select one or more of the following possible answers:

  • A. IIS Web Console

  • B. IBM Metadata repository

  • C. Standalone LDAP registry

  • D. Operations Console database

  • E. IBM Information Server user directory

Explanation

Question 27 of 50

Which two statements are true about the use of named node pools? (Choose two.)

Select one or more of the following possible answers:

  • A. Grid environments must have named node pools for data processing.

  • B. Named node pools can allow separation of buffering from sorting disks.

  • C. When named node pools are used, DataStage uses named pipes between stages.

  • D. Named node pools limit the total number of partitions that can be specified in the configuration file.

  • E. Named node pool constraints will limit stages to be executed only on the nodes defined in the node pools.

Explanation

Question 28 of 50

Which step is required to change from a normal lookup to a sparse lookup in an ODBC Connector stage?

Select one of the following possible answers:

  • A. Change the partitioning to hash.

  • B. Sort the data on the reference link.

  • C. Change the lookup option in the stage properties to "Sparse".

  • D. Replace columns at the beginning of a SELECT statement with a wildcard asterisk (*).

Explanation

Question 29 of 50

Which two pieces of information are required to be specified for the input link on a Netezza Connector stage? (Choose two.)

Select one or more of the following possible answers:

  • A. Partitioning

  • B. Server name

  • C. Table definitions

  • D. Buffering settings

  • E. Error log directory

Explanation

Question 30 of 50

Which requirement must be met to read from a database in parallel using the ODBC connector?

Select one of the following possible answers:

  • A. The ODBC connector always reads in parallel.

  • B. Set the Enable partitioning property to Yes.

  • C. Configure the environment variable $APT_PARTITION_COUNT.

  • D. Configure the environment variable $APT_MAX_TRANSPORT_BLOCK_SIZE.

Explanation

Question 31 of 50

Configuring the weighting column of an Aggregator stage affects which two options? (Choose two.)

Select one or more of the following possible answers:

  • A. Sum

  • B. Maximum Value

  • C. Average of Weights

  • D. Coefficient of Variation

  • E. Uncorrected Sum of Squares

Explanation

Question 32 of 50

The parallel framework was extended for real-time applications. Identify two of these aspects. (Choose two.)

Select one or more of the following possible answers:

  • A. XML stage.

  • B. End-of-wave.

  • C. Real-time stage types that re-run jobs.

  • D. Real-time stage types that keep jobs always up and running.

  • E. Support for transactions within source database connector stages.

Explanation

Question 33 of 50

How must the input data set be organized for input into the Join stage? (Choose two.)

Select one or more of the following possible answers:

  • A. Unsorted

  • B. Key partitioned

  • C. Hash partitioned

  • D. Entire partitioned

  • E. Sorted by Join key

Explanation

Question 34 of 50

The Change Apply stage produces a change Data Set with a new column representing the code for the type of change. What are two change values identified by these code values? (Choose two.)

Select one or more of the following possible answers:

  • A. Edit

  • B. Final

  • C. Copy

  • D. Deleted

  • E. Remove Duplicates

Explanation

Question 35 of 50

What stage allows for more than one reject link?

Select one of the following possible answers:

  • A. Join stage

  • B. Merge stage

  • C. Lookup stage

  • D. Funnel stage

Explanation

Question 36 of 50

Which statement is correct about the Data Rules stage?

Select one of the following possible answers:

  • A. The Data Rules stage works with rule definitions only, not executable rules.

  • B. As a best practice, you should create and publish new rules from the Data Rules stage.

  • C. If you have the Rule Creator role in InfoSphere Information Analyzer, you can create and publish rule definitions and rule set definitions directly from the stage itself.

  • D. When a job that uses the Data Rules stage runs, the output of the stage is passed to the downstream stages and results are stored in the Analysis Results database (IADB).

Explanation

Question 37 of 50

Which job design technique can be used to give unique names to sequential output files that are used in multi-instance jobs?

Select one of the following possible answers:

  • A. Use parameters to identify file names.

  • B. Generate unique file names by using a macro.

  • C. Use DSJobInvocationID to generate a unique filename.

  • D. Use a Transformer stage variable to generate the name.

Explanation

Question 38 of 50

The ODBC stage can handle which two SQL Server data types? (Choose two.)

Select one or more of the following possible answers:

  • A. Date

  • B. Time

  • C. GUID

  • D. Datetime

  • E. SmallDateTime

Explanation

Question 39 of 50

Which DB2 to InfoSphere DataStage data type conversion is correct when reading data with the DB2 Connector stage?

Select one of the following possible answers:

  • A. XML to SQL_WVARCHAR

  • B. BIGINT to SQL_BIGINT (INT32)

  • C. VARCHAR, 32768 to SQL_VARCHAR

  • D. CHAR FOR BIT DATA to SQL_VARBINARY

Explanation

Question 40 of 50

Which Oracle data type conversion is correct?

Select one of the following possible answers:

  • A. Oracle data type RAW converts to RAW in the Oracle Connector stage.

  • B. Oracle data type NUMBER(6,0) converts to INT32 in the Oracle Connector stage.

  • C. Oracle data type NUMBER(15,0) converts to INT32 in the Oracle Connector stage.

  • D. Oracle data type NUMBER converts to DECIMAL(38,0) in the Oracle Connector stage.

Explanation

Question 41 of 50

Which two statements about using a Load write method in an Oracle Connector stage to tables that have indexes on them are true? (Choose two.)

Select one or more of the following possible answers:

  • A. Set the Upsert mode property to "Index".

  • B. Set the Index mode property to "Bypass".

  • C. The Load write method uses the Parallel Direct Path load method.

  • D. The Load write method uses "Rebuild" mode with no logging automatically.

  • E. Set the environment variable APT_ORACLE_LOAD_OPTIONS to "OPTIONS (DIRECT=TRUE, PARALLEL=FALSE)".

Explanation

Question 42 of 50

Which Oracle Connector stage property can be set to tune job performance?

Select one of the following possible answers:

  • A. Array size

  • B. Memory size

  • C. Partition size

  • D. Transaction size

Explanation

Question 43 of 50

Identify two different types of custom stages you can create to extend the Parallel job syntax. (Choose two.)

Select one or more of the following possible answers:

  • A. Input stage

  • B. Basic stage

  • C. Group stage

  • D. Custom stage

  • E. Wrapped stage

Explanation

Question 44 of 50

When using the loop functionality in a Transformer, which statement is true regarding Transformer processing?

Select one of the following possible answers:

  • A. Stage variables can be referenced in loop conditions.

  • B. Stage variables can be executed after loop variable expressions.

  • C. Loop variable expressions are executed before input link column expressions.

  • D. Output links can be excluded from being associated with a True loop condition.

Explanation

Question 45 of 50

Which stage classifies data rows from a single input into groups and computes totals?

Select one of the following possible answers:

  • A. Modify stage

  • B. Compare stage

  • C. Aggregator stage

  • D. Transformer stage
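The classify-and-total behavior the question describes can be sketched in a few lines of Python (column names and values are made up for illustration):

```python
from collections import defaultdict

# Aggregator-style grouping: classify rows by a key column and
# compute a running total per group.
rows = [
    {"region": "EAST", "amount": 100},
    {"region": "WEST", "amount": 40},
    {"region": "EAST", "amount": 60},
]

totals = defaultdict(int)
for row in rows:
    totals[row["region"]] += row["amount"]

print(dict(totals))  # {'EAST': 160, 'WEST': 40}
```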

Explanation

Question 46 of 50

Which statement describes a SCD Type One update in the Slowly Changing Dimension stage?

Select one of the following possible answers:

  • A. Adds a new row to the fact table.

  • B. Adds a new row to a dimension table.

  • C. Overwrites an attribute in the fact table.

  • D. Overwrites an attribute in a dimension table.

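For reference, a Type One change overwrites the attribute in place, so no history is kept and no new row is added. A tiny Python sketch with an invented dimension table:

```python
# SCD Type 1 sketch: the existing dimension row is overwritten in place;
# the old attribute value is lost and the row count does not change.
dimension = {101: {"customer": "ACME", "city": "Boston"}}

incoming = {"customer": "ACME", "city": "Chicago"}
dimension[101].update(incoming)   # Type 1: overwrite, keep no history

print(dimension[101]["city"])     # Chicago; the Boston value is gone
print(len(dimension))             # still one row: nothing was added
```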

Explanation

Question 47 of 50

Which derivations are executed last in the Transformer stage?

Select one of the following possible answers:

  • A. Input column derivations

  • B. Loop variable derivations

  • C. Output column derivations

  • D. Stage variable derivations

Explanation

Question 48 of 50

The derivation for a stage variable is: Upcase(input_column1) : ' ' : Upcase(input_column2). Suppose that input_column1 contains a NULL value. Assume the legacy NULL processing option is turned off. Which behavior is expected?

Select one of the following possible answers:

  • A. The job aborts.

  • B. NULL is written to the target stage variable.

  • C. The input row is either dropped or rejected depending on whether the Transformer has a reject link.

  • D. The target stage variable is populated with spaces or zeros depending on the stage variable data type.
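One way to reason about the derivation is that, with legacy NULL handling off, a NULL operand propagates through the whole expression rather than aborting the job. The sketch below models that propagation semantics in Python, with None standing in for NULL; it illustrates the mechanism only, not which answer the exam expects:

```python
# None stands in for NULL. Each function propagates NULL rather than failing,
# mirroring non-legacy NULL handling in expressions.
def upcase(value):
    return None if value is None else value.upper()

def concat(*parts):
    # If any operand is NULL, the whole concatenation result is NULL.
    return None if any(p is None for p in parts) else "".join(parts)

input_column1 = None        # the NULL input from the question
input_column2 = "smith"

# Upcase(input_column1) : ' ' : Upcase(input_column2)
sv = concat(upcase(input_column1), " ", upcase(input_column2))
print(sv)                   # None: NULL reaches the stage variable
print(concat("A", " ", "B"))  # 'A B' when no operand is NULL
```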

Explanation

Question 49 of 50

Which statement is true about table definitions created in DataStage Designer?

Select one of the following possible answers:

  • A. By default, table definitions created in DataStage Designer are visible to other Information Server products.

  • B. Table definitions created in DataStage Designer are local to DataStage and cannot be shared with other Information Server products.

  • C. When a table definition is created in one DataStage project, it is automatically available in other DataStage projects, but not outside of DataStage.

  • D. Table definitions created in DataStage Designer are not by default available to other Information Server products, but they can be shared with other Information Server products.

Explanation

Question 50 of 50

What are two advantages of using Runtime Column Propagation (RCP)? (Choose two.)

Select one or more of the following possible answers:

  • A. RCP forces a developer to define all columns explicitly.

  • B. Only columns used in the data flow need to be defined.

  • C. Sequential files don't require schema files when using RCP.

  • D. Only columns that are defined as VarChar need RCP enabled.

  • E. Columns not specifically used in the flow are propagated as if they were.

Explanation