Cloudera Hadoop Developer

Description

This is a preparation test for the Cloudera Hadoop Developer certification.
Quiz by kmellzie, updated more than 1 year ago
Created by kmellzie over 9 years ago

Resource summary

Question 1

Question
Combiners increase the efficiency of a MapReduce program because
Answer
  • They provide a mechanism for different mappers to communicate with each other, thereby reducing synchronization overhead.
  • They provide an optimization and reduce the total number of computations that are needed to execute an algorithm by a factor of N, where N is the number of reducers.
  • They aggregate intermediate map output locally on each individual machine and therefore reduce the amount of data that needs to be shuffled across the network to the reducers.
  • They aggregate intermediate map output from a small number of nearby (i.e. rack-local) machines and therefore reduce the amount of data that needs to be shuffled across the network to the reducers.
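
The effect described in the correct answer can be sketched in a few lines. This is a minimal simulation, not Hadoop API code: the mapper output and word values are invented for illustration.

```python
from collections import defaultdict

# Hypothetical intermediate output of one mapper on ONE machine
# for a word-count job: one ("word", 1) pair per occurrence.
map_output = [("the", 1), ("cat", 1), ("the", 1), ("sat", 1), ("the", 1)]

def combine(pairs):
    """Combiner: local aggregation using the same logic as the reducer."""
    totals = defaultdict(int)
    for key, value in pairs:
        totals[key] += value
    return sorted(totals.items())

combined = combine(map_output)
print(len(map_output), "pairs before ->", len(combined), "pairs shuffled")
# Only the combined pairs cross the network to the reducers.
```

Five raw pairs collapse to three locally aggregated pairs, so less data is shuffled across the network.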

Question 2

Question
In a large MapReduce job with M mappers and R reducers, how many distinct copy operations will there be in the sort/shuffle phase?
Answer
  • M
  • R
  • M + R
  • M x R
  • M ** R
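
The counting behind the M x R answer can be made concrete: every reducer fetches its partition from every mapper. A minimal sketch with invented mapper and reducer counts:

```python
# Each reducer pulls its partition from every mapper, so the shuffle
# performs one distinct copy operation per (mapper, reducer) pair.
M, R = 4, 3  # hypothetical numbers of mappers and reducers

copies = [(m, r) for m in range(M) for r in range(R)]
print(len(copies))  # equals M * R
```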

Question 3

Question
What happens in a MapReduce job when you set the number of reducers to one?
Answer
  • A single reducer gathers and processes all the output from all the mappers. The output is written in as many separate files as there are mappers.
  • A single reducer gathers and processes all the output from all the mappers. The output is written to a single file in HDFS.
  • Setting the number of reducers to one creates a processing bottleneck, and since the number of reducers as specified by the programmer is used as a reference value only, the MapReduce runtime provides a default setting for the number of reducers.
  • Setting the number of reducers to one is invalid, and an exception is thrown.

Question 4

Question
In the standard word-count MapReduce algorithm, why might using a combiner reduce the overall job running time?
Answer
  • Because combiners perform local aggregation of word counts, thereby allowing the mappers to process input data faster
  • Because combiners perform local aggregation of word counts, thereby reducing the number of mappers that need to run
  • Because combiners perform local aggregation of word counts, and then transfer data to reducers without writing the intermediate data on disk
  • Because combiners perform local aggregation of word counts, thereby reducing the number of key-value pairs that need to be shuffled across the network to the reducers
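
For word count specifically, the saving is easy to quantify: without a combiner the shuffle moves one pair per word occurrence; with one, it moves one pair per distinct word per mapper. A sketch with a made-up input split:

```python
from collections import Counter

# Hypothetical map input split for a word-count job.
split = "to be or not to be that is the question".split()

without_combiner = len(split)        # one ("word", 1) pair per occurrence
with_combiner = len(Counter(split))  # one ("word", n) pair per distinct word
print(without_combiner, "pairs shuffled without a combiner,",
      with_combiner, "with one")
```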

Question 5

Question
Which two of the following are valid statements? (Select two)
Answer
  • HDFS is optimized for storing a large number of files smaller than the HDFS block size
  • HDFS has the characteristic of supporting a "write once, read many" data access model
  • HDFS is a distributed file system that replaces ext3 or ext4 on Linux nodes in a Hadoop cluster
  • HDFS is a distributed file system that runs on top of native OS filesystems and is well suited to storage of very large data sets

Question 6

Question
You need to create a GUI application to help your company's sales people add and edit customer information. Would HDFS be appropriate for this customer information file?
Answer
  • Yes, because HDFS is optimized for random access writes
  • Yes, because HDFS is optimized for fast retrieval of relatively small amounts of data
  • No, because HDFS can only be accessed by MapReduce applications
  • No, because HDFS is optimized for write-once, streaming access for relatively large files

Question 7

Question
Which of the following describes how a client reads a file from HDFS?
Answer
  • The client queries the NameNode for the block location(s). The NameNode returns the block location(s) to the client. The client reads the data directly off the DataNode(s)
  • The client queries all DataNodes in parallel. The DataNode that contains the requested data responds directly to the client. The client reads the data directly off the DataNode
  • The client contacts the NameNode for the block location(s). The NameNode then queries the DataNodes for block locations. The DataNodes respond to the NameNode, and the NameNode redirects the client to the DataNode that holds the requested data block(s). The client then reads the data directly off the DataNode
  • The client contacts the NameNode for the block location(s). The NameNode contacts the DataNode that holds the requested data block. Data is transferred from the DataNode to the NameNode, and then from the NameNode to the client
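
The read path in the correct answer can be sketched as a toy simulation. The class and method names below are illustrative, not the real HDFS API: the point is that the NameNode serves only block metadata, and the file bytes flow directly from a DataNode to the client.

```python
class NameNode:
    """Holds block metadata only: block id -> DataNodes with a replica."""
    def __init__(self):
        self.block_map = {"blk_0001": ["datanode-3", "datanode-7", "datanode-9"]}

    def get_block_locations(self, path):
        return self.block_map  # metadata, never the file data itself

class DataNode:
    """Stores the actual block bytes and serves them to clients."""
    def __init__(self, name, blocks):
        self.name, self.blocks = name, blocks

    def read_block(self, block_id):
        return self.blocks[block_id]

# Client: one metadata round-trip to the NameNode, then a direct read.
namenode = NameNode()
locations = namenode.get_block_locations("/data/file.txt")
datanode = DataNode("datanode-3", {"blk_0001": b"hello"})
data = datanode.read_block("blk_0001")
```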

Question 8

Question
Which of the following statements best describes how a large (100 GB) file is stored in HDFS?
Answer
  • The file is divided into variable-size blocks, which are stored on multiple DataNodes. Each block is replicated three times by default
  • The file is replicated three times by default. Each copy of the file is stored on a separate DataNode
  • The master copy of the file is stored on a single DataNode. The replica copies are divided into fixed-size blocks, which are stored on multiple DataNodes
  • The file is divided into fixed-sized blocks, which are stored on multiple DataNodes. Each block is replicated three times by default. Multiple blocks from the same file might reside on the same DataNode
  • The file is divided into fixed-size blocks, which are stored on multiple DataNodes. Each block is replicated three times by default. HDFS guarantees that different blocks from the same file are never on the same DataNode
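
The arithmetic behind the correct answer, sketched with an assumed 128 MB block size (the block size is configurable; 128 MB is a common default, not stated in the question):

```python
import math

file_size_mb = 100 * 1024  # 100 GB expressed in MB
block_size_mb = 128        # assumed block size
replication = 3            # HDFS default replication factor

blocks = math.ceil(file_size_mb / block_size_mb)  # fixed-size blocks
stored_replicas = blocks * replication            # block copies across the cluster
print(blocks, "blocks,", stored_replicas, "replicas stored")
```

With 800 blocks spread over a cluster, several blocks of the same file will routinely land on the same DataNode; HDFS makes no guarantee against that.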

Question 9

Question
Your cluster has 10 DataNodes, each with a single 1 TB hard drive. You utilize all your disk capacity for HDFS, reserving none for MapReduce. You implement default replication settings. What is the storage capacity of your Hadoop cluster (assuming no compression)?
Answer
  • About 3 TB
  • About 5 TB
  • About 10 TB
  • About 11 TB
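
The capacity arithmetic behind this question: with the default replication factor of 3, every byte written consumes three bytes of raw disk.

```python
raw_tb = 10 * 1.0   # 10 DataNodes x 1 TB each
replication = 3     # HDFS default replication factor

usable_tb = raw_tb / replication
print(round(usable_tb, 1), "TB usable")  # roughly 3.3 TB -> "About 3 TB"
```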
