Resource summary
Question 1
Question
What does HDFS stand for?
Question 2
Question
What is Data Replication in HDFS?
Answers
- Copy the dataset to a different folder within the same Data Node
- Copy the dataset outside of the folder within the same Data Node
- Copy the dataset to a different Data Node within HDFS
- Password-protect the dataset
Question 3
Question
What is the minimum recommended replication factor for HDFS?
Answers
- Same as the number of Data Nodes in the cluster
- Same as the number of Name Nodes, including Primary and Secondary
- 3
- 5
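The replication factor can be inspected and changed with the Hadoop Java FileSystem API. The sketch below is a minimal illustration only; the NameNode URI and the file path are placeholder values.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Default replication factor for files created by this client;
        // clusters commonly set this to 3 in hdfs-site.xml.
        conf.set("dfs.replication", "3");

        // Placeholder NameNode address; adjust to your cluster.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-host:8020"), conf);

        Path file = new Path("/user/demo/sample.txt");   // hypothetical path
        FileStatus status = fs.getFileStatus(file);
        System.out.println("Current replication: " + status.getReplication());

        // Raise or lower the replication factor of an existing file.
        fs.setReplication(file, (short) 3);

        fs.close();
    }
}
```

Note that dfs.replication only affects files created afterwards, while setReplication changes an already existing file.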
Question 4
Question
Since the data is replicated three times in HDFS, does that mean that any calculation done on one node will also be replicated on the other two?
Question 5
Question
What is a Name Node?
Answers
- Super Master Node in Hadoop; everything else runs within it
- Master Node for the HDFS system
- Master Node for both MapReduce and HDFS
- None of the above
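As a rough illustration of the Name Node's role as the HDFS master, the following sketch points a client at the Name Node via fs.defaultFS and issues a metadata-only request (a directory listing) that the Name Node answers itself. Host and port are placeholders.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class NameNodeDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // fs.defaultFS points at the Name Node, the master of the HDFS file system.
        conf.set("fs.defaultFS", "hdfs://namenode-host:8020");   // placeholder host/port

        FileSystem fs = FileSystem.get(conf);

        // Listing a directory is a metadata operation answered by the Name Node;
        // no Data Node is contacted for this call.
        for (FileStatus s : fs.listStatus(new Path("/"))) {
            System.out.println(s.getPath() + "  " + s.getLen() + " bytes");
        }
        fs.close();
    }
}
```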
Question 6
Question
What is a Data Node?
Answers
- Node that tracks metadata about the data
- Slave/Worker Node that stores the actual data
- Cluster nodes that provide SQL access
- Backup nodes for HDFS data storage
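To see which Data Nodes actually hold a file's blocks, a client can ask for the block locations; the Name Node returns the Data Node hosts for each block. A minimal sketch, with placeholder NameNode URI and file path:

```java
import java.net.URI;
import java.util.Arrays;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DataNodeDemo {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://namenode-host:8020"), new Configuration());

        Path file = new Path("/user/demo/sample.txt");   // hypothetical path
        FileStatus status = fs.getFileStatus(file);

        // Each block of the file is stored on one or more Data Nodes (the replicas).
        BlockLocation[] blocks = fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.println("offset " + block.getOffset()
                    + " stored on Data Nodes: " + Arrays.toString(block.getHosts()));
        }
        fs.close();
    }
}
```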
Question 7
Question
A daemon is a Linux process that runs an ongoing computing task in the background.
Question 8
Question
What is a "Heartbeat"?
Answers
- You don't know what a heartbeat is????
- A process where only two Name Nodes can communicate with each other
- A process that enables multiple MapReduce programs to share the same memory container
- A process that enables two servers to confirm that they are alive and active
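The sketch below is only a toy illustration of the heartbeat pattern, not Hadoop's internal protocol (Data Nodes heartbeat to the Name Node over RPC, by default every 3 seconds): one task periodically reports that it is alive, and a monitor declares it dead after a period of silence.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLong;

public class HeartbeatSketch {
    public static void main(String[] args) throws InterruptedException {
        AtomicLong lastBeat = new AtomicLong(System.currentTimeMillis());
        long timeoutMs = 10_000;   // declare the worker dead after 10 s of silence

        ScheduledExecutorService pool = Executors.newScheduledThreadPool(2);

        // "Worker" side: send a heartbeat every 3 seconds.
        pool.scheduleAtFixedRate(
                () -> lastBeat.set(System.currentTimeMillis()),
                0, 3, TimeUnit.SECONDS);

        // "Monitor" side: check how long ago the last heartbeat arrived.
        pool.scheduleAtFixedRate(() -> {
            long silence = System.currentTimeMillis() - lastBeat.get();
            System.out.println(silence < timeoutMs ? "worker alive" : "worker presumed dead");
        }, 1, 3, TimeUnit.SECONDS);

        Thread.sleep(15_000);
        pool.shutdownNow();
    }
}
```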
Question 9
Question
What is a Data Block in HDFS?
Answers
- An instance of a file stored on one Data Node is called a block
- HDFS breaks a file into exactly 3 parts; each part is called a block
- A block is the minimum amount of data that can be read or written
- A block is the amount of storage pre-allocated to a folder in HDFS
Question 10
Question
The default size of a data block in Hadoop 2.0 is 128 MB. This is a fixed size and cannot be customized.
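For reference, the cluster-wide default block size (dfs.blocksize) can be read through the API, and a per-file block size can be passed when a file is created. A minimal sketch with placeholder host and path:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockSizeDemo {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://namenode-host:8020"), new Configuration());

        // Cluster-wide default (dfs.blocksize); 128 MB in Hadoop 2.x unless overridden.
        System.out.println("Default block size: "
                + fs.getDefaultBlockSize(new Path("/")) + " bytes");

        // A block size can also be chosen per file at creation time.
        Path file = new Path("/user/demo/custom-block.txt");   // hypothetical path
        long customBlockSize = 64L * 1024 * 1024;              // 64 MB
        FSDataOutputStream out = fs.create(file, true, 4096, (short) 3, customBlockSize);
        out.writeUTF("written with a 64 MB block size");
        out.close();

        System.out.println("Block size of new file: "
                + fs.getFileStatus(file).getBlockSize() + " bytes");
        fs.close();
    }
}
```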
Question 11
Question
How does Hadoop know if a Data Node is full?
Answers
- The Data Node will crash and you will know that it was full...
- Since there are multiple Data Nodes, there is no need to know this
- The Name Node contains the metadata and status information of each Data Node and can figure out which Data Node is full
- Hadoop has a database repository that keeps track of storage thresholds. The Name Node queries this repository to find out which Data Node is full and which one is not
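One way a client can see the storage picture maintained by the Name Node is FileSystem.getStatus(), which returns the capacity, used, and remaining space aggregated from the Data Nodes' reports. A minimal sketch with a placeholder NameNode URI:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.FsStatus;

public class CapacityDemo {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://namenode-host:8020"), new Configuration());

        // The Name Node aggregates the capacity/usage figures reported by the
        // Data Nodes (via their heartbeats) and serves them to clients.
        FsStatus status = fs.getStatus();
        System.out.println("Capacity : " + status.getCapacity() + " bytes");
        System.out.println("Used     : " + status.getUsed() + " bytes");
        System.out.println("Remaining: " + status.getRemaining() + " bytes");

        fs.close();
    }
}
```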
Question 12
Question
Who engineered the "DFS", or Distributed File System, that HDFS is based on?
Answers
- Google
- Amazon
- Apple
- Facebook
Question 13
Question
Which component can be used to perform basic HDFS operations through a user interface?
Answers
- Apache Hue
- Cloudera Manager
- Ambari
- Impala
Question 14
Question
Apache Hue is "owned" by Cloudera.
Question 15
Question
What is a "Client" in the context of HDFS?
Answers
- A user (like you and me) who is writing the file
- A program that communicates with HDFS to perform file operations
- All Cloudera applications, like Cloudera Manager...
- All of the above
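A typical HDFS client is simply a program that uses the FileSystem API: it contacts the Name Node for metadata and block placement, then streams data to or from the Data Nodes. A minimal write-then-read sketch, with placeholder host and path:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsClientDemo {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(
                URI.create("hdfs://namenode-host:8020"), new Configuration());

        Path file = new Path("/user/demo/hello.txt");   // hypothetical path

        // Write: the client asks the Name Node where to place the blocks,
        // then streams the bytes to the chosen Data Nodes.
        try (FSDataOutputStream out = fs.create(file, true)) {
            out.writeUTF("hello from an HDFS client");
        }

        // Read: the client fetches the block locations from the Name Node
        // and reads the data directly from the Data Nodes.
        try (FSDataInputStream in = fs.open(file)) {
            System.out.println(in.readUTF());
        }

        fs.close();
    }
}
```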
Question 16
Question
A Secondary Name Node is a substitute for the Primary Name Node.
Question 17
Question
A Secondary Name Node knows everything that a Primary Name Node knows because...
Answers
- They are load balanced
- Their disk is located on a common shared drive that both servers can access at the same time
- A disk sync program makes sure that the Secondary has current information about the Primary
- The Secondary constantly reads the Primary's RAM
Question 18
Question
Apache Hue is the only way to perform file operations on HDFS.