Question 1
Answers
-
the term ___________ refers to a technology environment comprised of Big Data mechanisms and technology artifacts that serves as a platform for developing Big Data solutions
-
The ___________ concentrate on the design of the underlying technology architecture of common Big Data platforms
-
Other memory-resident datasets can also be incorporated for performing analytics that provide context-aware results
-
In the context of software systems, a _________ represents the fundamental design of a software system
Question 2
Question 3
Answers
-
Streaming Source
-
Data Size Reduction
-
Alternatively, the member patterns that comprise a __________ can represent a set of related features provided by a particular environment. In this case, a coexistent application of patterns establishes a "solution environment" that may be realized by a combination of tools and technologies
-
The _________ compound pattern represents a fundamental solution environment comprised of a processing ______ with data ingress, storage, processing and egress capabilities
Question 4
Question 5
Question 6
Answers
-
Automatic Data Sharding
-
Large-Scale Batch Processing
-
To provide efficient storage of such data, the ____________ pattern can be applied to stipulate the use of a storage device in the form of a key-value NoSQL database that services insert, select and delete operations
-
Processing Abstraction
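The key-value storage described in the answers above exposes a minimal insert/select/delete interface. A sketch of that interface in Python — an illustrative stand-in, not the API of any particular NoSQL product:

```python
# Minimal sketch of the insert/select/delete interface a key-value
# NoSQL storage device services (illustrative only; real stores
# distribute keys across a cluster of nodes).
class KeyValueStore:
    def __init__(self):
        self._data = {}              # key -> raw value (any type)

    def insert(self, key, value):
        self._data[key] = value

    def select(self, key):
        return self._data.get(key)   # None if the key is absent

    def delete(self, key):
        self._data.pop(key, None)

store = KeyValueStore()
store.insert("sensor:42", b"\x00\x01\x02")   # value can be any type
print(store.select("sensor:42"))
store.delete("sensor:42")
print(store.select("sensor:42"))             # None after deletion
```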
Question 7
Answers
-
Complex Logic Decomposition
-
Automated Dataset Execution
-
High Volume Tabular Storage
-
High Volume Linked Storage
Question 8
Answers
-
represent proven solutions to common problems
-
Big Data ________ are (partially or entirely) applied by implementing different combinations of Big Data mechanisms
-
security layer
-
Note that the analysis layer may also encompass some level of data visualization features
Question 9
Answers
-
is a coarse-grained pattern comprised of a set of finer-grained patterns. Singled out in this catalog are some of the more common and important combinations of patterns, each of which is classified as a __________
-
Each _________ is represented by a hierarchy comprised of core (required) member patterns and extension (optional) patterns
-
However, with the passage of time, as more Big Data solutions are built and their complexity increases, additional Big Data mechanisms are introduced
-
Random Access Storage
Question 10
Answers
-
Core patterns are connected via solid lines, and extension patterns are connected via dashed lines
-
document the effects of applying multiple patterns together
-
A _________ can represent a set of patterns that are applied together in order to establish a specific set of design characteristics. This would be referred to as joint application
-
Alternatively, the member patterns that comprise a __________ can represent a set of related features provided by a particular environment. In this case, a coexistent application of patterns establishes a "solution environment" that may be realized by a combination of tools and technologies
Question 11
Question 12
Question 13
Answers
-
are comprised of specific combinations of core (required) and extension (optional) member patterns
-
It helps define policies for: acquiring data from internal and external sources, which fields need to be anonymized/removed/encrypted, what constitutes personally identifiable information, how processed data should be persisted, the publication of the analytics' results and how long the data should be stored
-
When an event data transfer engine is used, the ingested data can normally be filtered out in-flight via the removal of unwanted or corrupt data
-
The extent to which an enterprise can benefit from a Big Data solution is limited when it is deployed in isolation from the rest of the traditional enterprise systems
Question 14
Question
Big Data technology artifacts
Answers
-
Each pattern is associated with one or more mechanisms that represent common _______
-
visualization layer
-
Correct access levels need to be consistently configured across all required resources, such as a storage device or a processing engine
-
It is important to assess the interoperability and extensibility of each ________ so that upgrading a single ____________ does not impact the functionality of other ______
Question 15
Question
Pattern-Mechanism Associations
Answers
-
A mechanism may either be implemented when the pattern is applied, or it may be directly affected by the application of the pattern
-
Not all of ________ may be required to apply the pattern. Sometimes one or more associated mechanisms may act as an alternative to others for how the pattern is applied
-
For regular exports, the file data transfer engine can be configured via a workflow engine to run at regular intervals
-
Correspondingly, an ____________ (apart from other specifications) generally includes the specifications of multiple component architectures
Question 16
Question
Pattern-Mechanism Associations
Answers
-
The application of a pattern may not be limited to the use of its associated mechanisms. Other required components or artifacts are explained as part of the pattern descriptions
-
Note also that mechanisms are not associated with compound patterns
-
However, when performing deep analytics, such as in the case of predictive and prescriptive analytics, an analytics engine also exists
-
Due to its enterprise-wide focus, the _____________ can also be considered as a reference point for understanding what constitutes the enterprise infrastructure
Question 17
Answers
-
represent technology artifacts that can be combined to form Big Data architectures
-
A ________ provides the ability to compress and decompress data in a Big Data platform
-
As a result, the corresponding architecture of the __________ also gets complicated
-
The ___________ provides an opportunity for developing an understanding of the analysis results in a graphical manner
Question 18
Question
Big Data Mechanisms
Answers
-
Serialization Engine
-
Compression Engine
-
Visualization Engine
-
Relational Sink
Question 19
Question
Big Data Mechanisms
Answers
-
Security Engine
-
Cluster Manager
-
Data Governance Manager
-
Productivity Portal
Question 20
Answers
-
is the process of transforming objects or data entities into bytes for persistence (in memory or on disk) or transportation from one machine to another over a network
-
In Big Data platforms, ________ is required for establishing communication between machines by exchanging messages between them, and for persisting data
-
To make data processing easier by not having to deal with the intricacies of processing engines, the ________ pattern can be applied, which uses a query engine to abstract away the underlying processing engine
-
The ________ bytes can either be encoded using a binary format or a plain-text format
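The definition above — transforming objects or data entities into bytes, encoded in either a binary or a plain-text format — can be illustrated with Python's standard library, where json stands in for a plain-text format and pickle for a binary one:

```python
import json
import pickle

# A data entity to be persisted or transported over a network.
record = {"id": 7, "reading": 21.5, "unit": "C"}

# Plain-text serialization: the bytes are human-readable JSON.
text_bytes = json.dumps(record).encode("utf-8")

# Binary serialization: compact, but opaque without the decoder.
binary_bytes = pickle.dumps(record)

# Deserialization restores the original data entity from either format.
assert json.loads(text_bytes.decode("utf-8")) == record
assert pickle.loads(binary_bytes) == record
```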
Question 21
Answers
-
The opposite transformation process from bytes to objects or data entities is called _________
-
The _______ pattern is normally applied together with the Large-Scale Batch Processing pattern as part of a complete solution
-
The _________ pattern is primarily implemented by using an event data transfer engine that is built on a publish-subscribe model and further uses a queue to ensure availability and reliability
-
A documentation of the _________ further helps to ascertain which maturity level of the analytics the enterprise is currently at
Question 22
Question
Serialization Engine
Answers
-
The ________ provides the ability to serialize and deserialize data in a Big Data platform
-
Different ____________ may provide different levels of speed, extensibility and interoperability
-
For example, a data processing job needs to be partitioned first to act on sub-groups of data. After processing, the results from each partition need to be consolidated together
-
A file data transfer engine is generally used to implement this design pattern that can further be encapsulated via the productivity portal
Question 23
Question
Serialization Engine
Answers
-
Ideally, a __________ should serialize/deserialize data at a fast speed, be amenable to future changes and work with a variety of data producers and consumers
-
These goals are achieved in part by serializing and deserializing data into and out of non-proprietary formats, such as XML, JSON and BSON
-
Generally, these multiple processing runs are connected together using the provided functionality within the processing engine or through further application of the Automated Dataset Execution pattern
-
The ___________ pattern is associated with the data transfer engine (relational), storage device, workflow engine and productivity portal mechanisms
Question 24
Answers
-
is the process of compacting data in order to reduce its size, whereas decompression is the process of uncompacting data in order to bring the data back to its original size
-
The _________ compound pattern represents a part of a Big Data solution environment capable of storing high-volume and high-variety data and making it available for indexed, ___________
-
The ______ consists of the storage device(s) that store the acquired data and generally consists of a distributed file system and at least one NoSQL database
-
A ____________ represents the design of how a single software program, representing a module, is structured within a modular/distributed software environment
Question 25
Answers
-
A ________ provides the ability to compress and decompress data in a Big Data platform
-
In Big Data environments, there is a requirement to acquire and store as much data as possible in order to derive the largest potential value from analysis.
-
governance layer
-
Intermediate Results Storage (optional)
Question 26
Answers
-
However, if data is stored in an uncompressed form, the available storage space may not be efficiently utilized
-
As a result, data compression can be used to effectively increase the storage capacity of disk/memory space. In turn, this helps to reduce storage cost
-
Due to the aforementioned reasons, a Big Data solution is generally integrated with the rest of the enterprise IT systems in order to provide maximum value
-
By using a point and click interface to work with the Big Data platform, the _______ makes it easier and quicker to populate the Big Data platform with the required data, manage and process that data and export the processed results
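The storage saving that compression provides, as described in the answers above, can be demonstrated with Python's gzip module; the repetitive sample data is invented for illustration:

```python
import gzip

# Highly repetitive data, as is common with logs and sensor readings,
# compresses very well.
raw = b"temperature=21.5;unit=C;" * 10_000

compressed = gzip.compress(raw)       # compaction reduces the size
restored = gzip.decompress(compressed)

assert restored == raw                # decompression is lossless
assert len(compressed) < len(raw)     # storage footprint shrinks
print(len(raw), len(compressed))
```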
Question 27
Answers
-
A Big Data platform is a combination of multiple resources, which are provided by different mechanisms, each with its own unique security configuration. This requires developing and maintaining separate security policies and API-based integration
-
In a clustered environment, this can become cumbersome and difficult to maintain, especially where data is accessed across the enterprise with varying levels of authorization
-
is the process of transforming objects or data entities into bytes for persistence (in memory or on disk) or transportation from one machine to another over a network
-
However, a __________ goes beyond data curation and includes data processing, analysis and visualization technologies as well
Question 28
Answers
-
Correct access levels need to be consistently configured across all required resources, such as a storage device or a processing engine
-
Instead of configuring access for each resource individually, a ___________ can be used
-
A ______ acts as a single point of contact for securing a Big Data platform, providing authentication, authorization and access auditing features
-
It acts as a perimeter guard for the cluster with centralized security policy declaration and management, enabling role-based security
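A minimal sketch of the centralized, role-based authorization described above, where a single point of contact checks every access instead of per-resource configuration; the roles, resource names and policy table are invented for illustration:

```python
# Centralized policy declaration: each role maps to the resources
# it may access (roles and resource names are made up).
POLICY = {
    "analyst": {"storage-device", "query-engine"},
    "admin": {"storage-device", "query-engine", "processing-engine"},
}

def is_authorized(role: str, resource: str) -> bool:
    # Single point of contact: every access check goes through here
    # rather than being configured on each resource individually.
    return resource in POLICY.get(role, set())

print(is_authorized("analyst", "query-engine"))       # True
print(is_authorized("analyst", "processing-engine"))  # False
```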
Question 29
Answers
-
For enhanced data access security, a ________ may provide fine-grained control over how data is accessed from a range of storage devices and assist with addressing regulatory compliance concerns
-
A _______ may also integrate with enterprise identity and access management (IAM) systems to enable single sign-on (SSO)
-
Furthermore, _________ provide data confidentiality by enabling data encryption for at-rest (data stored in a storage device) and in-motion (data in transit over a network) data
-
It helps define policies for: acquiring data from internal and external sources, which fields need to be anonymized/removed/encrypted, what constitutes personally identifiable information, how processed data should be persisted, the publication of the analytics' results and how long the data should be stored
Question 30
Answers
-
A Big Data platform generally exists as a cluster-based environment ranging from a few to a large number of nodes
-
Due to the multi-node nature of such an environment, the provisioning, configuration, day-to-day management and health monitoring of a cluster can be a daunting task
-
Cloud-based Big Data Storage (optional)
-
High Volume Binary Storage
Question 31
Answers
-
A ________ provides centralized management of a cluster, enabling streamlined deployment of core services over the cluster and their subsequent monitoring
-
In the context of a Big Data platform, a service, such as MapReduce (a processing engine) or HDFS (a distributed file system), refers to a background process that executes a Big Data mechanism
-
Instead of individually installing, managing and monitoring services on each node, a ________ provides a dashboard from where these tasks can be centrally performed via simple mouse clicks instead of the authoring and running of command line scripts
-
A _______ provides a centralized view to monitor cluster health, service status and resource utilization. It also provides for the configuration of various node-level and cluster-level alerts
Question 32
Answers
-
the ______ supports the deployment of new services and the addition of nodes to a cluster
-
the _______ further helps reduce cluster administration overhead and makes diagnosis more efficient
-
Note that the ____________ pattern also covers the acquisition of semi-structured data, such as XML or JSON-formatted data
-
Based on the type and location of the data sources, this layer may consist of more than one data transfer engine mechanism
Question 33
Answers
-
For instance, the _______ makes it quicker and easier to find out why a particular service responsible for a specific storage device is not running
-
The net effect is streamlined _____________ that minimizes cluster downtime and enables reliable and timely Big Data analysis
-
Furthermore, the _________ can integrate with other infrastructure management tools to provide a unified view and the ability to perform performance tuning
-
This pattern is associated with the storage device (column-family) and serialization engine mechanisms
Question 34
Question
Data Governance Manager
Answers
-
controls the management of the data lifecycle to ensure that quality data is available in a controlled, secure and timely fashion
-
helps ensure regulatory compliance, risk management and the establishment of data lineage
-
The _________ provides an understanding of the range of datasets used by different Big Data solutions
-
One of the main differentiating characteristics of Big Data environments when compared with traditional data processing environments is the sheer amount of data that needs to be processed
Question 35
Question
Data Governance Manager
Answers
-
in a Big Data environment, the variety characteristic coupled with unknown access scenarios can make __________ a challenging task
-
A _________ is a tool with features for performing a range of common ____________ tasks in a centralized manner
-
A _________ can provide information on where the dataset resides, who the data owner/steward is, what the format of the data is, when the dataset was acquired, the source of the dataset, expiry date (if any), schema information via metadata search and a lineage viewer for establishing provenance
-
A _________ supports data lifecycle management through: the authoring of data retention and eviction policies, the establishment of security policies that specify the condition under which encryption is applied to a dataset or specific fields of a dataset, the creation of policies that establish disaster recovery management procedures
Question 36
Question
Data Governance Manager
Answers
-
Furthermore, a ___________ can provide information on the level of trust and sensitivity of data
-
This information includes whether or not the data can be stored in a cloud environment, as well as any geographical limitations for data persistence
-
To ensure enhanced data confidentiality and privacy within a cluster, an advanced _________ may further enable fine-grained control over data storage by specifying which nodes can store which types of datasets
-
Automatic Data Replication and Reconstruction (core)
Question 37
Question
Visualization Engine
Answers
-
To make sense of large amounts of data and to perform exploratory data analysis in support of finding meaningful insights, it is important to correctly interpret the results obtained from data analysis
-
This interpretation is dependent upon the Big Data platform's ability to present data in visual form
-
In the context of a Big Data platform, a service, such as MapReduce (a processing engine) or HDFS (a distributed file system), refers to a background process that executes a Big Data mechanism
-
A Big Data platform is a combination of multiple resources, which are provided by different mechanisms, each with its own unique security configuration. This requires developing and maintaining separate security policies and API-based integration
Question 38
Question
Visualization Engine
Answers
-
A ________ graphically plots large amounts of data using traditional visualization techniques, including the bar chart, line graph and pie chart, alongside contemporary, Big Data-friendly visualization techniques, such as heat maps, word clouds, maps and spark charts
-
Additionally, a __________ may allow the creation of dashboards with filtering, aggregation, drill-down and what-if analysis features, along with the exportation of data for specific views
-
A ________ greatly enhances the productivity of data scientists and business analysts
-
A ________ provides a foundation for creating self-service visualizations for business intelligence (BI) and analytics
Question 39
Question
Productivity Portal
Answers
-
A Big Data platform provides a range of features, including data import, storage, processing and analysis, as well as workflow creation, via various mechanisms
-
Interacting with each of these mechanisms using their default interfaces can be difficult and time-consuming due to the mechanisms' non-uniform natures. Further tools may need to be installed in order to make this interaction easier and to make sense of the processing results
-
the term ___________ refers to a technology environment comprised of Big Data mechanisms and technology artifacts that serves as a platform for developing Big Data solutions
-
Multiple __________ are generally deployed in an enterprise for fulfilling different business requirements
Question 40
Question
Productivity Portal
Answers
-
As a result, it takes longer to get from data import to data visualization, which further impacts the productivity and the overall value attributed to Big Data exploration and insight discovery (the value characteristic)
-
A ________ provides a centralized graphical user interface (GUI) for performing key activities that are a part of working with Big Data, including importing and exporting data, manipulating data storage, running data processing jobs, creating and running workflows, querying data, viewing schemas and performing searches
-
By understanding the makeup of each _______ the corresponding data ingestion requirements are better understood
-
The ________ pattern can be applied to automatically export data from the Big Data platform as a delimited or a hierarchical file
Question 41
Question
Productivity Portal
Answers
-
A ________ establishes a unified interface for configuring and managing the underlying mechanisms of the Big Data solution environment, such as establishing settings for the security engine
-
Additionally, a __________ may encapsulate a visualization engine in order to provide more meaningful, graphical views of data
-
By using a point and click interface to work with the Big Data platform, the _______ makes it easier and quicker to populate the Big Data platform with the required data, manage and process that data and export the processed results
-
Big Data ________ are (partially or entirely) applied by implementing different combinations of Big Data mechanisms
Question 42
Question
Shared-Everything Architecture
Answers
-
is a machine-level architecture where multiple processors (CPUs) share memory and disk storage
-
can be implemented in two different ways: symmetric multiprocessing and distributed shared memory
-
This is due to the fact that the same _________ can be physically implemented in a number of ways. The physical implementation is dependent upon the underlying technology that is used
-
A ________ represents the high-level components of a system, their functionality and how they are connected with one another
Question 43
Question
Shared-Everything Architecture
Answers
-
It should be noted that SMP and DSM apply to a single machine
-
is suitable for transactional workloads where data being processed is small and can be stored on a single machine
-
As all the resources (processor, memory and disk) exist within the boundaries of a single machine, data exchange only occurs within those boundaries
-
Therefore, transactional data can be processed quickly without any latency using a simple programming framework
Question 44
Question
Shared-Everything Architecture
Answers
-
Since all of the resources are tightly coupled together in a ___________, scalability becomes an issue
-
A storage area network (SAN) or a network-attached storage (NAS) solution can possibly be attached to a high-end multiprocessor machine to process large amounts of data
-
However, the network then becomes a bottleneck, with the data transfer taking longer than the actual data processing because the data needs to be transferred across the network
-
A ________ can be augmented with a SAN, which greatly increases storage capacity. However, the data now needs to be transferred across the network for processing, which adds to the data processing latency
Question 45
Question
Shared-Everything Architecture
Answers
-
With a ___________, in order to cope with greater resource demands for CPU and/or disk space, the only option is to scale up by replacing existing machines with higher-end (expensive) machines. Scaling up allows more processing and offers greater storage
-
However, any type of architecture that relies upon vertical scaling has an upper limit due to technology constraints such as maximum number of processors or memory limitations
-
Once the limit is reached, the only option is to scale out. Scaling out is a Big Data processing requirement that is not supported by _________
-
In the case that stored value conforms to a structure, such as a log file, the field names, along with field type information, are also recorded
Question 46
Question
Symmetric Multiprocessing (SMP)
Answers
-
memory is pooled and shared between all processors. ______ is also known as uniform memory access (UMA)
-
In most cases, the ingested data is first stored on the distributed file system in a compressed form (apart from removal of unwanted and corrupt data)
-
the acquisition of new hardware resources
-
Although expensive, _________ databases provide atomicity, consistency, isolation and durability (ACID) compliance while supporting the querying of data using Structured Query Language (SQL)
Question 47
Question
Distributed Shared Memory (DSM)
Answers
-
multiple pools of memory exist. Thus, memory is not shared between processors. ____ is also known as non-uniform memory access (NUMA)
-
As a result, data compression can be used to effectively increase the storage capacity of disk/memory space. In turn, this helps to reduce storage cost
-
Each pattern is associated with one or more mechanisms that represent common _______
-
For ________, apart from documenting the attributes and types of each entity, the possible connections (the edges) between entities are also recorded
Question 48
Question
Shared-Nothing Architecture
Answers
-
A ________ is a type of distributed architecture that consists of fully independent machines. The machines each have their own processors, memory, disk and operating system and are networked together as a single system
-
The ________ is self-sufficient and is free of any shared resources. For this reason, it is a highly scalable architecture that provides scale-out support, meaning extra machines can be added as required.
-
To ensure enhanced data confidentiality and privacy within a cluster, an advanced _________ may further enable fine-grained control over data storage by specifying which nodes can store which types of datasets
-
A ____________ is somewhat similar to a component architecture. However, in practice a single mechanism may itself be comprised of more than one component
Question 49
Question
Shared-Nothing Architecture
Answers
-
Although highly scalable, this architecture approach requires the use of complex distributed programming frameworks
-
For example, a data processing job needs to be partitioned first to act on sub-groups of data. After processing, the results from each partition need to be consolidated together
-
This design pattern is generally applied together with the Stream Access Storage pattern
-
The ______ consists of the storage device(s) that store the acquired data and generally consists of a distributed file system and at least one NoSQL database
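The divide-and-conquer flow mentioned above — partition a data processing job to act on sub-groups of data, then consolidate the partial results — can be sketched sequentially; in a real shared-nothing cluster each partition would run on its own machine. The function names are illustrative, not from any particular framework:

```python
def process_partition(partition):
    # Stand-in for the per-partition work of a distributed job:
    # here, summing one sub-group of a large dataset.
    return sum(partition)

def consolidate(partial_results):
    # Merge the per-partition results into the final answer.
    return sum(partial_results)

dataset = list(range(1_000))

# 1. Partition: split the job so each node acts on a sub-group of data.
partitions = [dataset[i::4] for i in range(4)]

# 2. Process: in a real cluster each partition runs on a separate machine.
partials = [process_partition(p) for p in partitions]

# 3. Consolidate: merge the partial results together.
assert consolidate(partials) == sum(dataset)
```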
Question 50
Question
Shared-Nothing Architecture
Answers
-
Usual data processing techniques employed within a ___________ include data sharding and replication where large datasets are divided and replicated across multiple machines
-
This works well for Big Data, where a single dataset may be divided across several machines due to its volume
-
In this way, with Big Data processing, data and processing resources can be co-located, thereby reducing data transfer frequency and volume
-
the __________ achieves this functionality via the data _________ manager
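The sharding technique described above, where large datasets are divided across multiple machines, is often implemented by hashing each record's key to select a node. A hedged sketch with invented node names — a stable hash keeps the key-to-node mapping consistent across processes:

```python
import hashlib

NODES = ["node-0", "node-1", "node-2", "node-3"]

def shard_for(key: str) -> str:
    # Hash the record key and map it onto one of the cluster nodes.
    # A stable hash (not Python's built-in hash()) is used so the
    # mapping does not change between runs.
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

# Each record lands on exactly one shard, co-locating the data with
# the machine that will process it.
for key in ("user:1", "user:2", "user:3"):
    print(key, "->", shard_for(key))
```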
Question 51
Question
Massively Parallel Processing (MPP)
Answers
-
is an architecture that can be applied to distributed query processing in a shared-nothing architecture
-
is mainly employed by high-end databases and database appliances like IBM Netezza and Teradata
-
An __________ describes the design used to integrate two or more applications, and further encompasses the technology architectures of the integrated applications.
-
the architecture of a single Big Data mechanism
Question 52
Question
Massively Parallel Processing (MPP)
Answers
-
Databases based on ______ architecture generally use high-end hardware and a proprietary interconnect to link machines in order to enable the throughput required for high-speed analytics
-
Although expensive, _________ databases provide atomicity, consistency, isolation and durability (ACID) compliance while supporting the querying of data using Structured Query Language (SQL)
-
By understanding the makeup of each _______ the corresponding data ingestion requirements are better understood
-
Alternatively, the member patterns that comprise a __________ can represent a set of related features provided by a particular environment. In this case, a coexistent application of patterns establishes a "solution environment" that may be realized by a combination of tools and technologies
Question 53
Question
Massively Parallel Processing (MPP)
Answers
-
databases generally require data to exist in a structured format at the time of loading the data into the database. In other words, a schema needs to exist
-
This prior knowledge about the structure of the data makes ___________ databases very fast at querying large datasets
-
The structured format requirement introduces the need for an extra ETL step to be performed before unstructured data can be loaded into the __________ database
-
It is advisable to develop an inventory of all _________ to avoid duplication of datasets. This also helps with identifying relevant datasets when performing exploratory analysis
Question 54
Answers
-
Similar to MPP, __________ (a batch processing engine) is a distributed data processing framework that requires a shared-nothing architecture
-
makes use of commodity hardware where machines are generally networked using Local Area Network (LAN) technology
-
the _______ further helps reduce cluster administration overhead and makes diagnosis more efficient
-
A productivity portal normally encapsulates a relational data transfer engine for point-and-click import
Question 55
Answers
-
___________-based processing platforms, such as Hadoop, do not require knowledge of data structure at load time.
-
Therefore, ______ is ideal for processing semi-structured and unstructured data in support of executing analytical queries
-
However, without any knowledge of the data structure, data processing is slower with ________ as compared to MPP due to the inability to optimize query execution
-
Both MPP databases and __________ make use of the shared-nothing architecture and are based on the divide-and-conquer principle
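The contrast drawn above — MPP databases need a schema at load time, while Hadoop-style platforms impose structure only when processing runs — is the schema-on-read idea. A hedged sketch with invented sample records:

```python
import json

# Schema-on-read sketch: raw semi-structured records are loaded as-is,
# and structure is imposed only when a query runs.
raw_lines = [
    '{"user": "a", "clicks": 3}',
    '{"user": "b", "clicks": 7, "referrer": "news"}',  # extra field is fine
]

# "Load" step: store the lines untouched, no schema required.
stored = list(raw_lines)

# "Query" step: parse and project the needed field at processing time.
total_clicks = sum(json.loads(line)["clicks"] for line in stored)
print(total_clicks)   # 10
```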
Question 56
Answers
-
Both MPP systems and __________ can be used for Big Data processing. However, from a scalability point of view, MPP systems, when compared with ________, provide limited support for scaling out, as they are generally appliance-based
-
MPP systems are also a costlier option than ___________, which leverages inexpensive commodity hardware
-
_________, a framework for processing data, requires interaction via a general-purpose programming language, such as Java
-
A ________ provides raw storage where the value (the stored data) can be of any type, such as a file or an image, and is accessible via a key
Question 57
Question
Technology Infrastructure
Answers
-
In the context of software systems, a _________ represents the underlying environment that enables the design and execution of a software system
-
A ____________ defines the overall processing and storage capabilities of an IT enterprise. As well, a ____________ sets the constraints within which the technology architecture needs to be designed
-
Efficient processing of large amounts of data demands an offline processing strategy, as dictated by the ___________ design pattern
-
As a result, the corresponding architecture of the __________ also gets complicated
Question 58
Question
Technology Architecture
Answers
-
In the context of software systems, a _________ represents the fundamental design of a software system
-
A __________ can be defined for varying levels of software artifacts ranging from a single software library to the set of software systems across the entire IT enterprise
-
With a reasonable amount of data acquisition, IT spending only increases slightly with the passage of time. As the amount of acquired data increases exponentially, there is a tendency for IT spending to increase exponentially as well
-
In a Big Data solution environment, quite often data needs to be imported from relational databases into the Big Data platform for various data analysis tasks
Question 59
Question
Traditional Architecture Types
Answers
-
component architecture
-
application architecture
-
Poly Source
-
storage layer
Question 60
Question
Traditional Architecture Types
Answers
-
integration architecture
-
enterprise technology architecture
-
High Volume Binary Storage
-
High Volume Hierarchical Storage
Question 61
Question
component architecture
Answers
-
A ____________ represents the design of how a single software program, representing a module, is structured within a modular/distributed software environment
-
Feature-wise, a module is inherently different from a software program, as the former only provides a specific set of functionality for performing a subset of operations when compared against the complete software program
-
The modules are dependent on other modules for providing the full set of functionality and hence are designed to be composable
-
Although expensive, _________ databases provide atomicity, consistency, isolation and durability (ACID) compliance while supporting the querying of data using Structured Query Language (SQL)
Frage 62
Frage
application architecture
Antworten
-
An __________ represents the design and structure of a complete software system that can be deployed on its own
-
In a modular/distributed software environment, an _____________ generally consists of a number of modules and some storage
-
Correspondingly, an ____________ (apart from other specifications) generally includes the specifications of multiple component architectures
-
is the process of transforming objects or data entities into bytes for persistence (in memory or on disk) or transportation from one machine to another over a network
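The serialization concept in the answer above can be sketched with Python's standard library; pickle and json stand in for the binary and text-based persistence formats a Big Data platform might use, and the record contents are purely illustrative:

```python
import json
import pickle

# Serialization: transform an object into bytes for persistence or transport.
record = {"product_id": 42, "name": "sensor", "readings": [1.5, 2.0]}

# Binary serialization with pickle (a Python-specific format)
binary_bytes = pickle.dumps(record)

# Text-based serialization with JSON (language-neutral), encoded to bytes
json_bytes = json.dumps(record).encode("utf-8")

# Deserialization: the opposite transformation, from bytes back to objects
assert pickle.loads(binary_bytes) == record
assert json.loads(json_bytes.decode("utf-8")) == record
```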
Frage 63
Frage
integration architecture
Antworten
-
An __________ describes the design used to integrate two or more applications, and further encompasses the technology architectures of the integrated applications.
-
This generally involves connectors, middleware and any custom developed components
-
A documented _____________ provides a point of reference for ensuring continued integration in the face of changes to the integrated applications' architecture
-
Furthermore, the capabilities of this layer indicate the kinds of Big Data solutions that can be built
Frage 64
Frage
enterprise technology
Antworten
-
An _________ architecture represents an ____________ landscape including its respective architectures
-
In the case of continuously arriving data, data is first accumulated to create a batch of data and only then processed
-
In a Big Data environment, large volume not only refers to tall datasets (a large number of rows) but also to wide datasets (a large number of columns)
-
Furthermore, the capabilities of this layer indicate the kinds of Big Data solutions that can be built
Frage 65
Frage
enterprise technology architecture
Antworten
-
In contrast to the other three technology architectures, which can be documented before their development, the ______________ is generally documented once the other architectures are in place
-
The scope of the _____________ encompasses component, application and integration architectures
-
Due to its enterprise-wide focus, the _____________ can also be considered as a reference point for understanding what constitutes the enterprise infrastructure
-
Therefore ______ is ideal for processing semi-structured and unstructured data in support of executing analytical queries
Frage 66
Frage
Big Data Mechanisms Architecture
Antworten
-
the architecture of a single Big Data mechanism
-
The ___________ refers to the technology architecture of an individual ___________ that provides a specific functionality, such as a data transfer engine or a query engine
-
As a result, it takes longer to get from data import to data visualization, which further impacts the productivity and the overall value attributed to Big Data exploration and insight discovery (the value characteristic)
-
However, if internal data is also integrated, the same solution will provide business-specific results
Frage 67
Frage
Big Data Mechanisms Architecture
Antworten
-
A ____________ is somewhat similar to a component architecture. However, in practice a single mechanism may itself be comprised of more than one component
-
Unlike traditional component architecture, the architectures of many ___________ are available due to the many active, open source projects that create and sustain them
-
A _____________ is generally a complete software package that can exist on its own but only truly realizes its full potential when it is combined with other __________
-
For example, a storage device and a processing engine can exist on their own. However, the real value of these two is accomplished when the processing engine retrieves data from the storage device and processes the data to obtain meaningful results
Frage 68
Frage
Big Data Mechanisms Architecture
Antworten
-
It is important to assess the interoperability and extensibility of each ________ so that upgrading a single ____________ does not impact the functionality of other ______
-
For example, a resource manager should be compatible with different types of processing engines (batch and realtime) and should provide extension points for integrating future, processing-specific processing engines
-
Similarly, when upgraded, the resource manager should provide backward compatibility with older processing engines to ensure disruption-free operation
-
A ________ provides centralized management of a cluster, enabling streamlined deployment of core services over the cluster and their subsequent monitoring
Frage 69
Frage
Big Data Solution Architecture
Antworten
-
The architecture of a set of Big Data Mechanisms assembled into a solution
-
The ___________ represents a solution environment built to address a specific Big Data problem, such as realtime sensor data analysis or a recommendation system
-
The __________ is associated with the storage device mechanism
-
An _________ architecture represents an ____________ landscape including its respective architectures
Frage 70
Frage
Big Data Solution Architecture
Antworten
-
Such a solution environment represents a set of multiple Big Data mechanisms that collectively provide the required business functionality
-
In a Big Data environment, the term __________ is similar to the term application architecture. However, in the domain of Big Data, it is the collective application of Big Data mechanisms that results in the creation of a __________. This is different from the concept of a traditional, packaged software application
-
A ___________ is generally a Big Data pipeline comprising multiple stages where complex processing is broken down into modular steps called tasks
-
Each task in a Big Data pipeline can make use of a processing engine mechanism, such as MapReduce or Spark, or a query engine mechanism, such as Hive or Pig, to perform operations on data
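The pipeline-of-tasks idea above can be sketched in plain Python; the task names (ingest, cleanse, transform, analyze) are hypothetical stand-ins for jobs that would run on a processing or query engine such as MapReduce, Spark, Hive or Pig:

```python
# A minimal sketch of a Big Data pipeline decomposed into modular tasks.
# Task and stage names are illustrative, not from any specific engine.

def ingest():
    # In practice this task might use a data transfer engine (e.g. file import)
    return ["  Alice,30 ", "Bob,25", "", "Carol,35"]

def cleanse(records):
    # Data wrangling: strip whitespace, drop empty records
    return [r.strip() for r in records if r.strip()]

def transform(records):
    # Parse CSV-like rows into structured tuples
    return [(name, int(age)) for name, age in (r.split(",") for r in records)]

def analyze(rows):
    # A simple aggregate standing in for a batch processing job
    return sum(age for _, age in rows) / len(rows)

# Each task consumes the previous task's output, forming the pipeline stages
pipeline = [cleanse, transform, analyze]
result = ingest()
for task in pipeline:
    result = task(result)

print(result)  # average age of the cleansed records: 30.0
```

Because each task only depends on the previous task's output, individual steps can be swapped for engine-backed implementations without changing the pipeline's shape.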
Frage 71
Frage
Big Data Solution Architecture
Antworten
-
Complex ___________ may involve more than one data pipeline, for example, one for realtime data processing and the other for batch data processing
-
Multiple __________ are generally deployed in an enterprise for fulfilling different business requirements
-
A documentation of the corresponding architectures provides an understanding of the utilization levels of common Big Data mechanisms
-
This helps with establishing the scalability requirements of each Big Data mechanism and determining any potential performance bottlenecks
Frage 72
Frage
Big Data Integration Architecture
Antworten
-
The architecture that consists of integrating a Big Data solution with the traditional enterprise systems
-
The extent to which an enterprise can benefit from a Big Data solution is limited when it is deployed in isolation from the rest of the traditional enterprise systems
-
The ________ pattern is associated with the query engine, processing engine, storage device, resource manager and coordination engine mechanisms
-
With a ___________ architecture, in order to cope with greater resource demands for CPU and/or disk space, the only option is to scale up by replacing existing machines with higher-end (expensive) machines. Scaling up allows more processing and offers greater storage
Frage 73
Frage
Big Data Integration Architecture
Antworten
-
A Big Data solution that is integrated with other parts of an enterprise ecosystem provides maximum value because the data it contains or the analytics results it generates can be used by other traditional enterprise systems, such as the enterprise data warehouse or an ERP system
-
Similarly, a Big Data Solution that only makes use of external datasets may produce generic or out-of-context results that are of little value to the business
-
However, if internal data is also integrated, the same solution will provide business-specific results
-
Due to the aforementioned reasons, a Big Data solution is generally integrated with the rest of the enterprise IT systems in order to provide maximum value
Frage 74
Frage
Big Data Integration Architecture
Antworten
-
The resulting architecture is known as the __________, which includes the architecture of the Big Data solution, any connected enterprise systems and integration components
-
With respect to a Big Data solution, there are generally two integration points: one for importing the raw data that needs to be processed and the other for exporting the results or, in some cases, exporting the ingested cleansed data
-
One prominent area within the field of pattern identification is the analysis of connected entities. Due to the large volume of data in Big Data environments, efficient and timely analysis of such data requires specialized storage
-
Big Data solutions demand opposing access requirements when it comes to raw versus processed data
Frage 75
Frage
Big Data Integration Architecture
Antworten
-
For this, multiple data transfer engines or connectors, such as ODBC, are employed
-
Instead of using multiple point-to-point connections (connectors) between each Big Data solution and the traditional system, a single data bus can be used that provides a standardized integration approach across multiple Big Data solutions
-
The _________ compound pattern represents a part of a Big Data solution environment capable of storing high-volume and high-variety data and making it available for indexed, ___________
-
Although highly scalable, this architecture approach requires the use of complex distributed programming frameworks
Frage 76
Frage
Big Data Platform Architecture
Antworten
-
The architecture of the entire ____________ that enables the execution of multiple Big Data solutions
-
The __________ is the underlying technology architecture that supports the execution of multiple Big Data solutions
-
Furthermore, the storage device automatically detects when a replica becomes unavailable and recreates the lost replica from one of the available replicas
-
can be implemented in two different ways: symmetric multiprocessing and distributed shared memory
Frage 77
Frage
Big Data Platform Architecture
Antworten
-
This type of architecture documents the underlying Big Data mechanisms that have been assembled in different combinations to construct multiple Big Data solutions
-
This generally represents a layered architecture where each top layer makes use of the successive bottom layer
-
The typical makeup of a __________ includes the storage layer, processing layer, analysis layer and visualization layer
-
At the start of a Big Data initiative, the _________ may only consist of a rudimentary set of Big Data mechanisms for supporting a simple Big Data solution
Frage 78
Frage
Big Data Platform Architecture
Antworten
-
However, with the passage of time, as more Big Data solutions are built and their complexity increases, additional Big Data mechanisms are introduced
-
As a result, the corresponding architecture of the __________ also gets complicated
-
A documentation of the _________ further helps to ascertain which maturity level of the analytics the enterprise is currently at
-
For example, the existence of an analytics engine in addition to a query engine indicates that the enterprise employs some level of predictive analytics
Frage 79
Frage
Big Data Platform Architecture
Antworten
-
An enterprise generally starts off or is already at the descriptive or diagnostic analytics maturity level and aims to move towards the predictive or prescriptive analytics maturity level
-
A _________ can also be considered a superset of the traditional data architecture, as the former includes the development of data architecture for both raw and processed data
-
Dataset Location: the location from where the data will be available, which can be internal or external to the enterprise, including the cloud
-
Furthermore, the storage device automatically detects when a replica becomes unavailable and recreates the lost replica from one of the available replicas
Frage 80
Frage
Big Data Platform Architecture
Antworten
-
It further includes decisions around which storage technologies to use, what the format and structure of processed data should be, and a dictionary of datasets available for developing Big Data solutions
-
However, a __________ goes beyond data curation and includes data processing, analysis and visualization technologies as well
-
The _______ pattern is normally applied together with the Large-Scale Batch Processing pattern as part of a complete solution
-
For _________, the description and type of the entity being stored is documented, such as product image and png
Frage 81
Frage
Logical Architecture
Antworten
-
A ________ represents the high-level components of a system, their functionality and how they are connected with one another
-
The term logical emphasizes the fact that the description of the architecture does not bear any resemblance to the physical implementation of the system
-
This is due to the fact that the same _________ can be physically implemented in a number of ways. The physical implementation is dependent upon the underlying technology that is used
-
For health monitoring purposes, the cluster manager gathers metrics from various components running within different layers, such as the storage, processing and analysis layers, and displays their current status using a dashboard
Frage 82
Frage
Big Data Analytics Logical Architecture
Antworten
-
A __________ defines the logical components required for the implementation of a Big Data analytics solution
-
It is a specialized form of a Big Data platform architecture that defines the Big Data mechanisms at each of its different layers, the responsibility of each of these layers and the generic flow of data between these layers
-
Data is ingested generally via file and/or relational data transfer engines, saved to the disk-based storage device and then processed using a ___________ engine
-
Dataset Decomposition
Frage 83
Frage
Big Data Analytics Logical Architecture
Frage 84
Frage
Big Data Analytics Logical Architecture
Antworten
-
processing layer
-
analysis layer
-
visualization layer
-
utilization layer
Frage 85
Frage
Big Data Analytics Logical Architecture
Antworten
-
management layer
-
security layer
-
governance layer
-
Productivity Portal
Frage 86
Antworten
-
The ____ comprises all the _______ that had been identified during the Data Identification stage of the Big Data analysis lifecycle
-
However, rather than consisting of ________ for a single Big Data solution, the _________ encompasses ___________ across all Big Data solutions
-
Dataset Type: the underlying format of the data produced by the source (structured, unstructured or semi-structured)
-
As all the resources (processor, memory and disk) exist within the boundaries of a single machine, data exchange only occurs within those boundaries
Frage 87
Antworten
-
It is important to understand that this layer does not form part of the physical Big Data analytics architecture, as the data is produced by a source, such as an API, a database or web location, that is part of a separate system
-
The _________ provides an understanding of the range of datasets used by different Big Data solutions
-
It is advisable to develop an inventory of all _________ to avoid duplication of datasets. This also helps with identifying relevant datasets when performing exploratory analysis
-
By understanding the makeup of each _______ the corresponding data ingestion requirements are better understood
Frage 88
Antworten
-
Processing Big Data datasets involves the use of processing engines that require programming skills in order to work with them
-
This prior knowledge about the structure of the data makes ___________ databases very fast at querying large datasets
-
Access Type: does the data have open or restricted access?
-
Access Method: is data available via a simple connection or does it need scraping from a web resource?
Frage 89
Antworten
-
Access Cost: is the data available freely or is there a cost associated with this acquisition, such as from a data market?
-
Data Production Speed: the rate at which the data source generates the data
-
Dataset Location: the location from where the data will be available, which can be internal or external to the enterprise, including the cloud
-
By using a point and click interface to work with the Big Data platform, the _______ makes it easier and quicker to populate the Big Data platform with the required data, manage and process that data and export the processed results
Frage 90
Frage
data acquisition layer
Antworten
-
The _______ provides functionality for acquiring data from the sources in the data sources layer
-
Based on the type and location of the data sources, this layer may consist of more than one data transfer engine mechanism
-
In the context of software systems, a _________ represents the fundamental design of a software system
-
However, a __________ goes beyond data curation and includes data processing, analysis and visualization technologies as well
Frage 91
Frage
data acquisition layer
Antworten
-
For internal structured data sources, a relational data transfer engine can be used
-
For semi-structured and unstructured data sources, whether internal or external, an event or file data transfer engine can be used
-
In the case of realtime processing of data or stream analysis, an event data transfer engine is generally used
-
When an event data transfer engine is used, the ingested data can normally be filtered in-flight via the removal of unwanted or corrupt data
Frage 92
Frage
data acquisition layer
Antworten
-
Similarly, in the case of a relational data transfer engine, corrupt or unwanted data can be filtered out at the source by the specification of constrained selection criteria
-
However, in the case of a file transfer engine, the file needs to be ingested before it can be examined for the filtration process
-
The ________ provides an easy-to-interact interface for analyzing data in the storage layer and consists of the query and analytics engines
-
Although expensive, _________ databases provide atomicity, consistency, isolation and durability (ACID) compliance while supporting the querying of data using Structured Query Language (SQL)
Frage 93
Frage
data acquisition layer
Antworten
-
this layer also includes mechanisms for automatically appending metadata to the ingested data for assuring quality and maintaining provenance, as well as for compressing data
-
The Data Acquisition and Filtering stage of the Big Data analysis lifecycle is supported by the ________
-
Under certain circumstances, acquiring data may require API integration, which further warrants the development of custom (code) libraries or service development, which reside in this layer
-
the distributed file system automatically splits a large dataset into multiple smaller datasets that are then spread across the cluster
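The automatic splitting described above can be illustrated with a minimal sketch (not the HDFS API); the block size, node names and round-robin placement are simplifying assumptions, and replication is omitted:

```python
# Illustrative sketch: split a dataset into fixed-size blocks and spread
# them across cluster nodes, as a distributed file system does automatically.

BLOCK_SIZE = 8  # toy value; real systems use blocks of e.g. 128 MB
NODES = ["node-1", "node-2", "node-3"]

def split_and_place(data: bytes):
    # Cut the dataset into BLOCK_SIZE-byte blocks
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    # Place each block on a node round-robin; replication omitted for brevity
    return {i: (NODES[i % len(NODES)], block) for i, block in enumerate(blocks)}

placement = split_and_place(b"a large dataset spread across a cluster")
print(len(placement))  # number of blocks the dataset was split into
```

Reassembling the blocks in order recovers the original dataset, which is exactly what a distributed file system does transparently on read.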
Frage 94
Antworten
-
The ______ consists of the storage device(s) that store the acquired data and generally comprises a distributed file system and at least one NoSQL database
-
Note that in the case of realtime data processing, the ________ also consists of in-memory storage technologies that enable fast analysis of high velocity data as it arrives
-
Furthermore, in a production environment, the complete cycle needs to be repeated over and over again
-
Therefore ______ is ideal for processing semi-structured and unstructured data in support of executing analytical queries
Frage 95
Antworten
-
In most cases, the ingested data is first stored on the distributed file system in a compressed form (apart from removal of unwanted and corrupt data)
-
This is because a distributed file system provides the most inexpensive form of storing large volumes of data
-
From the distributed file system, data can be pre-processed and put into a more structured form using an appropriate NoSQL storage device
-
A structured (but not necessarily relational) form is required because the exploratory analysis of data and the derivation and application of statistical and machine learning models require data whose attributes can be accessed in a standardized manner
Frage 96
Antworten
-
Although the conversion to a structured form may not seem obvious in the case of applying semantic analysis techniques, even techniques such as text analytics first convert a document into a structured form before performing clustering, classification or searching
-
Data that has undergone transformation, validation and cleansing operations is generally stored in one of the NoSQL databases, namely key-value, column-family, document and graph NoSQL databases
-
Big Data ________ are (partially or entirely) applied by implementing different combinations of Big Data mechanisms
-
This interpretation is dependent upon the Big Data platform's ability to present data in visual form
Frage 97
Antworten
-
A ________ provides raw storage where the value (the stored data) can be of any type, such as a file or an image, and is accessible via a key
-
For _________, the description and type of the entity being stored is documented, such as product image and png
-
The modules are dependent on other modules for providing the full set of functionality and hence are designed to be composable
-
the _______ pattern is associated with the storage device (distributed file system) and processing engine (batch) mechanisms.
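The key-value storage device servicing insert, select and delete operations can be sketched as a thin wrapper around a dictionary; the class and key names below are illustrative, not a real NoSQL API:

```python
# Minimal in-memory sketch of a key-value storage device. The value is
# opaque to the store: it can be of any type, such as a file or an image.

class KeyValueStore:
    def __init__(self):
        self._data = {}

    def insert(self, key, value):
        self._data[key] = value

    def select(self, key):
        # Returns None for missing keys, like a simple lookup
        return self._data.get(key)

    def delete(self, key):
        self._data.pop(key, None)

store = KeyValueStore()
store.insert("product:42:image", b"\x89PNG...")  # raw binary value
store.insert("log:2024-01-01", "line1\nline2")   # text value
print(store.select("product:42:image"))
store.delete("log:2024-01-01")
```

Note how the store never inspects the value; all access goes through the key, which is why key-value databases are documented by recording the description and type of the stored entity.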
Frage 98
Antworten
-
A ________ is capable of storing each record in a hierarchical form that can be accessed via a key, imitating a physical document that can have multiple sections
-
For ________, the hierarchical structure of the different documents being stored, along with their types, is documented
-
Data Production Speed: the rate at which the data source generates the data
-
In most cases, the ingested data is first stored on the distributed file system in a compressed form (apart from removal of unwanted and corrupt data)
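The document store behavior described above can be sketched in a few lines; JSON stands in for the plain-text hierarchical encoding, and the class is a toy, not a real document NoSQL API:

```python
import json

# Sketch of a document store: each record is a hierarchical document
# accessible via a key, encoded as JSON before storage.

class DocumentStore:
    def __init__(self):
        self._docs = {}

    def insert(self, key, document: dict):
        self._docs[key] = json.dumps(document)  # encode hierarchically

    def select(self, key) -> dict:
        return json.loads(self._docs[key])      # decode on retrieval

    def update(self, key, changes: dict):
        doc = self.select(key)
        doc.update(changes)
        self.insert(key, doc)

    def delete(self, key):
        del self._docs[key]

db = DocumentStore()
db.insert("order:1", {"customer": {"name": "Alice"},
                      "items": [{"sku": "A1", "qty": 2}]})
db.update("order:1", {"status": "shipped"})
print(db.select("order:1")["status"])
```

The nested customer and items fields mirror the "sections" of a physical document, which is what gets documented during the NoSQL data modeling exercise.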
Frage 99
Frage
column-family database
Antworten
-
A ________ is like a relational database that stores data in rows and columns. However, rather than storing a value per column, multiple key-value pairs can be stored inside a single column
-
For ________, the field names of each entity and any sub-fields within each field, along with their data types, are recorded. Also, based on the analysis requirements, it is important to decide between storing data as wide-rows or as tall-columns
-
The ___________ pattern is associated with the data transfer engine (relational), storage device, workflow engine and productivity portal mechanisms
-
In an actual implementation, the __________ consists of Open Source libraries, third-party business intelligence (BI) or analytics software
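The column-family layout, where multiple key-value pairs sit inside a single column family per row, can be sketched with nested dictionaries; the row keys, family names and qualifiers below are illustrative:

```python
# Sketch of a column-family layout: each row key maps to column families,
# and each family holds multiple key-value pairs (qualifier -> value).

table = {}  # row_key -> {family -> {qualifier -> value}}

def put(row_key, family, qualifier, value):
    table.setdefault(row_key, {}).setdefault(family, {})[qualifier] = value

# A "wide row": one user row accumulating many event columns over time
put("user:1", "profile", "name", "Alice")
put("user:1", "profile", "city", "Berlin")
put("user:1", "events", "2024-01-01T10:00", "login")
put("user:1", "events", "2024-01-01T10:05", "search")

print(len(table["user:1"]["events"]))  # columns grow as events arrive
```

The events family illustrates the wide-row choice mentioned above: new columns are appended per event, rather than new rows per event (tall-columns).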
Frage 100
Antworten
-
A ________ stores data in the form of connected entities where each record is called a node or a vertex and the connection between the entities is called the edge, which can be one-way or two-way
-
For ________, apart from documenting the attributes and types of each entity, the possible connections (the edges) between entities are also recorded
-
the _______ pattern is associated with the storage device (distributed file system) and processing engine (batch) mechanisms.
-
The ________ pattern is associated with the query engine, processing engine, storage device, resource manager and coordination engine mechanisms
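The node/edge model can be sketched with plain data structures; the entity and edge labels are illustrative, and the traversal at the end shows how connected entities are followed:

```python
# Sketch of a graph store: records are nodes (vertices) and connections
# are edges, which can be one-way (directed) or two-way.

nodes = {}
edges = []  # (source, target, label)

def add_node(node_id, **attrs):
    nodes[node_id] = attrs

def add_edge(source, target, label, two_way=False):
    edges.append((source, target, label))
    if two_way:
        edges.append((target, source, label))  # store the reverse direction

add_node("alice", kind="person")
add_node("bob", kind="person")
add_node("widget", kind="product")
add_edge("alice", "bob", "knows", two_way=True)  # two-way edge
add_edge("alice", "widget", "purchased")         # one-way edge

# Traversal: which entities is alice connected to?
neighbors = [target for source, target, _ in edges if source == "alice"]
print(neighbors)
```

Recording both node attributes and the possible edge labels up front is exactly the modeling step the answer above describes for graph databases.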
Frage 101
Antworten
-
Before data can be stored in a NoSQL database, a data modelling exercise is generally undertaken
-
However, unlike the relational data modeling activity where entity names, attributes and relationships are documented, the nature of the NoSQL data modeling activity is different due to its non-relational nature and further depends on the type of the NoSQL database
-
In a NoSQL database, the emphasis is more on the structure of the individual aggregate, which is a self-contained record that has no relationships with other records
-
In a Big Data environment, large volume not only refers to tall datasets (a large number of rows) but also to wide datasets (a large number of columns)
Frage 102
Antworten
-
In the case that the stored value conforms to a structure, such as a log file, the field names, along with field type information, are also recorded
-
Apart from different storage devices, this layer also houses serialization and compression engines for storing data in an appropriate format and reducing storage footprint, respectively
-
This generally represents a layered architecture where each top layer makes use of the successive bottom layer
-
Not all of ________ may be required to apply the pattern. Sometimes one or more associated mechanisms may act as an alternative to others for how the pattern is applied
Frage 103
Antworten
-
The ________ provides a range of processing capabilities that play a pivotal role in generating value from a variety of voluminous data arriving at a high velocity in a meaningful time period
-
Apart from the resource manager and coordination engine mechanisms, this layer can contain both the batch and the realtime processing engine mechanisms. However, based on the type of analytics performed, only one processing engine, such as the batch processing engine, may actually be present
-
The opposite transformation process from bytes to objects or data entities is called _________
-
controls the management of the data lifecycle to ensure that quality data is available in a controlled, secure and timely fashion
Frage 104
Antworten
-
Furthermore, the capabilities of this layer indicate the kinds of Big Data solutions that can be built
-
Data Ingress/Egress - a data transfer engine may utilize a processing engine for transferring data
-
This pattern requires the use of a storage device implemented via a document NoSQL database servicing insert, select, update and delete operations. The document NoSQL database generally automatically encodes the data using a binary or a plain-text hierarchical format, such as JSON, before storage
-
However, unlike the relational data modeling activity where entity names, attributes and relationships are documented, the nature of the NoSQL data modeling activity is different due to its non-relational nature and further depends on the type of the NoSQL database
Frage 105
Antworten
-
Data Wrangling - data pre-processing activities, including data validation, cleansing and joining
-
Data Analysis - analytical activities, including querying, exploratory data analysis and model generation
-
This layer can further be divided into the batch and realtime ________
-
The ________ compound pattern represents a part of a Big Data solution environment capable of egressing high-volume, high-velocity and high-variety data out of the Big Data solution environment
Frage 106
Antworten
-
involves a _______ engine that processes large amounts of data stored on a disk-based storage device in batches
-
This is the most common form of data processing employed in a Big Data environment for data wrangling operations, exploring data and developing and executing statistical and machine learning models
-
This layer represents the functionality as required by the Utilization of Analysis Results stage of the Big Data analysis lifecycle
-
the _______ further helps reduce cluster administration overhead and makes diagnosis more efficient
Frage 107
Antworten
-
Due to the nature of its processing, the processing results are not available instantaneously
-
Data is ingested generally via file and/or relational data transfer engines, saved to the disk-based storage device and then processed using a ___________ engine
-
Although the __________ is mainly concerned with policies that guide data management activities, it may further provide functionality for managing other aspects of the Big Data platform
-
Note that in the case of realtime data processing, the ________ also consist of in -memory storage technologies that enable fast analysis of high velocity data as it arrives
Frage 108
Frage
Realtime Processing
Antworten
-
involves a _______ engine that processes continuously arriving data (streams) or data arriving at intervals (events) as it arrives
-
Instead of persisting the data to a disk-based storage device, _________ persists the data to a memory-based storage device
-
As a result, the corresponding architecture of the __________ also gets complicated
-
The architecture of a set of Big Data Mechanisms assembled into a solution
Frage 109
Frage
Realtime Processing
Antworten
-
Although providing instantaneous results, setting up such a capability is not only complex but also expensive due to the reliance on memory-based storage (memory is more expensive than disk)
-
Data is ingested via an event data transfer engine, saved to a memory-based storage device and then processed using a ___________ engine
-
Note that although in-memory storage is initially used, data is also saved to disk-based storage for deep analysis or future use
-
For providing maximum value, a __________ layer should provide low latency, high throughput, high availability and high fault tolerance
Frage 110
Frage
Realtime Processing
Antworten
-
Event Stream Processing (ESP)
-
Complex Event Processing (CEP)
-
During ________, multiple streams or events that generally originate from disparate sources and are spread out over different time intervals are analyzed simultaneously for finding correlations, patterns, anomalous behavior and error conditions
-
A __________ defines the logical components required for the implementation of a Big Data analytics solution
Frage 111
Frage
Realtime Processing
Antworten
-
generally refers to processing event-based data
-
However, the execution of data queries, which require an instant response, on already persisted data acquired via batch import also falls in the domain of _________
-
Not all of ________ may be required to apply the pattern. Sometimes one or more associated mechanisms may act as an alternative to others for how the pattern is applied
-
The application of the __________ pattern further increases the reach of the Big Data solution environment to non-IT users, such as data analysts and data scientists
Frage 112
Frage
Event Stream Processing
Antworten
-
During ________, the incoming stream of data or events, which generally originates from a single source and is ordered by time, is continuously analyzed via the application of algorithms or query execution
-
In simple use cases, ____________ involves data cleansing, transformation and the generation of some statistics, such as sum, mean, min or max, which are then fed to an operational dashboard
-
A ________ is a type of distributed architecture that consists of fully independent machines. The machines each have their own processors, memory, disk and operating system and are networked together as a single system
-
To counter this issue, the dataset is horizontally broken into smaller parts as prescribed by the ___________ pattern
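The horizontal sharding prescribed by the pattern can be sketched on a toy shared-nothing cluster of three "machines"; the hash function and shard count are simplifying assumptions (real systems typically use consistent hashing to ease rebalancing):

```python
# Sketch of horizontal sharding on a shared-nothing cluster: each record
# is routed to an independent machine by hashing its key, so no single
# shard holds the full dataset.

SHARDS = {0: {}, 1: {}, 2: {}}  # three independent "machines"

def shard_for(key: str) -> int:
    # A stable toy hash over the key's bytes
    return sum(key.encode()) % len(SHARDS)

def insert(key, value):
    SHARDS[shard_for(key)][key] = value

def select(key):
    # Only the owning shard is consulted, spreading the access load
    return SHARDS[shard_for(key)].get(key)

for i in range(9):
    insert(f"row-{i}", {"id": i})

# Every row stays retrievable, yet the dataset is spread across shards
print([len(shard) for shard in SHARDS.values()])
```

Because reads and writes for different keys land on different machines, simultaneous access by many users no longer concentrates on a single database, which is the performance problem the pattern addresses.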
Frage 113
Frage
Event Stream Processing
Antworten
-
In complex use cases, statistical or machine learning algorithms with fast execution times can be executed to detect a pattern or an anomaly or to predict the future state
-
Other memory-resident datasets can also be incorporated for performing analytics that provide context-aware results
-
Although the processing results can be directly utilized (a dashboard or an application), they can act as a trigger for another application that performs a preconfigured action, such as making computational adjustments, or further analyses
-
In general, ___________ focuses more on speed than complexity. The operation needs to be executed in a comparatively simple manner in order to aid faster execution. Also, it is easier to set up than CEP but provides less value
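A minimal ESP sketch follows: a single time-ordered stream is processed continuously, emitting simple statistics after each reading; the sensor values are made up, standing in for a live source feeding an operational dashboard:

```python
# Sketch of event stream processing: analyze a single, time-ordered
# stream as it arrives, emitting running statistics per event.

def stream_stats(stream):
    count, total, maximum = 0, 0.0, float("-inf")
    for reading in stream:
        count += 1
        total += reading
        maximum = max(maximum, reading)
        # Emit updated statistics immediately, event by event
        yield {"mean": total / count, "max": maximum}

sensor_stream = [21.0, 22.5, 19.0, 24.0]  # stand-in for a live source
for stats in stream_stats(sensor_stream):
    last = stats  # a dashboard would render each update here

print(last)
```

The generator never waits for the stream to end, mirroring how ESP favors simple, fast per-event operations over complex cross-stream logic.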
Frage 114
Frage
Complex Event Processing
Antworten
-
During ________, multiple streams or events that generally originate from disparate sources and are spread out over different time intervals are analyzed simultaneously for finding correlations, patterns, anomalous behavior and error conditions
-
Like ESP, the objective is to aid in making realtime decisions either automatically or through human intervention the moment data is received
-
In an actual implementation, the __________ consists of Open Source libraries, third-party business intelligence (BI) or analytics software
-
Storing very large datasets that are accessed by a number of users simultaneously can seriously affect the data access performance of the underlying database
Frage 115
Frage
Complex Event Processing
Antworten
-
When compared with ESP, __________ provides more value but is harder to set up, as it involves connecting to multiple data sources and executing complex logic
-
Complex correlation and pattern identification algorithms are applied, and business logic and KPIs are also taken into account for discovering cross-cutting ________ patterns
-
can be considered a superset of ESP. Oftentimes, both approaches can be deployed together such that the synthetic events generated as the output of ESP can become input for __________
-
provides rich analytics. However, due to its complex nature, time-to-insight may be adversely affected
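The cross-stream correlation described above can be illustrated with a toy sketch. The sources, fields and the 5-second window are hypothetical choices for illustration, not prescribed by the pattern; a real CEP engine would apply far richer logic and KPIs.

```python
# An illustrative sketch of Complex Event Processing (CEP): events from
# two disparate sources are correlated within a time window to detect a
# composite condition the moment the data is received.

WINDOW = 5  # seconds within which events are considered correlated

def detect_correlated(stream_a, stream_b, window=WINDOW):
    """Return (key, ts_a, ts_b) triples for events sharing a key within the window."""
    matches = []
    for key_a, ts_a in stream_a:
        for key_b, ts_b in stream_b:
            if key_a == key_b and abs(ts_a - ts_b) <= window:
                matches.append((key_a, ts_a, ts_b))
    return matches

# e.g. payment events and fraud-alert events keyed by account id
payments = [("acct-1", 100), ("acct-2", 103)]
alerts   = [("acct-2", 101), ("acct-3", 120)]

print(detect_correlated(payments, alerts))  # the acct-2 events fall within 5s
```

The extra setup cost of CEP shows even here: two sources must be connected and joined, whereas the ESP case only scanned one stream.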
Frage 116
Antworten
-
The ________ provides an easy-to-interact interface for analyzing data in the storage layer and consists of the query and analytics engines
-
Depending on the type of analytics being performed, this layer may only consist of a query engine, such as in the case of descriptive and diagnostic analytics
-
Due to the contemporary nature of these processing engines and the specialized processing frameworks they follow, programmers may not be conversant with the APIs of each processing engine
-
With a reasonable amount of data acquisition, IT spending only increases slightly with the passage of time. As the amount of acquired data increases exponentially, there is a tendency for IT spending to increase exponentially as well
Frage 117
Antworten
-
However, when performing deep analytics, such as in the case of predictive and prescriptive analytics, an analytics engine also exists
-
This is the layer that converts large amounts of data into information that can be acted upon
-
This layer abstracts the processing layer with a view of making data analysis easier and further increasing the reach of the Big Data platform to data scientists and data analysts
-
Activities supported by this layer include: data cleansing, data mining, exploratory data analysis, preparing data for statistical/machine learning model development, model development, model evaluation and model execution
Frage 118
Antworten
-
The functionality provided by this layer corresponds to the data analysis stage of the Big Data analysis lifecycle
-
In an actual implementation, the __________ consists of Open Source libraries, third-party business intelligence (BI) or analytics software
-
A mechanism may either be implemented when the pattern is applied, or it may be directly affected by the application of the pattern
-
provides rich analytics. However, due to its complex nature, time-to-insight may be adversely affected
Frage 119
Antworten
-
In the case of Open Source libraries, interaction is mostly command-line-based, with a basic graphical user interface (GUI) in some cases
-
However, third-party software provides a GUI with point-and-click functionality for statistical/machine learning model development and other general data querying
-
The ________ compound pattern represents a part of a Big Data solution environment capable of egressing high-volume, high-velocity and high-variety data out of the Big Data solution environment
-
makes use of commodity hardware where machines are generally networked using Local Area Network (LAN) technology
Frage 120
Frage
visualization layer
Antworten
-
The __________ hosts the visualization engine and provides functionality as required by the Data Visualization stage of the Big Data analysis lifecycle
-
Note that the analysis layer may also encompass some level of data visualization features
-
This generally represents a layered architecture where each top layer makes use of the successive bottom layer
-
The ________ pattern is associated with the compression engine, storage device, data transfer engine and processing engine mechanisms
Frage 121
Frage
visualization layer
Antworten
-
However, the nature of such visualizations is different and more analysis-specific
-
Visualizations are generally utilized by data scientists and analysts to help them understand the data and the output of various analysis techniques
-
The visualization features provided by the __________ are mainly geared towards business users so that they can easily interpret the insights obtained from the data analysis exercise
-
However, based on the physical implementation, the third-party tools used at the analysis layer may provide the ability to create visualizations and publish them for enterprise-wide use so that different information workers and business users can turn the published information into knowledge for making informed decisions
Frage 122
Frage
visualization layer
Antworten
-
This layer is also fundamental to the concept of self-service BI, where business users can access enterprise data directly without first requesting it from the IT team, can perform the required analyses and can create the required reports and dashboards themselves
-
To ensure longevity of the Big Data analytics platform, the compatibility of the visualization engine, with respect to the types of data sources it can connect to, needs to be assessed, as the analysis results are normally persisted to a storage device, such as a NoSQL database
-
The ___________ provides an opportunity for developing an understanding of the analysis results in a graphical manner
-
is the process of transforming objects or data entities into bytes for persistence (in memory or on disk) or transportation from one machine to another over a network
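The serialization definition above can be demonstrated briefly. Python's built-in pickle module is used here purely as a stand-in serialization engine; the record contents are hypothetical.

```python
# A brief illustration of serialization: an object is transformed into
# bytes for persistence (in memory or on disk) or for transportation to
# another machine over a network, and then restored from those bytes.
import pickle

record = {"sensor": "t-101", "readings": [21.5, 21.7, 21.4]}

payload = pickle.dumps(record)        # object -> bytes
restored = pickle.loads(payload)      # bytes  -> object

assert restored == record
print(type(payload).__name__)         # the serialized form is plain bytes
```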
Frage 123
Antworten
-
However, if the maximum benefit is to be gained from these results, they need to be incorporated into the enterprise in some shape and form
-
The __________ provides the analysis results so that an enterprise can take advantage of an opportunity or mitigate a risk in a proactive manner
-
The query engine provides an easy-to-interact-with interface where the user specifies a script that is automatically converted to low-level API calls for the required processing engine
-
The resulting architecture is known as the __________, which includes the architecture of the Big Data solution, any connected enterprise systems and integration components
Frage 124
Antworten
-
This layer represents the functionality as required by the Utilization of Analysis Results stage of the Big Data analysis lifecycle
-
The functionality provided by the ____________ will vary based on the utilization pattern of the analysis results
-
This includes exporting results to dashboard and alerting applications (online portal), operational systems (CRM, SCM, ERP and e-commerce systems) and automated business processes (Business Process Execution Language-based processes)
-
At other times, data products are provided that enable the generation of computational results, such as outlier detection, recommendations, predictions or scores that can be used to optimize business operations
Frage 125
Antworten
-
In almost all cases, one or more data transfer engines are present that enable the export of the analysis results from the storage device(s) to downstream systems or applications
-
To automate the entire export process, a workflow engine that resides in the management layer is used in combination with a data transfer engine
-
This enables automatic access to the analysis results stored in the disk-based or memory-based storage devices
-
Although the conversion to a structured form may not seem obvious in the case of applying semantic analysis techniques, even techniques such as text analytics first convert a document into a structured form before performing clustering, classification or searching
Frage 126
Antworten
-
the _________ is tasked with the automated and continuous monitoring as well as maintenance of a Big Data platform for ensuring its operational integrity
-
The functionality supported by this layer relates to the operational requirements of a Big Data platform, including cluster setup, cluster expansion, system and software upgrades across the cluster and fault diagnosis and health monitoring of the cluster
-
A __________ defines the logical components required for the implementation of a Big Data analytics solution
-
the __________ provides functionality that ensures that the storage and access to data within the Big Data platform are managed throughout the lifespan of the data
Frage 127
Antworten
-
The ________ achieves the provisioning of the required functionality by hosting a cluster manager
-
For health monitoring purposes, the cluster manager gathers metrics from various components running within different layers, such as the storage, processing and analysis layers, and displays their current status using a dashboard
-
For cluster maintenance tasks, such as adding a new node to the cluster, taking a node offline or installing a new service, and for disaster ___________ tasks, a graphical user interface (GUI) is used
-
Due to the requirement of integrating with multiple types of components, an interoperable and extensible cluster manager should be chosen
Frage 128
Antworten
-
This ensures continuous monitoring and _________ of the Big Data platform, as new components are added across multiple layers in response to the ever-changing analytics requirements of an enterprise
-
Apart from operational ________, this layer also provides data processing and data __________ functionality through workflow engine and productivity portal mechanisms
-
The _______ pattern is normally applied together with the Large-Scale Batch Processing pattern as part of a complete solution
-
The ________ provides a range of processing capabilities that play a pivotal role in generating value from a variety of voluminous data arriving at a high velocity in a meaningful time period
Frage 129
Antworten
-
The __________ is responsible for securing various components operating within other layers of the Big Data platform
-
This layer provides functionality for authentication, authorization and confidentiality via the encryption of at-rest and in-motion data
-
Storage of unstructured data generally involves access scenarios where partial updates of data are not performed and specific data items (records) are always accessed in their entirety, such as an image or user session data
-
Instead of persisting the data to a disk-based storage device, _________ persists the data to a memory-based storage device
Frage 130
Antworten
-
As illustrated in the upcoming diagram, the ____________ houses the security engine. The security features provided by this layer are primarily used to secure the data acquisition layer, storage layer, processing layer and analysis layer
-
This layer provides functionality for authoring, applying and managing security policies as well as monitoring resource access via auditing
-
Databases based on ______ architecture generally use high-end hardware and a proprietary interconnect to link machines in order to enable the throughput required for high-speed analytics
-
A _________ is a tool with features for performing a range of common ___________ tasks in a centralized manner
Frage 131
Antworten
-
One of the main objectives of this layer is to ensure that only the intended user with the correct access level can access the requested resources, such as the storage device or the processing engine
-
The _________ intercepts access requests, which are made from enterprise applications and systems using different security schemes, for resources within the Big Data platform
-
By acting as an intermediary, the __________ provides seamless access to the Big Data platform in a secure manner without the need for custom integration
-
A documentation of the _________ further helps to ascertain which maturity level of the analytics the enterprise is currently at
Frage 132
Antworten
-
the __________ provides functionality that ensures that the storage and access to data within the Big Data platform are managed throughout the lifespan of the data
-
the __________ achieves this functionality via the data _________ manager
-
The _________ compound pattern represents a fundamental solution environment comprised of a processing ______ with data ingress, storage, processing and egress capabilities
-
A ________ stores data in the form of connected entities where each record is called a node or a vertex and the connection between the entities is called the edge, which can be one-way or two-way
Frage 133
Antworten
-
It helps define policies for: acquiring data from internal and external sources, which fields need to be anonymized/removed/encrypted, what constitutes personally identifiable information, how processed data should be persisted, the publication of the analytics' results and how long the data should be stored
-
Although the __________ is mainly concerned with policies that guide data management activities, it may further provide functionality for managing other aspects of the Big Data platform
-
what kind of encryption should be used for data at-rest and in-motion
-
the integration of new components or tools within the Big Data platform
Frage 134
Antworten
-
the acquisition of new hardware resources
-
the evolution of the Big Data platform
-
A ________ stores data in the form of connected entities where each record is called a node or a vertex and the connection between the entities is called the edge, which can be one-way or two-way
-
This prior knowledge about the structure of the data makes ___________ databases very fast at querying large datasets
Frage 135
Antworten
-
A __________ is a set of technologies that collectively provide Big Data storage and processing capabilities
-
Instead of directly using the data transfer engine, it can be indirectly invoked via a productivity portal which normally denotes ad-hoc usage
-
A ___________ is a data-driven workflow consisting of tasks, where each task involves an input, an operation and an output
-
To achieve low latency data access, a memory-based storage device can be used instead; however, this increases the cost of setting up the Big Data platform
Frage 136
Antworten
-
A ___________ is a data-driven workflow consisting of tasks, where each task involves an input, an operation and an output
-
Each _________ consists of multiple tasks joined together in a sequential manner such that the output of the previous task becomes the input of the next task. Such a combination of tasks denotes a single stage
-
A documentation of the _________ further helps to ascertain which maturity level of the analytics the enterprise is currently at
-
In such a situation, the _________ pattern can be applied, which requires dividing the ________ into multiple simple steps. This is executed over multiple processing runs
Frage 137
Antworten
-
The _________ compound pattern represents a fundamental solution environment comprised of a processing ______ with data ingress, storage, processing and egress capabilities
-
A __________ can be very simple, consisting of a single stage or very complex, consisting of multiple stages
-
Data flowing in at high speeds needs to be captured instantly so that it can be processed without any delay for obtaining maximum value
-
This is because a distributed file system provides the most inexpensive form of storing large volumes of data
Frage 138
Antworten
-
The entire set of activities, from data ingestion to data egress, can be thought of as a _______ involving a range of operations from data cleansing to the computation of a statistic
-
Depending on the required functionality, a _______ represents a partial or a complete Big Data solution in support of Big Data analysis
-
Poly Source
-
Poly Storage
Frage 139
Frage 140
Antworten
-
The __________ compound pattern represents a part of a Big Data solution environment capable of ingesting high-volume and high-velocity data from a range of structured, unstructured and semi-structured data sources
-
Relational Source (core)
-
An __________ represents the design and structure of a complete software system that can be deployed on its own
-
This generally represents a layered architecture where each top layer makes use of the successive bottom layer
Frage 141
Frage 142
Antworten
-
In a Big Data solution environment, quite often data needs to be imported from relational databases into the Big Data platform for various data analysis tasks
-
this can be enabled through the application of the ___________ design pattern, which involves the use of a relational data transfer engine
-
such storage devices implement functionality that automatically creates replicas of a dataset and copies them on multiple machines
-
Although expensive, _________ databases provide atomicity, consistency, isolation and durability (ACID) compliance while supporting the querying of data using Structured Query Language (SQL)
Frage 143
Antworten
-
a relational data transfer engine is used to extract data from the relational database based on an SQL query that internally uses connectors for connecting to different relational databases
-
The ___________ design pattern is generally applied when data needs to be extracted from internal OLTP systems, operational systems, such as CRM, ERP and SCM systems, or data warehouses
-
The ___________ pattern is associated with the data transfer engine (relational), storage device, workflow engine and productivity portal mechanisms
-
A productivity portal normally encapsulates a relational data transfer engine for point-and-click import
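The query-driven import described above can be sketched as follows. SQLite stands in here for an internal OLTP or operational system, and the table and query are hypothetical; an actual relational data transfer engine would use database-specific connectors.

```python
# A hypothetical sketch of the relational import design pattern: an SQL
# query extracts records from a relational source so they can be loaded
# into the Big Data platform for analysis.
import sqlite3

conn = sqlite3.connect(":memory:")        # stands in for a CRM/ERP/SCM system
conn.execute("CREATE TABLE orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [(1, 9.5), (2, 20.0)])

# The relational data transfer engine would run a query like this
# internally, via a connector for the specific relational database:
rows = conn.execute("SELECT id, amount FROM orders WHERE amount > 10").fetchall()

print(rows)  # only the records matching the extraction query are imported
```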
Frage 144
Antworten
-
Finding hidden insights generally involves analyzing unstructured data, such as textual data, from internal as well as external data sources
-
Acquisition of large amounts of unstructured data from a variety of data sources can be automated through the application of the _____________ design pattern
-
is the process of compacting data in order to reduce its size, whereas decompression is the process of uncompacting data in order to bring the data back to its original size
-
the __________ provides functionality that ensures that the storage and access to data within the Big Data platform are managed throughout the lifespan of the data
Frage 145
Antworten
-
A file data transfer engine is generally used to implement this design pattern, which can further be encapsulated via the productivity portal
-
Apart from textual files, images, audio and video files can also be imported through the application of this design pattern
-
Note that the ____________ pattern also covers the acquisition of semi-structured data, such as XML or JSON-formatted data
-
The ___________ pattern is associated with the data transfer engine (file), storage device, workflow engine and productivity portal mechanisms
Frage 146
Antworten
-
Data flowing in at high speeds needs to be captured instantly so that it can be processed without any delay for obtaining maximum value
-
The _________ pattern is primarily implemented by using an event data transfer engine that is built on a publish-subscribe model and further uses a queue to ensure availability and reliability
-
A _________ can also be considered a superset of the traditional data architecture, as the former includes the development of data architecture for both raw and processed data
-
However, based on the physical implementation, the third-party tools used at the analysis layer may provide the ability to create visualizations and publish them for enterprise-wide use so that different information workers and business users can turn the published information into knowledge for making informed decisions
Frage 147
Antworten
-
The _________ pattern covers both human and machine-generated data and deals exclusively with unstructured and semi-structured data
-
The Realtime Access Storage design pattern is often applied in combination with the __________ design pattern when high-velocity data needs to be analyzed in realtime
-
Based on the supported feature set, an event data transfer engine may provide some level of in-flight data cleansing and simple statistic computation, such as count, min, max functionality
-
The _________ pattern is associated with the data transfer engine (event), storage device (in-memory) and productivity portal mechanisms
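The publish-subscribe ingestion with a queue, as described above, can be illustrated with a small sketch. The event fields and handler are hypothetical; a production event data transfer engine would distribute the queue across machines for availability and reliability.

```python
# A minimal sketch of queue-backed publish-subscribe ingestion: producers
# publish events to a queue, which buffers them so none are lost if a
# consumer briefly falls behind, and subscribers consume them in order.
from queue import Queue

queue = Queue()        # buffers events between producers and consumers
subscribers = []

def subscribe(handler):
    subscribers.append(handler)

def publish(event):
    queue.put(event)   # capture instantly; processing can happen later

def drain():
    # Deliver every buffered event to all registered subscribers.
    while not queue.empty():
        event = queue.get()
        for handler in subscribers:
            handler(event)

received = []
subscribe(received.append)
publish({"type": "click", "ts": 1})
publish({"type": "click", "ts": 2})
drain()
print(len(received))   # both events were buffered and then delivered
```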
Frage 148
Antworten
-
The _________ compound pattern represents a part of a Big Data solution environment capable of storing high-volume, high-velocity and high-variety data for both streaming and random access
-
Random Access Storage (core)
-
Furthermore, the storage device automatically detects when a replica becomes unavailable and recreates the lost replica from one of the available replicas
-
Usual data processing techniques employed within a ___________ include data sharding and replication where large datasets are divided and replicated across multiple machines
Frage 149
Antworten
-
Streaming Access Storage (core)
-
Realtime Access Storage (core)
-
Automatic Data Replication and Reconstruction (core)
-
Data Size Reduction (optional)
Frage 150
Antworten
-
Cloud-based Big Data Storage (optional)
-
Confidential Data Storage (optional)
-
High Volume Binary Storage
-
Visualization Engine
Frage 151
Frage
Automatic Data Replication and Reconstruction
Antworten
-
A Big Data platform generally consists of a cluster environment built using commodity hardware, which increases the chances of hardware failure
-
An entire dataset can be lost if the machine that saves the dataset becomes unavailable due to a hardware failure
-
With a ___________ architecture, in order to cope with greater resource demands for CPU and/or disk space, the only option is to scale up by replacing existing machines with higher-end (expensive) machines. Scaling up allows more processing and offers greater storage
-
For health monitoring purposes, the cluster manager gathers metrics from various components running within different layers, such as the storage, processing and analysis layers, and displays their current status using a dashboard
Frage 152
Frage
Automatic Data Replication and Reconstruction
Antworten
-
To make sure that data is not lost and clients can still have access if there are hardware failures, the __________ pattern can be applied, which requires the use of either a distributed file system or a NoSQL database
-
such storage devices implement functionality that automatically creates replicas of a dataset and copies them on multiple machines
-
Based on the supported feature set, an event data transfer engine may provide some level of in-flight data cleansing and simple statistic computation, such as count, min, max functionality
-
Note that the ____________ pattern also covers the acquisition of semi-structured data, such as XML or JSON-formatted data
Frage 153
Frage
Automatic Data Replication and Reconstruction
Antworten
-
Furthermore, the storage device automatically detects when a replica becomes unavailable and recreates the lost replica from one of the available replicas
-
The __________ pattern is also applied whenever the Dataset Decomposition and Automatic Data Sharding patterns are applied
-
The __________ is associated with the storage device mechanism
-
Dataset Type: the underlying format of the data produced by the source (structured, unstructured or semi-structured)
Frage 154
Frage
Data Size Reduction
Antworten
-
In a Big Data solution environment where large amounts of data get accumulated in a short amount of time, storing data in its raw form can quickly consume available storage and may require the continuous addition of storage devices to keep increasing storage capacity
-
Also, the requirements of keeping all data online and maintaining redundant storage for fault-tolerance entail more storage space
-
Although the __________ is mainly concerned with policies that guide data management activities, it may further provide functionality for managing other aspects of the Big Data platform
-
To make sense of large amounts of data and to perform exploratory data analysis in support of finding meaningful insights, it is important to correctly interpret the results obtained from data analysis
Frage 155
Frage
Data Size Reduction
Antworten
-
The _________ pattern is applied in these situations to reduce the storage footprint of data, make data transfer faster and decrease data storage cost
-
The application of the __________ pattern mainly requires the use of a compression engine
-
Although it reduces the storage footprint, the application of this pattern can increase the overall processing time, as data first needs to be decompressed. Hence, an efficient compression engine needs to be employed
-
The ________ pattern is associated with the compression engine, storage device, data transfer engine and processing engine mechanisms
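The storage-footprint tradeoff described above can be demonstrated with a small sketch. Python's built-in zlib is used here only as a stand-in compression engine, and the repetitive sample data is hypothetical.

```python
# A minimal illustration of data size reduction: compression shrinks the
# storage footprint before persistence, at the cost of a decompression
# step before the data can be processed again.
import zlib

raw = b"timestamp,sensor,value\n" * 10_000   # highly repetitive log data

compressed = zlib.compress(raw)              # applied before storage
restored = zlib.decompress(compressed)       # required again before processing

assert restored == raw                       # compression is lossless
print(len(raw), len(compressed))             # the compressed form is far smaller
```

Repetitive machine-generated data like logs compresses especially well, which is why the pattern can keep IT spending from growing in step with acquired data.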
Frage 156
Frage
Data Size Reduction
Antworten
-
With a reasonable amount of data acquisition, IT spending only increases slightly with the passage of time. As the amount of acquired data increases exponentially, there is a tendency for IT spending to increase exponentially as well
-
However, the storage capacity does not need to be increased proportionally if a data compression engine is introduced. As a result, IT spending only increases slightly
-
This prior knowledge about the structure of the data makes ___________ databases very fast at querying large datasets
-
The complete data processing cycle in Big Data environments consists of a number of activities, from data ingress to the computation of results and data egress
Frage 157
Frage
Random Access Storage
Antworten
-
The _________ compound pattern represents a part of a Big Data solution environment capable of storing high-volume and high-variety data and making it available for indexed, ___________
-
Big Data solutions demand opposing access requirements when it comes to raw versus processed data
-
Complex ___________ may involve more than one data pipeline, for example, one for realtime data processing and the other for batch data processing
-
the _______ further helps reduce cluster administration overhead and makes diagnosis more efficient
Frage 158
Frage
Random Access Storage
Antworten
-
Although raw data is normally accessed in a sequential manner, processed data requires non-sequential access such that specific records, identified via a key or a field, can be accessed individually
-
To enable random write and read of data, the __________ pattern can be applied, which involves the use of a storage device in the form of a NoSQL database
-
High Volume Binary Storage
-
High Volume Tabular Storage
Frage 159
Frage
Random Access Storage
Frage 160
Frage
High Volume Binary Storage
Antworten
-
Storage of unstructured data generally involves access scenarios where partial updates of data are not performed and specific data items (records) are always accessed in their entirety, such as an image or user session data
-
Such data can be treated as a BLOB that is only accessible via a unique key
-
By acting as an intermediary, the ________ provides seamless access to the Big Data platform in a secure manner without the need for custom integration
-
Note that some level of post-processing may be required to put the file in the required format before it can be copied over the target location
Frage 161
Frage
High Volume Binary Storage
Antworten
-
To provide efficient storage of such data, the ____________ pattern can be applied to stipulate the use of a storage device in the form of a key-value NoSQL database that services insert, select and delete operations
-
To achieve low latency data access, a memory-based storage device can be used instead; however, this increases the cost of setting up the Big Data platform
-
The ___________ pattern is associated with the storage device (key-value) and serialization engine mechanisms
-
The __________ is the underlying technology architecture that supports the execution of multiple Big Data solutions
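The key-value access model stipulated above can be illustrated with a toy in-memory store. The class and key names are hypothetical; a real key-value NoSQL database would distribute these operations across a cluster.

```python
# A toy illustration of the key-value storage model: BLOBs are reachable
# only via a unique key, through insert, select and delete operations,
# with no partial updates of a record.
class KeyValueStore:
    def __init__(self):
        self._data = {}

    def insert(self, key, blob: bytes):
        self._data[key] = blob            # the whole record is written at once

    def select(self, key) -> bytes:
        return self._data[key]            # the whole record is read at once

    def delete(self, key):
        del self._data[key]


store = KeyValueStore()
store.insert("user-42/avatar.png", b"\x89PNG...")   # stands in for an image
blob = store.select("user-42/avatar.png")
store.delete("user-42/avatar.png")
print(blob)
```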
Frage 162
Frage
High Volume Tabular Storage
Antworten
-
In Big Data environments, large volume not only refers to tall datasets (a large number of rows) but also to wide datasets (a large number of columns)
-
In some cases, each column may itself contain a number of other columns
-
In a clustered environment, this can become cumbersome and difficult to maintain, especially where data is accessed across the enterprise with varying levels of authorization
-
The _________ pattern is associated with the data transfer engine (event), storage device (in-memory) and productivity portal mechanisms
Frage 163
Frage
High Volume Tabular Storage
Antworten
-
A relational database cannot be used in such circumstances due to a limit on columns and the inability to store more than one value in a column
-
The ___________ pattern can be applied to store such data, which stipulates the use of a storage device implemented via a column-family NoSQL database servicing insert, select, update and delete operations
-
The functionality supported by this layer relates to the operational requirements of a Big Data platform, including cluster setup, cluster expansion, system and software upgrades across the cluster and fault diagnosis and health monitoring of the cluster
-
The term logical emphasizes the fact that the description of the architecture does not bear any resemblance to the physical implementation of the system
Frage 164
Frage
High Volume Tabular Storage
Antworten
-
The use of a column-family database enables storing data in more traditional, table-like storage, where each record may further consist of logical groups of fields that are generally accessed together
-
This pattern is associated with the storage device (column-family) and serialization engine mechanisms
-
a file data transfer engine is used directly or indirectly through the productivity portal for ad-hoc exports
-
Therefore ______ is ideal for processing semi-structured and unstructured data in support of executing analytical queries
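The column-family layout described above can be modeled with a small sketch. The row key, family and column names are hypothetical; a real column-family NoSQL database would persist and distribute this structure.

```python
# A toy model of column-family storage: each record (row key) groups its
# columns into families that are generally accessed together, and rows
# may hold arbitrarily many columns with no fixed schema.
table = {}

def upsert(row_key, family, columns):
    # Insert or update individual columns inside one family of one row.
    table.setdefault(row_key, {}).setdefault(family, {}).update(columns)

def select(row_key, family):
    # Read one logical group of fields without touching other families.
    return table.get(row_key, {}).get(family, {})

upsert("user-1", "profile",  {"name": "Ada", "city": "London"})
upsert("user-1", "activity", {"login_count": 7, "last_page": "/home"})
upsert("user-1", "profile",  {"city": "Zurich"})     # update a single column

print(select("user-1", "profile"))
```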
Frage 165
Frage
High Volume Linked Storage
Antworten
-
One prominent area within the field of pattern identification is the analysis of connected entities. Due to the large volume of data in Big Data environments, efficient and timely analysis of such data requires specialized storage
-
The _________ pattern can be applied to store data consisting of linked entities. This pattern is typically implemented via the use of a storage device based on a graph NoSQL database that enables defining relationships between entities
-
However, a __________ goes beyond data curation and includes data processing, analysis and visualization technologies as well
-
The _________ pattern is associated with the processing engine, storage device, workflow engine, resource manager and coordination engine mechanisms
Frage 166
Frage
High Volume Linked Storage
Antworten
-
The use of graph NoSQL databases enables finding clusters of connected entities among a very large set of entities, investigating if entities are connected together or calculating distances between entities
-
The ________ pattern is associated with the storage device (graph) and serialization engine mechanisms
-
The ________ pattern is associated with the query engine, processing engine, storage device, resource manager and coordination engine mechanisms
-
Due to the nature of the deployed processing engine, it may not be possible to execute the entire logic as a single processing run. Even if it were possible to do so, the testing, debugging and maintenance of the logic may become difficult
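The connected-entity queries mentioned above can be sketched over a small adjacency list. The node names and edges are hypothetical; a graph NoSQL database would run such traversals over a very large, persisted set of entities.

```python
# An illustrative sketch of linked storage queries: entities are nodes,
# connections are edges, and a breadth-first search determines whether
# two entities are connected and at what distance (number of hops).
from collections import deque

edges = {  # two-way edges between entities
    "alice": ["bob"],
    "bob": ["alice", "carol"],
    "carol": ["bob"],
    "dave": [],
}

def distance(start, goal):
    """Return the number of hops between two nodes, or None if unconnected."""
    seen, queue = {start}, deque([(start, 0)])
    while queue:
        node, hops = queue.popleft()
        if node == goal:
            return hops
        for neighbor in edges.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, hops + 1))
    return None

print(distance("alice", "carol"))  # connected via bob
print(distance("alice", "dave"))   # no connecting path exists
```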
Frage 167
Frage
High Volume Hierarchical Storage
Antworten
-
Semi-structured data conforming to a nested schema often requires storage in a way such that the schema structure is maintained and sub-sections of a particular data item (record) can be individually accessed and updated
-
The ________ pattern can be applied in circumstances where data represents a document-like structure that is self-describing and access to individual elements of data is required
-
A _______ may also integrate with enterprise identity and access management (IAM) systems to enable single sign-on (SSO)
-
A __________ can be defined for varying levels of software artifacts, ranging from a single software library to the set of software systems across the entire IT enterprise
Frage 168
Frage
High Volume Hierarchical Storage
Antworten
-
This pattern requires the use of a storage device implemented via a document NoSQL database servicing insert, select, update and delete operations. The document NoSQL database generally automatically encodes the data using a binary or a plain-text hierarchical format, such as JSON, before storage
-
The ________ pattern is associated with the storage device (document) mechanisms
-
This includes exporting results to dashboard and alerting applications (online portal), operational systems (CRM, SCM, ERP and e-commerce systems) and automated business processes (Business Process Execution Language-based processes)
-
The _______ pattern is normally applied together with the Automatic Data Replication and Reconstruction pattern so that shards are not lost in the case of a hardware failure and so that the database remains available
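To make the insert/select/update/delete operations and the JSON encoding concrete, here is a toy Python sketch of a document store. The `DocumentStore` class and the sample documents are illustrative inventions, not any real database's API.

```python
import json

# Minimal stand-in for a document NoSQL store: documents are hierarchical
# records that are encoded to JSON text before being stored.
class DocumentStore:
    def __init__(self):
        self._docs = {}

    def insert(self, doc_id, document):
        self._docs[doc_id] = json.dumps(document)

    def select(self, doc_id):
        return json.loads(self._docs[doc_id])

    def update(self, doc_id, path, value):
        # Update a single nested element without touching the rest.
        doc = self.select(doc_id)
        node = doc
        for key in path[:-1]:
            node = node[key]
        node[path[-1]] = value
        self.insert(doc_id, doc)

    def delete(self, doc_id):
        del self._docs[doc_id]

store = DocumentStore()
store.insert("c1", {"name": "Acme", "address": {"city": "Berlin", "zip": "10115"}})
store.update("c1", ["address", "city"], "Hamburg")
```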
Frage 169
Frage
Automatic Data Sharding
Antworten
-
Storing very large datasets that are accessed by a number of users simultaneously can seriously affect the data access performance of the underlying database
-
To counter this issue, the dataset is horizontally broken into smaller parts as prescribed by the ___________ pattern
-
The term logical emphasizes the fact that the description of the architecture does not bear any resemblance to the physical implementation of the system
-
the distributed file system automatically splits a large dataset into multiple smaller datasets that are then spread across the cluster
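The horizontal split can be sketched as hash-based routing of rows to shards by a configurable shard-key field. The shard count, field names and `put`/`get` helpers below are all hypothetical.

```python
# Hash-based sharding sketch: each row is routed to one of NUM_SHARDS
# shards by hashing a configurable shard-key field.
NUM_SHARDS = 4
shards = [dict() for _ in range(NUM_SHARDS)]

def shard_for(key):
    return hash(key) % NUM_SHARDS  # stable within a single process

def put(row, shard_key="customer_id"):
    shards[shard_for(row[shard_key])][row[shard_key]] = row

def get(key):
    return shards[shard_for(key)].get(key)

# Load some sample rows; they end up spread across the shards.
for i in range(100):
    put({"customer_id": f"c{i}", "spend": i * 10})
```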
Frage 170
Frage
Automatic Data Sharding
Antworten
-
this pattern is enabled via a NoSQL database that automatically creates shards based on a configurable field in the dataset and stores the shards across different machines in a cluster
-
As the dataset is distributed across multiple shards, the query completion time may be affected if the query requires collating data from more than one shard
-
Such a solution environment represents a set of multiple Big Data mechanisms that collectively provide the required business functionality
-
The functionality provided by this layer corresponds to the data analysis stage of the Big Data analysis lifecycle
Frage 171
Frage
Automatic Data Sharding
Antworten
-
The _______ pattern is normally applied together with the Automatic Data Replication and Reconstruction pattern so that shards are not lost in the case of a hardware failure and so that the database remains available
-
The _______ pattern is associated with the storage device mechanism
-
Based on the supported feature set, an event data transfer engine may provide some level of in-flight data cleansing and simple statistics computation, such as count, min and max functionality
-
To make sure that data is not lost and clients can still have access if there are hardware failures, the __________ pattern can be applied, which requires the use of either a distributed file system or a NoSQL database
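The replication idea, keeping each record on more than one machine so that a hardware failure loses nothing, can be sketched as follows. The node names, replica count and helper functions are invented for illustration.

```python
# Replication sketch: every record is written to REPLICAS distinct
# nodes so that a single hardware failure loses no data.
REPLICAS = 2
nodes = {n: {} for n in ("node-a", "node-b", "node-c")}

def replica_nodes(key):
    names = sorted(nodes)
    start = hash(key) % len(names)
    return [names[(start + i) % len(names)] for i in range(REPLICAS)]

def put(key, value):
    for name in replica_nodes(key):
        nodes[name][key] = value

def get(key, failed=()):
    # Read from the first surviving replica.
    for name in replica_nodes(key):
        if name not in failed and key in nodes[name]:
            return nodes[name][key]
    raise KeyError(key)

put("order-1", {"total": 99})
```

Even if the first replica's node fails, `get("order-1", failed=(...,))` still returns the record from a surviving replica.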
Frage 172
Frage
Streaming Access Storage
Antworten
-
The _______ compound pattern represents a part of a Big Data solution environment capable of storing high-volume and high-variety data and making it available for stream access
-
A large proportion of data processing tasks in Big Data involves acquiring and processing data in batches. When processing data in batches, sequential access to data is critical to timely processing. Therefore a storage device does not need to provide random access to the data, but rather ___________
-
This helps with establishing the scalability requirements of each Big Data mechanism and determining any potential performance bottlenecks
-
Once the limit is reached, the only option is to scale out. Scaling out is a Big Data processing requirement that is not supported by _________
Frage 173
Frage
Streaming Access Storage
Frage 174
Antworten
-
The _______ pattern can be applied in a scenario whereby data needs to be retrieved in a streaming or sequential manner
-
The application of this design pattern requires the use of a storage device that provides non-random write and read capabilities and is generally implemented via a distributed file system
-
However, with the passage of time, as more Big Data solutions are built and their complexity increases, additional Big Data mechanisms are introduced
-
Although raw data is normally accessed in sequential manner, processed data requires non-sequential access such that specific records, identified via a key or a field, can be accessed individually
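Sequential (streaming) access as described above can be illustrated with a generator that only ever reads forward in fixed-size blocks; the block size and sample bytes are arbitrary.

```python
import io

# Sequential-access sketch: the reader only moves forward through the
# data in fixed-size blocks, the access style a distributed file
# system is optimized for (no random seeks).
def stream_blocks(fileobj, block_size=4):
    while True:
        block = fileobj.read(block_size)
        if not block:
            break
        yield block

data = io.BytesIO(b"abcdefghij")
blocks = list(stream_blocks(data))
```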
Frage 175
Antworten
-
The _______ pattern is normally applied together with the Large-Scale Batch Processing pattern as part of a complete solution
-
the _______ pattern is associated with the storage device (distributed file system) and processing engine (batch) mechanisms.
-
Based on the type and location of the data sources, this layer may consist of more than one data transfer engine mechanism
-
This layer abstracts the processing layer with a view of making data analysis easier and further increasing the reach of the Big Data platform to data scientists and data analysts
Frage 176
Frage
Dataset Decomposition
Antworten
-
Storing large datasets as a single file does not lend itself to the distributed processing technologies deployed within the Big Data solution environment
-
Distributed processing technologies work on the principle of divide-and-conquer, requiring a dataset to be available as parts across the cluster
-
The _______ compound pattern represents a part of a Big Data solution environment capable of storing high-volume and high-variety data and making it available for stream access
-
Although reducing storage footprint, the application of this pattern can increase the overall processing time, as data first needs decompressing. Hence, an efficient compression engine needs to be employed
Frage 177
Frage
Dataset Decomposition
Antworten
-
This can be achieved through the application of the __________ pattern, which requires the use of a distributed file system storage device
-
the distributed file system automatically splits a large dataset into multiple smaller datasets that are then spread across the cluster
-
The _________ pattern is associated with the storage device and processing engine mechanisms
-
This pattern is associated with the storage device (column-family) and serialization engine mechanisms
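A minimal sketch of dataset decomposition, assuming a simple fixed-size split: the dataset is cut into parts that could then be spread across cluster machines. The part size and machine names are placeholders.

```python
# Dataset decomposition sketch: a large dataset is split into
# fixed-size parts that can be placed on different cluster machines.
def decompose(records, part_size):
    return [records[i:i + part_size] for i in range(0, len(records), part_size)]

dataset = list(range(10))
parts = decompose(dataset, 4)

# Hypothetical placement of the parts across a three-machine cluster.
placement = {f"machine-{i % 3}": part for i, part in enumerate(parts)}
```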
Frage 178
Frage
Big Data Processing Environment
Antworten
-
The ___________ compound pattern represents a part of a Big Data solution environment capable of handling the range of distinct requirements of large-scale Big Data dataset processing
-
Large-Scale Batch Processing (core)
-
In some cases, each column may itself contain a number of other columns
-
Additionally, a __________ may encapsulate a visualization engine in order to provide more meaningful, graphical views of data
Frage 179
Frage
Big Data Processing Environment
Frage 180
Frage
Big Data Processing Environment
Frage 181
Frage
Big Data Processing Environment
Antworten
-
Automated Processing Metadata Insertion (optional)
-
Complex Logic Decomposition (optional)
-
Cloud-based Big Data Processing (optional)
-
High Volume Tabular Storage
Frage 182
Frage
Large-Scale Batch Processing
Antworten
-
One of the main differentiating characteristics of Big Data environments when compared with traditional data processing environments is the sheer amount of data that needs to be processed
-
Efficient processing of large amounts of data demands an offline processing strategy, as dictated by the ___________ design pattern
-
the ______ supports the deployment of new services and the addition of nodes to a cluster
-
Note that in the case of real-time data processing, the ________ also consist of in-memory storage technologies that enable fast analysis of high velocity data as it arrives
Frage 183
Frage
Large-Scale Batch Processing
Antworten
-
The application of the __________ pattern enforces processing of the entire dataset as a single processing run, which requires that the batch of data is amassed first in a storage device and only then processed using a batch processing engine, such as MapReduce
-
Although computed results are not immediately available, the application of this pattern enables a simple data processing solution, providing maximum throughput
-
A __________ defines the logical components required for the implementation of a Big Data analytics solution
-
A _______ may also integrate with enterprise identity and access management (IAM) systems to enable single sign-on (SSO)
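The single-run batch strategy can be sketched as a miniature MapReduce-style pass over an amassed batch: map emits key-value pairs, shuffle groups them by key, and reduce aggregates each group. The customer/amount records are invented sample data.

```python
from collections import defaultdict

# Minimal batch run: the whole batch is amassed first, then processed
# offline in one map -> shuffle -> reduce pass.
def map_phase(batch):
    for record in batch:
        yield record["customer"], record["amount"]

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {key: sum(values) for key, values in groups.items()}

batch = [
    {"customer": "ann", "amount": 30},
    {"customer": "bob", "amount": 20},
    {"customer": "ann", "amount": 50},
]
totals = reduce_phase(shuffle(map_phase(batch)))
```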
Frage 184
Frage
Large-Scale Batch Processing
Antworten
-
In the case of continuously arriving data, the data is first accumulated to create a batch and only then processed
-
This design pattern is generally applied together with the Stream Access Storage pattern
-
The _________ pattern is associated with the processing engine (batch), data transfer engine (relational/file), storage device (disk-based), resource manager and coordination engine mechanisms
-
the set of operations that need to be performed on the data is specified as a flowchart that is then automatically executed by the workflow engine at set intervals
Frage 185
Frage
Complex Logic Decomposition
Antworten
-
Computing results for certain data processing jobs involves executing _________, such as finding the customer with the maximum spend amount based on transaction data for a large number of customers
-
Due to the nature of the deployed processing engine, it may not be possible to execute the entire logic as a single processing run. Even if it were possible to do so, the testing, debugging and maintenance of the logic may become difficult
-
However, the nature of such visualizations is different and more analysis-specific
-
The _______ compound pattern represents a part of a Big Data solution environment capable of storing high-volume and high-variety data and making it available for stream access
Frage 186
Frage
Complex Logic Decomposition
Antworten
-
In such a situation, the _________ pattern can be applied, which requires dividing the ________ into multiple simple steps. This is executed over multiple processing runs
-
Generally, these multiple processing runs are connected together using the provided functionality within the processing engine or through further application of the Automated Dataset Execution pattern
-
The _________ pattern is associated with the processing engine, storage device, workflow engine, resource manager and coordination engine mechanisms
-
makes use of commodity hardware where machines are generally networked using Local Area Network (LAN) technology
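Using the earlier "customer with the maximum spend" example, the decomposition into simple chained processing runs might look like the following sketch; the two run functions and the sample transactions are hypothetical.

```python
# Complex logic decomposition sketch: the "customer with the maximum
# spend" job is split into two simple processing runs that are chained
# together, instead of one monolithic run.
def run_1_total_spend(transactions):
    totals = {}
    for customer, amount in transactions:
        totals[customer] = totals.get(customer, 0) + amount
    return totals

def run_2_top_customer(totals):
    return max(totals, key=totals.get)

transactions = [("ann", 30), ("bob", 70), ("ann", 50)]

pipeline = [run_1_total_spend, run_2_top_customer]
result = transactions
for step in pipeline:  # a workflow engine would chain these runs
    result = step(result)
```

Each run stays simple enough to test and debug on its own, which is the point of the pattern.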
Frage 187
Frage
Processing Abstraction
Antworten
-
Processing Big Data datasets involves the use of processing engines that require programming skills in order to work with them
-
Due to the contemporary nature of these processing engines and the specialized processing frameworks they follow, programmers may not be conversant with the APIs of each processing engine
-
A ________ stores data in the form of connected entities where each record is called a node or a vertex and the connection between the entities is called the edge, which can be one-way or two-way
-
Due to the requirement of integrating with multiple types of components, an interoperable and extensible cluster manager should be chosen
Frage 188
Frage
Processing Abstraction
Antworten
-
To make data processing easier by not having to deal with the intricacies of processing engines, the ________ pattern can be applied, which uses a query engine to abstract away the underlying processing engine
-
The query engine provides an easy-to-interact-with interface where the user specifies a script that is automatically converted to low-level API calls for the required processing engine
-
This works well for Big Data, where a single dataset may be divided across several machines due to its volume
-
a file data transfer engine is used directly or indirectly through the productivity portal for ad-hoc exports
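A toy sketch of the abstraction idea: a "query engine" accepts a small declarative script and translates it into low-level record-processing calls on the user's behalf. The supported script grammar is invented and far simpler than a real query engine such as Hive.

```python
# Processing-abstraction sketch: a declarative script is parsed and
# converted into low-level filter/projection calls, so the user never
# touches the processing engine's API directly.
def run_query(script, records):
    # Supported form (invented): "select <field> where <field> > <number>"
    tokens = script.split()
    out_field, flt_field, threshold = tokens[1], tokens[3], float(tokens[5])
    return [r[out_field] for r in records if r[flt_field] > threshold]

records = [
    {"name": "ann", "spend": 80},
    {"name": "bob", "spend": 20},
]
names = run_query("select name where spend > 50", records)
```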
Frage 189
Frage
Processing Abstraction
Antworten
-
The application of the __________ pattern further increases the reach of the Big Data solution environment to non-IT users, such as data analysts and data scientists
-
The ________ pattern is associated with the query engine, processing engine, storage device, resource manager and coordination engine mechanisms
-
For internal structured data sources, a relational data transfer engine can be used
-
The _______ pattern is associated with the storage device mechanism
Frage 190
Antworten
-
The ________ compound pattern represents a part of a Big Data solution environment capable of egressing high-volume, high-velocity and high-variety data out of the Big Data solution environment
-
Relational Sink
-
Depending on the required functionality, a _______ represents a partial or a complete Big Data solution in support of Big Data analysis
-
The architecture of the entire Big Data platform that enables the execution of multiple Big Data solutions
Frage 191
Antworten
-
File-based Sink
-
Streaming Egress
-
Poly Source
-
Streaming Storage
Frage 192
Antworten
-
The majority of enterprise IT systems use relational databases as their storage backends
-
However, the method of incorporating data analysis results from a Big Data solution into such systems by first exporting results as a delimited file and then importing into the relational database takes time, is error-prone and is not a scalable solution
-
An enterprise generally starts off at, or is already at, the descriptive or diagnostic analytics maturity level and aims to move towards the predictive or prescriptive analytics maturity level
-
The ________ compound pattern represents a part of a Big Data solution environment capable of egressing high-volume, high-velocity and high-variety data out of the Big Data solution environment
Frage 193
Antworten
-
the ________ pattern can be applied for directly exporting processed data to a relational database, which requires the use of a relational data transfer engine
-
Instead of directly using the data transfer engine, it can be indirectly invoked via a productivity portal which normally denotes ad-hoc usage
-
The _________ compound pattern represents a fundamental solution environment comprised of a processing ______ with data ingress, storage, processing and egress capabilities
-
Storing very large datasets that are accessed by a number of users simultaneously can seriously affect the data access performance of the underlying database
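A direct export to a relational target can be sketched with the Python standard library's sqlite3 module standing in for the enterprise relational database; the table name and result rows are illustrative.

```python
import sqlite3

# Relational Sink sketch: processed results are exported directly into
# a relational database instead of going through a delimited file.
# sqlite3 (stdlib) stands in for the target relational database here.
results = [("ann", 80.0), ("bob", 20.0)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customer_spend (customer TEXT, total REAL)")
conn.executemany("INSERT INTO customer_spend VALUES (?, ?)", results)
conn.commit()

rows = conn.execute(
    "SELECT customer, total FROM customer_spend ORDER BY total DESC"
).fetchall()
```

A relational data transfer engine would perform this insert step in bulk against the actual target database.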
Frage 194
Antworten
-
A workflow engine can be used to automate the whole process and to perform data export at regular intervals
-
The _________ pattern is associated with the data transfer engine (relational), storage device, processing engine, productivity portal and workflow engine mechanisms
-
The resulting architecture is known as the __________, which includes the architecture of the Big Data solution, any connected enterprise systems and integration components
-
a file data transfer engine is used directly or indirectly through the productivity portal for ad-hoc exports
Frage 195
Antworten
-
On some occasions, data analysis results from a Big Data solution need to be incorporated into enterprise IT systems that use proprietary storage technologies, such as an embedded database or ________ storage, rather than a relational database, and that provide a ________ import method
-
Like the Relational Sink pattern, manual export from the Big Data platform and import into such systems is not a viable solution
-
The typical makeup of a __________ includes the storage layer, processing layer, analysis layer and visualization layer
-
_________, a framework for processing data, requires interaction via a general-purpose programming language, such as Java
Frage 196
Antworten
-
The ________ pattern can be applied to automatically export data from the Big Data platform as a delimited or a hierarchical file
-
a file data transfer engine is used directly or indirectly through the productivity portal for ad-hoc exports
-
___________-based processing platforms, such as Hadoop, do not require knowledge of the data structure at load time.
-
As a result, data compression can be used to effectively increase the storage capacity of disk/memory space. In turn, this helps to reduce storage cost
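Both export shapes, delimited and hierarchical, can be sketched in a few lines of standard-library Python; the field names and result values are invented sample data.

```python
import csv
import io
import json

# File-based Sink sketch: the same processed results exported both as
# a delimited (CSV) file and as a hierarchical (JSON) file.
results = [{"customer": "ann", "total": 80}, {"customer": "bob", "total": 20}]

# Delimited export (in-memory buffer stands in for the output file).
csv_buf = io.StringIO()
writer = csv.DictWriter(csv_buf, fieldnames=["customer", "total"])
writer.writeheader()
writer.writerows(results)
delimited = csv_buf.getvalue()

# Hierarchical export.
hierarchical = json.dumps({"export": results})
```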
Frage 197
Antworten
-
For regular exports, the file data transfer engine can be configured via a workflow engine to run at regular intervals
-
Note that some level of post-processing may be required to put the file in the required format before it can be copied over to the target location
-
The _________ pattern is associated with the data transfer engine (file), storage device, processing engine, productivity portal and workflow engine mechanisms
-
Although providing instantaneous results, setting up such a capability is not only complex but also expensive due to the reliance on memory-based storage (memory is more expensive than disk)
Frage 198
Frage
Automated Dataset Execution
Antworten
-
The complete data processing cycle in Big Data environments consists of a number of activities, from data ingress to the computation of results and data egress
-
Furthermore, in a production environment, the complete cycle needs to be repeated over and over again
-
Instead of persisting the data to a disk-based storage device, _________ persists the data to a memory-based storage device
-
Based on the type and location of the data sources, this layer may consist of more than one data transfer engine mechanism
Frage 199
Frage
Automated Dataset Execution
Antworten
-
Performing data processing activities manually is time-consuming and an inefficient use of development resources
-
To enable the automatic execution of data processing activities, the _________ pattern can be applied by implementing a workflow engine
-
However, if internal data is also integrated, the same solution will provide business-specific results
-
As a result, data compression can be used to effectively increase the storage capacity of disk/memory space. In turn, this helps to reduce storage cost
Frage 200
Frage
Automated Dataset Execution
Antworten
-
the set of operations that need to be performed on the data is specified as a flowchart that is then automatically executed by the workflow engine at set intervals
-
This pattern can also be applied together with the Complex Logic Decomposition pattern to automate the execution of multiple processing jobs
-
The ________ pattern is associated with the workflow engine, data transfer engine, storage device, processing engine, query engine and productivity portal mechanisms
-
Note that some level of post-processing may be required to put the file in the required format before it can be copied over to the target location
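The flowchart-of-operations idea can be sketched as an ordered list of step functions executed end to end; a real workflow engine would additionally schedule this at set intervals. The three steps and their data are illustrative.

```python
# Automated Dataset Execution sketch: the data processing cycle is
# declared once as an ordered set of operations (a tiny "flowchart")
# and then run end to end without manual intervention.
def ingress(_):
    return [3, 1, 2]  # stand-in for data arriving from a source

def process(data):
    return sorted(x * 10 for x in data)

def egress(data):
    return {"exported": data}  # stand-in for delivery to a sink

workflow = [ingress, process, egress]

def run(workflow):
    result = None
    for step in workflow:  # each step feeds the next
        result = step(result)
    return result

output = run(workflow)
```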