The Apache Hadoop project develops open source software for reliable, scalable, distributed computing.
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, thereby delivering a highly available service on top of a cluster of computers, each of which may be prone to failures.
The project includes these modules:
•Hadoop Common: The common utilities that support the other Hadoop modules.
•Hadoop Distributed File System (HDFS): A distributed file system that provides high-throughput access to application data.
•Hadoop YARN: A framework for job scheduling and cluster resource management.
•Hadoop MapReduce: A YARN-based system for parallel processing of large data sets (a minimal word-count job is sketched after this list).
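To make the MapReduce programming model concrete, the following is a minimal word-count job adapted from the standard Hadoop tutorial. The input and output paths are assumptions passed on the command line, and the class names are illustrative; this is a sketch rather than a production configuration.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: emits (word, 1) for every token in its input split.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reducer: sums the counts emitted for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    // Driver: configures the job and submits it to the cluster.
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // HDFS input directory
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // HDFS output directory
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The mapper and reducer run in parallel across the cluster, with YARN scheduling the tasks and HDFS providing the input and output storage.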
Other Hadoop-related projects at Apache include:
•Ambari™: A web-based tool for provisioning, managing, and monitoring Apache Hadoop clusters, with support for Hadoop HDFS, Hadoop MapReduce, Hive, HCatalog, HBase, ZooKeeper, Oozie, Pig, and Sqoop. Ambari also provides a dashboard for viewing cluster health, such as heatmaps, and the ability to view MapReduce, Pig, and Hive applications visually, along with features to diagnose their performance characteristics in a user-friendly manner.
•Avro™: A data serialization system.
•Cassandra™: A scalable multi-master database with no single points of failure.
•Chukwa™: A data collection system for managing large distributed systems.
•HBase™: A scalable, distributed database that supports structured data storage for large tables.
•Hive™: A data warehouse infrastructure that provides data summarization and ad hoc querying.
•Mahout™: A scalable machine learning and data mining library.
•Pig™: A high-level data-flow language and execution framework for parallel computation.
•ZooKeeper™: A high-performance coordination service for distributed applications.
A Hadoop cluster can use Kerberos authentication to verify user accounts. You can use Kerberos authentication with Data Integration, with the Hadoop cluster, or with both. Kerberos is a network authentication protocol that uses tickets to authenticate access to services and nodes in a network. Kerberos uses a Key Distribution Center (KDC) to validate the identities of users and services and to grant tickets to authenticated user and service accounts. Users and services are known as principals. The Key Distribution Center maintains a database of principals and their associated secret keys, which are used as proof of identity.
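As a sketch of how a client application authenticates against a Kerberos-enabled cluster, the snippet below uses Hadoop's UserGroupInformation API to log in from a keytab. The principal name and keytab path are placeholder assumptions and must match entries registered with your Key Distribution Center.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosLoginExample {
  public static void main(String[] args) throws Exception {
    // Tell the Hadoop client to use Kerberos instead of simple authentication.
    Configuration conf = new Configuration();
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);

    // Placeholder principal and keytab path: replace with values from your KDC.
    UserGroupInformation.loginUserFromKeytab(
        "etl_user@EXAMPLE.COM", "/etc/security/keytabs/etl_user.keytab");

    // The login user now holds a Kerberos ticket that subsequent HDFS,
    // YARN, or MapReduce client calls present to the cluster services.
    System.out.println("Logged in as: " + UserGroupInformation.getLoginUser());
  }
}
```

Logging in from a keytab lets long-running or scheduled jobs authenticate without an interactive kinit, while interactive users can instead rely on a ticket already obtained in their session.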