Hadoop Description

The Apache Hadoop project develops open-source software for reliable, scalable, distributed computing.
The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high availability, the library itself is designed to detect and handle failures at the application layer, delivering a highly available service on top of a cluster of computers, each of which may be prone to failure.
The project includes these modules:
- Hadoop Common: the common utilities that support the other Hadoop modules.
- Hadoop Distributed File System (HDFS): a distributed file system that provides high-throughput access to application data.
- Hadoop YARN: a framework for job scheduling and cluster resource management.
- Hadoop MapReduce: a YARN-based system for parallel processing of large data sets.
Other Hadoop-related projects at Apache include Avro, HBase, Hive, Pig, Spark, and ZooKeeper.
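As a rough illustration of the simple programming model mentioned above, the following is a minimal sketch of the classic MapReduce word-count job. The class name and the input and output paths passed on the command line are placeholders; a real job would be packaged as a JAR and submitted with the configuration of the target cluster.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: emit (word, 1) for every word in the input split.
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: sum the counts emitted for each word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));   // input directory, e.g. in HDFS
    FileOutputFormat.setOutputPath(job, new Path(args[1])); // output directory; must not exist yet
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The mapper and reducer run in parallel across the cluster; the framework handles scheduling, data locality, and retries of failed tasks, which is what allows the same code to scale from a single server to thousands of machines.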
A Hadoop cluster can use Kerberos authentication to verify user accounts. You can use Kerberos authentication with Data Integration, with the Hadoop cluster, or with both. Kerberos is a network authentication protocol that uses tickets to authenticate access to services and nodes in a network. Kerberos uses a Key Distribution Center (KDC) to validate the identities of users and services and to grant tickets to authenticated user and service accounts. Users and services are known as principals. The KDC maintains a database of principals and their associated secret keys, which are used as proof of identity.
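As a minimal sketch of what Kerberos authentication looks like from Hadoop client code, the snippet below logs in with a keytab through Hadoop's UserGroupInformation API before accessing HDFS. It assumes the cluster is already configured for Kerberos; the principal name and keytab path are placeholders for illustration only.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public class KerberosLoginExample {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    // Tell the Hadoop client to authenticate with Kerberos instead of simple authentication.
    conf.set("hadoop.security.authentication", "kerberos");
    UserGroupInformation.setConfiguration(conf);

    // Log in as a principal using a keytab file rather than an interactive password.
    // The principal and keytab path below are hypothetical examples.
    UserGroupInformation.loginUserFromKeytab(
        "etl-user@EXAMPLE.COM", "/etc/security/keytabs/etl-user.keytab");

    // After a successful login, file system operations run as the authenticated principal.
    FileSystem fs = FileSystem.get(conf);
    System.out.println("Home directory: " + fs.getHomeDirectory());
    fs.close();
  }
}

In this flow the keytab stands in for the principal's secret key: the client presents it to the KDC, receives a ticket, and uses that ticket for subsequent requests to the cluster's services.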