Introduction to Hadoop

The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures.

Source: the Apache Hadoop official website
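
To see what "simple programming models" means in practice, here is the classic WordCount job, essentially the example from the official MapReduce tutorial: the mapper emits a (word, 1) pair for every token, the reducer sums the counts per word, and the framework takes care of splitting the input, shipping the computation to where the data lives, and retrying failed tasks. Treat it as a sketch; the input/output paths and cluster setup are whatever your environment provides.

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Mapper: for each line of input, emit (word, 1) for every token.
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  // Reducer (also used as a combiner): sum the counts for each word.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {

    private final IntWritable result = new IntWritable();

    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    // args[0] = input directory, args[1] = output directory (must not exist yet).
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

You would package this into a jar and launch it with something like: hadoop jar wordcount.jar WordCount /input /output. The same code runs unchanged on a single laptop or on a thousand-node cluster, which is exactly the scaling point the description above makes.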


Fancy terms used in that description:

  1. framework
  2. distributed processing
  3. large data sets
  4. cluster
  5. simple programming models (illustrated by the WordCount sketch above)
  6. scaling up to thousands of machines
  7. local computation (data locality)
  8. fault tolerance achieved by the framework itself, at the application layer rather than in hardware (see the replication sketch after this list)
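
On the fault-tolerance point: the main mechanism in HDFS is block replication. Every block of a file is stored on several DataNodes (three by default), so a failed machine costs a replica, not the data, and the NameNode re-replicates blocks that fall below the target. Below is a minimal sketch using the standard FileSystem API; the file path and replication factor are made up for illustration, and it assumes the usual core-site.xml/hdfs-site.xml are on the classpath.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ReplicationDemo {
  public static void main(String[] args) throws Exception {
    // Picks up the cluster settings (core-site.xml / hdfs-site.xml) from the classpath.
    Configuration conf = new Configuration();
    FileSystem fs = FileSystem.get(conf);

    // Hypothetical file path, used only for illustration.
    Path file = new Path("/data/sample.txt");

    // Ask HDFS to keep 3 copies of every block of this file.
    // With 3 replicas, a single failed node does not make the data unavailable.
    fs.setReplication(file, (short) 3);

    // Read back the replication factor the NameNode reports for the file.
    FileStatus status = fs.getFileStatus(file);
    System.out.println("Replication factor for " + file + ": "
        + status.getReplication());
  }
}
```

The same factor can also be set cluster-wide with the dfs.replication property in hdfs-site.xml, or from the command line with hdfs dfs -setrep.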

For more information, go through our Big Data Hadoop course tutorials blog.
