What is Apache Hadoop?

1 Answer

Apache Hadoop is an open-source framework designed to store and process very large data sets across clusters of commodity machines. It uses simple programming models to process the data in parallel, and it provides huge storage for any type of data, massive processing power, and the ability to handle a virtually unlimited number of concurrent tasks or jobs. It consists of the following components:

  • Hadoop Distributed File System (HDFS): the storage layer; it splits files into blocks and replicates them across the nodes of the cluster.
  • YARN: handles job scheduling and cluster resource management.
  • MapReduce: the programming model used for parallel processing of the stored data (see the word-count sketch after this list).
  • Hadoop Common: the shared libraries and utilities required by the other Hadoop modules.
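
To make the MapReduce component concrete, here is a minimal sketch of the classic word-count job written against Hadoop's MapReduce Java API (the org.apache.hadoop.mapreduce package). It is only an illustration: the class names are arbitrary, and the input and output HDFS paths are taken from the command line rather than hard-coded.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // Mapper: for every line of input, emit (word, 1) for each token.
    public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        public void map(Object key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer itr = new StringTokenizer(value.toString());
            while (itr.hasMoreTokens()) {
                word.set(itr.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Reducer: sum the counts collected for each word.
    public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
        private final IntWritable result = new IntWritable();

        @Override
        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable val : values) {
                sum += val.get();
            }
            result.set(sum);
            context.write(key, result);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Job job = Job.getInstance(conf, "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setCombinerClass(IntSumReducer.class);  // combiner pre-aggregates on each node
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));    // HDFS input path
        FileOutputFormat.setOutputPath(job, new Path(args[1]));  // HDFS output path (must not already exist)
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

To run it, you would typically package the class into a JAR and submit it with the hadoop jar command, for example: hadoop jar wordcount.jar WordCount /user/you/input /user/you/output, where the two paths are placeholders for your own HDFS directories. The mapper emits a (word, 1) pair for every token, Hadoop shuffles the pairs so that all counts for a given word reach the same reducer, and the reducer sums them.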

If you want to learn more about Hadoop, you should read the Hadoop Tutorial.

Enroll in this Big Data Hadoop certification training to learn from the experts.
