
in Big Data Hadoop & Spark by (6.5k points)
Why do we add and remove nodes in a Hadoop cluster so frequently?

1 Answer

by (11.3k points)

In Hadoop, large clusters are built from commodity hardware. The downside of commodity hardware is frequent failure: disks wear out and DataNodes crash. To keep such failures from causing data loss, HDFS replicates every block across several DataNodes (three copies by default), which is what gives the system its fault tolerance.
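As a minimal sketch, the cluster-wide default replication factor is set with the dfs.replication property in hdfs-site.xml, and can also be changed per file from the command line (the file path below is illustrative):

    <!-- hdfs-site.xml: default number of copies kept for each block -->
    <property>
      <name>dfs.replication</name>
      <value>3</value>
    </property>

    # Raise replication to 4 for one file and wait until the extra copies exist
    hdfs dfs -setrep -w 4 /data/events/part-00000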

Hadoop also scales horizontally: when data volume grows, new DataNodes are commissioned to add storage and processing capacity, and when a node fails, degrades, or is due for maintenance, it is decommissioned. That is why nodes are added and removed so often; the frequency depends entirely on the data volume and the health of the individual nodes.
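As an illustrative sketch of decommissioning (the exclude-file path and hostname here are assumptions; the property and commands are standard HDFS administration), the node to retire is listed in an exclude file referenced from hdfs-site.xml, and the NameNode is then told to re-read it:

    <!-- hdfs-site.xml: file listing DataNodes to be decommissioned -->
    <property>
      <name>dfs.hosts.exclude</name>
      <value>/etc/hadoop/conf/dfs.exclude</value>
    </property>

    # Add the retiring node's hostname, then refresh the NameNode's host lists.
    # HDFS re-replicates that node's blocks elsewhere before marking it Decommissioned.
    echo "datanode07.example.com" >> /etc/hadoop/conf/dfs.exclude
    hdfs dfsadmin -refreshNodes
    hdfs dfsadmin -report    # shows "Decommission in progress", then "Decommissioned"

Commissioning a new node is the reverse: add it to the include file (dfs.hosts) if one is configured, or simply start its DataNode daemon, then run the same -refreshNodes command.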

