in Big Data Hadoop & Spark by (5k points)
Why do we remove and add nodes in a Hadoop cluster this frequently?

1 Answer

by (11.1k points)

In Hadoop, large clusters are built from commodity hardware. The trade-off of using commodity hardware is frequent failures: disks die and DataNodes crash. To keep the system fault-tolerant, HDFS replicates every block across multiple DataNodes (three copies by default), so when a node is lost the NameNode can re-replicate its blocks from the surviving copies.
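For example, the replication factor can be tuned cluster-wide in hdfs-site.xml (a minimal sketch; your distribution's file location may differ):

```xml
<!-- hdfs-site.xml: number of replicas kept for each HDFS block (default is 3) -->
<property>
  <name>dfs.replication</name>
  <value>3</value>
</property>
```

It can also be changed per file after the fact with `hdfs dfs -setrep`, e.g. `hdfs dfs -setrep -w 2 /path/to/file`.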

Hadoop scales horizontally: storage and processing capacity grow simply by adding more DataNodes to the cluster. This is why nodes are so frequently added and removed in Hadoop clusters: new nodes are commissioned when data volume grows, and failing or surplus nodes are decommissioned, all without taking the cluster down.
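As a sketch of how a DataNode is typically decommissioned (the exclude-file path is an assumption; check your distribution's configuration):

```shell
# hdfs-site.xml must point at an exclude file, e.g.:
#   <property>
#     <name>dfs.hosts.exclude</name>
#     <value>/etc/hadoop/conf/dfs.exclude</value>   <!-- assumed path -->
#   </property>

# 1. List the DataNode to retire in the exclude file
#    (datanode5.example.com is a placeholder hostname)
echo "datanode5.example.com" >> /etc/hadoop/conf/dfs.exclude

# 2. Tell the NameNode to re-read its include/exclude lists;
#    HDFS then re-replicates the node's blocks onto other DataNodes
hdfs dfsadmin -refreshNodes

# 3. Watch the node move from "Decommission In Progress" to "Decommissioned"
hdfs dfsadmin -report
```

Commissioning a new node is the reverse: install Hadoop on the machine, list it in the include file (if `dfs.hosts` is used), start the DataNode daemon, and run `hdfs dfsadmin -refreshNodes` again.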

