
Hadoop Multi-Node Clusters

Setting up Hadoop Multi-Node Cluster

Installing Java
Syntax of the java version command:

$ java -version

The following output is displayed:

java version "1.7.0_71" 
Java(TM) SE Runtime Environment (build 1.7.0_71-b13)
Java HotSpot(TM) Client VM (build 25.0-b02, mixed mode)

Creating User Account
A system user account should be created on both the master and slave systems to use the Hadoop installation.

# useradd hadoop 
# passwd hadoop

Mapping the nodes
The hosts file in the /etc/ folder should be edited on all nodes, specifying the IP address of each system followed by its hostname.

# vi /etc/hosts

Enter the following lines in the /etc/hosts file, replacing each placeholder with the node's actual IP address:

<IP-of-master>   hadoop-master
<IP-of-slave-1>  hadoop-slave-1
<IP-of-slave-2>  hadoop-slave-2


Configuring Key Based Login
SSH should be set up on each node so that the nodes can communicate with one another without being prompted for a password.

# su hadoop 
$ ssh-keygen -t rsa 
$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop-master
$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop-slave-1
$ ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@hadoop-slave-2
$ chmod 0600 ~/.ssh/authorized_keys 
$ exit
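Once the keys are distributed, the setup can be verified from the master; each of the following should log in without a password prompt (assuming the same hadoop user on every node, as created above):

$ ssh hadoop@hadoop-slave-1
$ ssh hadoop@hadoop-slave-2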

Installing Hadoop
Hadoop should be downloaded on the master server.

# mkdir /opt/hadoop 
# cd /opt/hadoop/ 
# wget http://apache.mesi.com.ar/hadoop/common/hadoop-1.2.0/hadoop-1.2.0.tar.gz 
# tar -xzf hadoop-1.2.0.tar.gz 
# mv hadoop-1.2.0 hadoop 
# chown -R hadoop /opt/hadoop 
# cd /opt/hadoop/hadoop/

Configuring Hadoop
The Hadoop server must be configured by editing the following files in /opt/hadoop/hadoop/conf.

core-site.xml should be edited.
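A minimal sketch of core-site.xml for this setup; fs.default.name points the cluster at the master's NameNode, and port 9000 is a conventional choice rather than anything mandated by this guide:

<configuration>
   <property>
      <name>fs.default.name</name>
      <value>hdfs://hadoop-master:9000/</value>
   </property>
</configuration>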


hdfs-site.xml should be edited.
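A sketch of hdfs-site.xml; dfs.name.dir matches the storage directory that appears in the NameNode format output later in this guide, while dfs.data.dir and the replication factor of 2 (one replica per slave) are illustrative assumptions:

<configuration>
   <property>
      <name>dfs.name.dir</name>
      <value>/opt/hadoop/hadoop/dfs/name</value>
   </property>
   <property>
      <name>dfs.data.dir</name>
      <value>/opt/hadoop/hadoop/dfs/data</value>
   </property>
   <property>
      <name>dfs.replication</name>
      <value>2</value>
   </property>
</configuration>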


mapred-site.xml should be edited.
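A sketch of mapred-site.xml; mapred.job.tracker is the standard Hadoop 1.x property naming the JobTracker address, with port 9001 as a conventional choice:

<configuration>
   <property>
      <name>mapred.job.tracker</name>
      <value>hadoop-master:9001</value>
   </property>
</configuration>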


JAVA_HOME, HADOOP_CONF_DIR, and HADOOP_OPTS should be set in conf/hadoop-env.sh.

export JAVA_HOME=/opt/jdk1.7.0_71 
export HADOOP_OPTS=-Djava.net.preferIPv4Stack=true
export HADOOP_CONF_DIR=/opt/hadoop/hadoop/conf 

Installing Hadoop on Slave Servers
Hadoop should be installed on all the slave servers by copying it from the master.

# su hadoop 
$ cd /opt/hadoop 
$ scp -r hadoop hadoop-slave-1:/opt/hadoop
$ scp -r hadoop hadoop-slave-2:/opt/hadoop

Configuring Hadoop on Master Server
The master server should be configured as follows.

# su hadoop 
$ cd /opt/hadoop/hadoop

Master Node Configuration

$ vi conf/masters
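For this cluster, the masters file holds a single entry, the master's hostname:

hadoop-master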

Slave Node Configuration

$ vi conf/slaves
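The slaves file lists the hostnames of all slave machines:

hadoop-slave-1
hadoop-slave-2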

Name Node format on Hadoop Master

# su hadoop 
$ cd /opt/hadoop/hadoop 
$ bin/hadoop namenode -format
11/10/14 10:58:07 INFO namenode.NameNode: STARTUP_MSG:
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = hadoop-master/
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 1.2.0
STARTUP_MSG: build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.2 -r 1479473; compiled by 'hortonfo' on Mon May 6 06:59:37 UTC 2013
STARTUP_MSG: java = 1.7.0_71
11/10/14 10:58:08 INFO util.GSet: Computing capacity for map BlocksMap editlog=/opt/hadoop/hadoop/dfs/name/current/edits
11/10/14 10:58:08 INFO common.Storage: Storage directory /opt/hadoop/hadoop/dfs/name has been successfully formatted.
11/10/14 10:58:08 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop-master/
************************************************************/

Hadoop Services
Starting Hadoop services on the Hadoop-Master.

$ cd $HADOOP_HOME/bin 
$ ./start-all.sh
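To confirm that the daemons came up, jps can be run on the master; the expected daemons are shown below (the process IDs are illustrative and will differ):

$ jps
4850 NameNode
5051 SecondaryNameNode
5130 JobTracker
5311 Jps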



Addition of a New DataNode in the Hadoop Cluster
New nodes can be added to an existing Hadoop cluster with a suitable network configuration. Suppose the new node has the following network configuration:

IP address :
netmask : 
hostname : slave3.in

Adding a User and SSH Access
Add a User
A “hadoop” user must be added, and its password can be set to anything one wants.

useradd hadoop
passwd hadoop

To be executed on the master:

mkdir -p $HOME/.ssh
chmod 700 $HOME/.ssh
ssh-keygen -t rsa -P '' -f $HOME/.ssh/id_rsa
cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys
chmod 644 $HOME/.ssh/authorized_keys

Copy the public key to the new slave node's hadoop user $HOME directory:

scp $HOME/.ssh/id_rsa.pub hadoop@slave3.in:/home/hadoop/

To be executed on the new slave node (log in as the hadoop user, either locally or over ssh):

su hadoop or ssh -X hadoop@slave3.in

The content of the public key must be copied into the file “$HOME/.ssh/authorized_keys”, and then its permissions must be changed.

cd $HOME 
mkdir -p $HOME/.ssh
chmod 700 $HOME/.ssh  
cat id_rsa.pub >>$HOME/.ssh/authorized_keys
chmod 644 $HOME/.ssh/authorized_keys

Now verify the ssh login from the master machine: it must be possible to ssh to the new node from the master without a password.

ssh hadoop@slave3.in or ssh hadoop@slave3

Set Hostname of New Node
The hostname is set in the file /etc/sysconfig/network.

On the new slave3 machine:
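On a Red Hat-style system, this file would contain entries along the following lines (the hostname value is taken from the network configuration above):

NETWORKING=yes
HOSTNAME=slave3.in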

The machine must be restarted, or the hostname command must be run on the new machine with the respective hostname, to make the change effective.
On slave3 node machine:
hostname slave3.in
/etc/hosts must be updated on all machines of the cluster with an entry for the new node:

<IP-of-slave3>  slave3.in slave3

Now ping the machine with its hostname to check whether it resolves to an IP address.

ping master.in


Start the DataNode on New Node

The DataNode daemon should be started manually using the $HADOOP_HOME/bin/hadoop-daemon.sh script. It will automatically contact the master (NameNode) and join the cluster. The new node should also be added to the conf/slaves file in the master server so that script-based commands will recognize it.
Login to the new node

su hadoop or ssh -X hadoop@slave3.in

HDFS is started on the newly added slave node with the following command.

./bin/hadoop-daemon.sh start datanode

The jps command output must be checked on the new node.

$ jps 
7141 DataNode 
10312 Jps

Removing a DataNode
A node can be removed from a cluster while it is running, without any data loss. HDFS provides a decommissioning feature, which ensures that a node is removed safely.
Step 1
Log in to the master machine as the user under which Hadoop is installed.

$ su hadoop

Step 2
Before starting the cluster, an exclude file must be configured. A key named dfs.hosts.exclude should be added to our $HADOOP_HOME/conf/hdfs-site.xml file.
The value associated with this key is the full path to a file on the NameNode's local file system that contains a list of machines which are not permitted to connect to HDFS.

<property>
   <name>dfs.hosts.exclude</name>
   <value>/home/hadoop/hadoop-1.2.1/hdfs_exclude.txt</value>
   <description>DFS exclude</description>
</property>

Step 3
Determine the hosts to decommission.
Every machine to be decommissioned should be added to the file identified by hdfs_exclude.txt, one machine per line; this prevents them from connecting to the NameNode.
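For example, to decommission the node slave2.in referred to below, hdfs_exclude.txt would contain a single line:

slave2.in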


Step 4
Force configuration reload.
The command “$HADOOP_HOME/bin/hadoop dfsadmin -refreshNodes” should be run.

$ $HADOOP_HOME/bin/hadoop dfsadmin -refreshNodes

This forces the NameNode to re-read its configuration, including the newly updated ‘excludes’ file. Nodes will be decommissioned over a period of time, allowing each node's blocks to be replicated onto machines that are scheduled to remain active.
The jps command output should be checked on slave2.in; after a while, the DataNode process will shut down automatically.
Step 5
Shut down nodes.
After the decommissioning process has finished, the decommissioned hardware can be safely shut down for maintenance. The dfsadmin -report command can be used to check the status of the decommission.

$ $HADOOP_HOME/bin/hadoop dfsadmin -report

Step 6
The excludes file is edited again: once the machines have been decommissioned, they can be removed from the ‘excludes’ file. Running “$HADOOP_HOME/bin/hadoop dfsadmin -refreshNodes” again will read the excludes file back into the NameNode, allowing the DataNodes to rejoin the cluster after the maintenance has been completed, or when additional capacity is needed in the cluster again.
To run/shut down the TaskTracker:

$ $HADOOP_HOME/bin/hadoop-daemon.sh stop tasktracker
$ $HADOOP_HOME/bin/hadoop-daemon.sh start tasktracker


About the Author

Technical Research Analyst - Data Engineering

Abhijit is a Technical Research Analyst specializing in Deep Learning. He holds a degree in Computer Science with a focus on Data Science. Being proficient in Python, Scala, C++, Dart, and R, he is passionate about new-age technologies. Abhijit crafts insightful analyses and impactful content, bridging the gap between cutting-edge research and practical applications.