
0 votes
in Big Data Hadoop & Spark by (11.4k points)

I see there are several ways we can start the Hadoop ecosystem:

  • start-all.sh & stop-all.sh: which say they are deprecated, use start-dfs.sh & start-yarn.sh
  • start-dfs.sh, stop-dfs.sh, and start-yarn.sh, stop-yarn.sh
  • hadoop-daemon.sh namenode/datanode and yarn-daemon.sh resourcemanager

What is the difference between these, and which should I use?

1 Answer

0 votes
by (32.3k points)
edited by

After you have logged in as the dedicated user for Hadoop (in my case it is hduser) that you must have created during installation, go to the installation folder of Hadoop (in my case it is /usr/local/hadoop). Inside that directory there will be a folder 'sbin', which contains several scripts such as start-all.sh, stop-all.sh, start-dfs.sh, stop-dfs.sh, start-yarn.sh, stop-yarn.sh, hadoop-daemon.sh, yarn-daemon.sh, etc. Executing these scripts lets us start and/or stop the Hadoop daemons in various ways.

  • start-all.sh & stop-all.sh: Used to start and stop the Hadoop daemons all at once. Issuing either on the master machine will start/stop the daemons on all the nodes of the cluster. These commands are now deprecated.

  • start-dfs.sh, stop-dfs.sh and start-yarn.sh, stop-yarn.sh: Same as above, but these start/stop the HDFS and YARN daemons separately on all the nodes from the master machine. It is advisable to use these commands instead of start-all.sh & stop-all.sh.

  • hadoop-daemon.sh namenode/datanode and yarn-daemon.sh resourcemanager: Used to start individual daemons on an individual machine manually. You need to log in to the particular node and issue these commands.
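The three approaches above can be sketched as a small shell script. This is only an illustration, assuming Hadoop is installed under /usr/local/hadoop as in the answer; the DRY_RUN guard is a hypothetical helper I added so the command sequence can be previewed without touching a live cluster.

```shell
#!/bin/sh
# Assumed install path from the answer above; override via the environment.
HADOOP_HOME=${HADOOP_HOME:-/usr/local/hadoop}
# With DRY_RUN=1 (the default here) commands are printed, not executed.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# Preferred: start HDFS and YARN daemons separately, from the master node.
run "$HADOOP_HOME/sbin/start-dfs.sh"
run "$HADOOP_HOME/sbin/start-yarn.sh"

# Per-daemon control, issued on the individual node that hosts the daemon.
run "$HADOOP_HOME/sbin/hadoop-daemon.sh" start namenode
run "$HADOOP_HOME/sbin/yarn-daemon.sh" start resourcemanager
```

Set DRY_RUN=0 only on a machine where Hadoop is actually installed; the deprecated start-all.sh is deliberately omitted.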

