0 votes
2 views
in Big Data Hadoop & Spark by (11.4k points)

I see there are several ways we can start the Hadoop ecosystem:

  • start-all.sh & stop-all.sh, which say they are deprecated and that start-dfs.sh & start-yarn.sh should be used instead.
  • start-dfs.sh, stop-dfs.sh and start-yarn.sh, stop-yarn.sh
  • hadoop-daemon.sh namenode/datanode and yarn-daemon.sh resourcemanager

What is the difference between these commands, and when should each one be used?

1 Answer

0 votes
by (32.3k points)

After you have logged in as the dedicated user for Hadoop (in my case it is hduser) that you must have created during installation, go to the Hadoop installation folder (in my case it is /usr/local/hadoop). Inside the Hadoop directory there is a folder 'sbin', which contains several scripts such as start-all.sh, stop-all.sh, start-dfs.sh, stop-dfs.sh, hadoop-daemons.sh, yarn-daemons.sh, etc. Executing these scripts starts and/or stops the Hadoop daemons in various ways, as described below.
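For example, assuming the same setup as above (user hduser and Hadoop installed under /usr/local/hadoop; adjust both to your own installation), you can switch to that user and list the available scripts like this:

    su - hduser
    cd /usr/local/hadoop/sbin
    ls *.sh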

  • start-all.sh & stop-all.sh: Used to start and stop the Hadoop daemons all at once. Issuing them on the master machine will start/stop the daemons on all the nodes of the cluster. These commands are now deprecated.

  • start-dfs.sh, stop-dfs.sh and start-yarn.sh, stop-yarn.sh: Same as above, but these start/stop the HDFS and YARN daemons separately, on all the nodes, from the master machine. It is advisable to use these commands instead of start-all.sh & stop-all.sh.

  • hadoop-daemon.sh namenode/datanode and yarn-daemon.sh resourcemanager: Used to start individual daemons on an individual machine manually. You need to log in to that particular node and issue these commands. Example invocations of both approaches are shown after this list.
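
As a rough sketch of how these scripts are typically invoked (again assuming the /usr/local/hadoop installation path from above), the cluster-wide and per-daemon approaches look like this:

    # On the master node: start/stop HDFS and YARN across the whole cluster
    /usr/local/hadoop/sbin/start-dfs.sh
    /usr/local/hadoop/sbin/start-yarn.sh
    /usr/local/hadoop/sbin/stop-yarn.sh
    /usr/local/hadoop/sbin/stop-dfs.sh

    # On an individual node: start a single daemon manually
    /usr/local/hadoop/sbin/hadoop-daemon.sh start namenode        # on the NameNode machine
    /usr/local/hadoop/sbin/hadoop-daemon.sh start datanode        # on each DataNode machine
    /usr/local/hadoop/sbin/yarn-daemon.sh start resourcemanager   # on the ResourceManager machine
    /usr/local/hadoop/sbin/yarn-daemon.sh start nodemanager       # on each NodeManager machine

The stop counterparts work the same way (hadoop-daemon.sh stop datanode, yarn-daemon.sh stop nodemanager, and so on).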

