Asked in Big Data Hadoop & Spark by (11.4k points)

I was using Hadoop in pseudo-distributed mode and everything was working fine. Then I had to restart my computer for some reason, and now when I try to start the NameNode and the DataNode, only the DataNode is running. Could anyone tell me the possible reason for this problem? Or am I doing something wrong?

1 Answer

by (32.3k points)

I was facing the same issue of the NameNode not starting, and I resolved it with the following steps:

First, delete all the contents of the Hadoop temporary folder:

rm -rf <tmp dir> (mine was /usr/local/hadoop/tmp)
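
A common root cause of this exact symptom is that hadoop.tmp.dir defaults to a directory under /tmp (/tmp/hadoop-<username>), which many systems wipe on reboot, so the NameNode loses its storage after a restart. To keep the data across reboots, you can pin it to a persistent directory in core-site.xml; a minimal sketch, assuming /usr/local/hadoop/tmp as the location (any directory your Hadoop user owns will do):

<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop/tmp</value>
</property>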

Then format the NameNode (note that this wipes the HDFS metadata, so any existing data in HDFS will be lost):

bin/hdfs namenode -format

(The older form bin/hadoop namenode -format still works but is deprecated.)
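
If the format succeeds, the output should end with a confirmation along these lines (the path follows the hadoop.tmp.dir setting above, so yours may differ):

... Storage directory /usr/local/hadoop/tmp/dfs/name has been successfully formatted.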

Finally, start all the processes again. Since the command bin/start-all.sh is deprecated, I would suggest you use:

start-dfs.sh
start-yarn.sh
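
If those scripts are not on your PATH, run them from your Hadoop installation directory; a minimal sketch, assuming the install lives in /usr/local/hadoop (adjust for your setup):

cd /usr/local/hadoop
sbin/start-dfs.sh    # starts the NameNode, DataNode and SecondaryNameNode daemons
sbin/start-yarn.sh   # starts the ResourceManager and NodeManager daemons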

Now check your daemons using:

jps
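
On a healthy pseudo-distributed setup, jps should list all five daemons, something like this (the process IDs are illustrative):

4561 NameNode
4698 DataNode
4875 SecondaryNameNode
5023 ResourceManager
5167 NodeManager
5321 Jps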

You should now see your NameNode running.
