I have installed and configured Hadoop on my Linux machine. I want to run a sample MR job. I started Hadoop via the command /usr/local/hadoop/bin/start-all.sh, and the output is:

namenode running as process 7876. Stop it first.
localhost: datanode running as process 8083. Stop it first.
localhost: secondarynamenode running as process 8304. Stop it first.
jobtracker running as process 8398. Stop it first.
localhost: tasktracker running as process 8612. Stop it first.


so I think that my Hadoop is configured successfully. But when I try to run the command below, it gives:

jeet@jeet-Vostro-2520:~$ hadoop fs -put gettysburg.txt /user/jeet/getty/gettysburg.txt
hadoop: command not found

I am new to Hadoop; please help. I am sharing a screenshot of my work.

1 Answer


The reason you are getting the error hadoop: command not found is that you have not added the Hadoop bin directory to your PATH environment variable. To get rid of this error, you need to edit your ~/.bashrc file. Below are the steps you can follow:

  • Open the ~/.bashrc file (e.g. gedit ~/.bashrc)

  • Add the following line to the file and save it:

export PATH=$PATH:/usr/local/hadoop/bin/

After following the above steps, run source ~/.bashrc (or open a new terminal) so the change takes effect, and you will be able to run Hadoop commands from any directory.
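The steps above can be sketched as a quick check in the current shell session. This is a minimal sketch assuming Hadoop is installed under /usr/local/hadoop, as in the question; the permanent fix is still the ~/.bashrc edit described above.

```shell
# Add the Hadoop bin directory to PATH for the current session
# (assumes the install location /usr/local/hadoop from the question)
export PATH="$PATH:/usr/local/hadoop/bin"

# Confirm the directory is now on PATH
echo "$PATH" | grep -q "/usr/local/hadoop/bin" && echo "PATH updated"
```

Once the PATH is set, the original command from the question (hadoop fs -put gettysburg.txt /user/jeet/getty/gettysburg.txt) should resolve the hadoop binary correctly.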
