
I am new to HDFS and have a single-node Hadoop (version 2.2.0) setup on a CentOS box.

After running the start-all.sh script, I am trying to run some HDFS commands, but the one below is not working:

    bin/hadoop fs -lsr hdfs://localhost:9000/tmp/hadoop-root/dfs/name

while this command works:

    bin/hadoop fs -lsr file:///tmp/hadoop-root/dfs/name

This is my core-site.xml file:

    <configuration>
        <property>
            <name>fs.default.name</name>
            <value>hdfs://localhost:9000</value>
        </property>
    </configuration>
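
(As far as I know, fs.default.name is the deprecated Hadoop 2.x spelling of fs.defaultFS, so the equivalent modern property would be:)

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>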

This is my mapred-site.xml file:

    <configuration>
        <property>
            <name>mapred.job.tracker</name>
            <value>localhost:9001</value>
        </property>
    </configuration>

And my hdfs-site.xml file:

    <configuration>
        <property>
            <name>dfs.replication</name>
            <value>1</value>
        </property>
    </configuration>

Also, telnet to localhost 9000 works, while telnet to x.x.x.x 9000 does not.

Can anyone please tell me where my mistake is?

1 Answer


HDFS is a filesystem, so use it as a filesystem:

    hadoop fs -ls /

    hadoop fs -ls /some/path/inside/hdfs
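
The path you tried, /tmp/hadoop-root/dfs/name, is the NameNode's local metadata directory (the default dfs.name.dir), which is why it shows up under file:/// but does not exist as a path inside HDFS. As a minimal sketch, assuming the daemons are running and you are operating as the root user, you could create and inspect a path that really lives inside HDFS:

    hadoop fs -mkdir -p /user/root/demo                  # create a directory inside HDFS
    hadoop fs -put /etc/hosts /user/root/demo            # copy a local file into HDFS
    hadoop fs -ls -R /user/root/demo                     # recursive listing (-lsr is the older spelling)
    hadoop fs -ls hdfs://localhost:9000/user/root/demo   # same listing, fully qualified

The fully qualified hdfs:// form only works for paths that actually exist in HDFS, which is why your first command failed while the file:/// one worked.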

I also suggest using only fully qualified hostnames in your configuration files. Simply put, don't use localhost.
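
Because fs.default.name points at localhost, the NameNode binds its RPC port to the loopback interface only, which matches your observation that telnet to localhost 9000 succeeds while telnet to the external IP does not. A sketch of the change, assuming your box resolves as myhost.example.com (a hypothetical name; substitute your real FQDN):

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://myhost.example.com:9000</value>
    </property>

Restart the daemons after the change so the NameNode rebinds to the new address.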
