
in Big Data Hadoop & Spark by (11.9k points)

I am trying to set up Hadoop in fully distributed mode, and to some extent I have been successful.

However, I have a doubt about one of the parameter settings in core-site.xml: fs.defaultFS.

In my set up, I have three nodes as described below:

Node1 -- 192.168.1.2 --> Configured to be Master (Running ResourceManager and NameNode daemons)

Node2 -- 192.168.1.3 --> Configured to be Slave (Running NodeManager and DataNode daemons)

Node3 -- 192.168.1.4 --> Configured to be Slave (Running NodeManager and DataNode daemons)

Now, what does the property fs.defaultFS mean? For example, if I set it like this:

<property>
   <name>fs.default.name</name>
   <value>hdfs://192.168.1.2:9000/</value>
</property>

I am not able to understand the meaning of hdfs://192.168.1.2:9000. I can figure out that hdfs means we are using the HDFS filesystem, but what do the other parts mean?

Does this mean that the host with IP address 192.168.1.2 is running the NameNode on port 9000?

Can anyone help me understand this?

1 Answer

by (32.1k points)

Your understanding is correct. fs.defaultFS (whose older, now deprecated name is fs.default.name) tells Hadoop clients the URI of the default filesystem. In hdfs://192.168.1.2:9000, hdfs is the filesystem scheme, 192.168.1.2 is the host where the NameNode daemon runs, and 9000 is the port on which the NameNode listens for client RPC requests. Any HDFS path that does not carry a full URI is resolved against this NameNode.

On Hadoop 2.x and later, prefer the non-deprecated property name:

<property>
   <name>fs.defaultFS</name>
   <value>hdfs://192.168.1.2:9000/</value>
</property>
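
To see how the setting is consumed, here is a minimal Java sketch of a client that resolves the default filesystem and talks to the NameNode. It assumes the Hadoop client jars are on the classpath; the property is set explicitly in code for illustration, though normally it would be picked up from core-site.xml.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class DefaultFsDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Normally loaded from core-site.xml; set explicitly here for illustration.
        conf.set("fs.defaultFS", "hdfs://192.168.1.2:9000/");

        // FileSystem.get(conf) resolves the default filesystem from fs.defaultFS,
        // i.e. it connects to the NameNode at 192.168.1.2 on port 9000.
        FileSystem fs = FileSystem.get(conf);
        System.out.println("Default filesystem URI: " + fs.getUri());

        // A path with no scheme is resolved against this NameNode.
        Path root = new Path("/");
        System.out.println("Root exists: " + fs.exists(root));
    }
}

You can also confirm what a node's configuration resolves to from the command line with hdfs getconf -confKey fs.defaultFS.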
