
I am installing a Hadoop cluster in distributed mode. I have downloaded Hadoop and extracted the compressed file, but while editing the configuration files I have some doubts. What does the fs.defaultFS property in the core-site.xml file do?

I have 3 nodes:

192.168.101.1 --> Master Machine (NameNode, SecondaryNameNode & ResourceManager daemons)

192.168.101.2 --> Slave1 (DataNode & NodeManager daemons)

192.168.101.3 --> Slave2 (DataNode & NodeManager daemons)

My configuration is as below:

<property>
   <name>fs.default.name</name>
   <value>hdfs://192.168.1.2:9000/</value>
</property>

Do we have to pass the address of the NameNode here?

1 Answer

The fs.default.name property in core-site.xml (deprecated in Hadoop 2.x in favour of fs.defaultFS, though both still work) specifies the address of the NameNode. It is the default filesystem URI: every HDFS command and client resolves paths against this address. So yes, it should point at the machine running the NameNode daemon, which in your setup is the master, 192.168.101.1, not 192.168.1.2.
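For example, a corrected core-site.xml entry for your cluster might look like the sketch below. This uses the current fs.defaultFS key and takes the NameNode address from your node list; adjust the port if you use something other than 9000:

```xml
<property>
   <name>fs.defaultFS</name>
   <value>hdfs://192.168.101.1:9000/</value>
</property>
```

This same URI must be used in core-site.xml on every node (master and slaves), so that the DataNodes know where to find the NameNode.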

9000 is the NameNode's RPC port: DataNodes register and send heartbeats to the NameNode over this port, and HDFS clients connect to it when reading or writing file metadata.
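As a quick sanity check of the file's structure, you can pull the configured URI out of a core-site.xml with a few lines of stdlib Python. This is just an illustrative sketch (the embedded sample mirrors a corrected config; in practice you would read the real file from your Hadoop conf directory):

```python
# Sketch: extract the fs.defaultFS value from a core-site.xml document,
# the way a small deployment-check script might.
import xml.etree.ElementTree as ET

# Sample config text; the IP/port follow the cluster described above.
SAMPLE = """<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.101.1:9000/</value>
  </property>
</configuration>
"""

def default_fs(xml_text):
    """Return the value of the fs.defaultFS property, or None if absent."""
    root = ET.fromstring(xml_text)
    for prop in root.findall("property"):
        if prop.findtext("name") == "fs.defaultFS":
            return prop.findtext("value")
    return None

print(default_fs(SAMPLE))  # hdfs://192.168.101.1:9000/
```

On a node with Hadoop installed, `hdfs getconf -confKey fs.defaultFS` reports the same value as resolved by Hadoop itself.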
...