+6 votes
3 views
in Big Data Hadoop & Spark by (190 points)
I was also wondering where I can keep hadoop.tmp.dir. I could not find a proper explanation for this. Kindly help.

2 Answers

+13 votes
by (13.2k points)

hadoop.tmp.dir is the base for other temporary directories. It is a property that you can set in core-site.xml, much like an export in Linux.
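
As a minimal sketch, setting it yourself in core-site.xml would look like the following (the path /data/hadoop-tmp is an assumption; the stock default is /tmp/hadoop-${user.name}):

<property>
  <name>hadoop.tmp.dir</name>
  <value>/data/hadoop-tmp</value> <!-- assumed path; pick any directory the Hadoop user can write to -->
</property>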

For an example of how another property references it -

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file://${hadoop.tmp.dir}/dfs/name</value>
</property>

In the snippet above you can see a property in hdfs-site.xml taking its default location from hadoop.tmp.dir.

For more details, you can check the full core-default.xml listing -

https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-common/core-default.xml

and the hdfs-default.xml listing -

https://hadoop.apache.org/docs/r2.7.2/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

HDFS mainly works in two modes:
1. distributed (multi-node cluster)
2. pseudo-distributed (a cluster of a single machine)
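
As a rough sketch of the pseudo-distributed case (hdfs://localhost:9000 is the conventional single-node address, not something stated above), core-site.xml points the filesystem at the local NameNode:

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://localhost:9000</value> <!-- single-machine NameNode -->
</property>

In this mode, unless you override the dfs.* directories listed next, all NameNode and DataNode files end up under hadoop.tmp.dir.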
 
The HDFS properties whose default values contain hadoop.tmp.dir are as follows (a sample override is sketched after this list):
1. dfs.name.dir: the directory where the namenode stores its metadata, with the default value ${hadoop.tmp.dir}/dfs/name.
2. dfs.data.dir: the directory where HDFS data blocks are stored, with the default value ${hadoop.tmp.dir}/dfs/data.
3. fs.checkpoint.dir: the directory where the secondary namenode stores its checkpoints, with the default value ${hadoop.tmp.dir}/dfs/namesecondary.
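
As a sketch of how you might move these directories off hadoop.tmp.dir (the /data/dfs/* paths are assumptions, and dfs.namenode.name.dir / dfs.datanode.data.dir are the newer Hadoop 2.x names for dfs.name.dir / dfs.data.dir), hdfs-site.xml could contain:

<!-- keep namenode metadata and data blocks outside hadoop.tmp.dir -->
<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:///data/dfs/name</value> <!-- assumed path for namenode metadata -->
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:///data/dfs/data</value> <!-- assumed path for HDFS blocks -->
</property>

This matters because the stock hadoop.tmp.dir sits under /tmp, which many systems clear on reboot.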

0 votes
by (108k points)

hadoop.tmp.dir is the base location under which HDFS stores its data by default. For more information, refer to the following link:
