
When trying to format the namenode, it formats the temporary one

Explorer

I need to format the namenode, so I use the command below:

./bin/hadoop namenode -format
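(Side note: on Hadoop 2.x the hadoop script form is deprecated for HDFS commands; the equivalent invocation via the hdfs script would be:)

./bin/hdfs namenode -format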

When I run the above command, it prints the message below, but my namenode is located in a different directory. Why is it formatting a namenode under the temp folder?

17/02/18 10:17:17 INFO common.Storage: Storage directory /tmp/hadoop-aruna/dfs/name has been successfully formatted.

Below is my .bashrc configuration:

# Set Hadoop-related environment variables
export HADOOP_HOME=/home/aruna/hadoop-2.7.3
export HADOOP_CONF_DIR=/home/aruna/hadoop-2.7.3/etc/hadoop
export HADOOP_MAPRED_HOME=/home/aruna/hadoop-2.7.3
export HADOOP_COMMON_HOME=/home/aruna/hadoop-2.7.3
export HADOOP_HDFS_HOME=/home/aruna/hadoop-2.7.3
export YARN_HOME=/home/aruna/hadoop-2.7.3
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"

# Set Java Home
export JAVA_HOME=/usr/lib/jvm/java-7-oracle
export PATH=$PATH:/usr/lib/jvm/java-7-oracle/bin

# Set Hadoop bin directory PATH
export PATH=$PATH:/home/aruna/hadoop-2.7.3/bin
export HADOOP_PID_DIR=/home/aruna/hadoop-2.7.3/hadoop2_data/hdfs/pid
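(To double-check which configuration directory these variables point Hadoop at, you can list the config files it will read; both commands below are plain shell:)

echo $HADOOP_CONF_DIR
ls $HADOOP_CONF_DIR/core-site.xml $HADOOP_CONF_DIR/hdfs-site.xml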

Normally the namenode should be in the path below:

/home/aruna/hadoop-2.7.3/hadoop2_data/hdfs/namenode
1 ACCEPTED SOLUTION

Master Mentor

@Aruna Sameera

The "hadoop namenode -format" command formats the specified NameNode. It starts the NameNode, formats it and then shut it down.

dfs.namenode.name.dir: Determines where on the local filesystem the DFS name node should store the name table (fsimage). If this is a comma-delimited list of directories, then the name table is replicated in all of the directories, for redundancy. The default value is file://${hadoop.tmp.dir}/dfs/name.

dfs.datanode.data.dir (default: file://${hadoop.tmp.dir}/dfs/data): Determines where on the local filesystem a DFS data node should store its blocks.

hadoop.tmp.dir (default: /tmp/hadoop-${user.name}): A base for other temporary directories. Since you have not overridden dfs.namenode.name.dir, it resolves to ${hadoop.tmp.dir}/dfs/name, which for user aruna is exactly the /tmp/hadoop-aruna/dfs/name path shown in your log.


Please check the values of the above parameters in your core-site.xml and hdfs-site.xml; you can also print the effective values as shown below.
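(For example, hdfs getconf prints the value the configuration actually resolves, defaults included:)

./bin/hdfs getconf -confKey dfs.namenode.name.dir
./bin/hdfs getconf -confKey hadoop.tmp.dir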


Ref:

https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/core-default.xml

The default values are defined in hdfs-default.xml:

https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-...

        <property>
          <name>dfs.namenode.name.dir</name>
          <value>file://${hadoop.tmp.dir}/dfs/name</value>
          <description>Determines where on the local filesystem the DFS name node
              should store the name table(fsimage).  If this is a comma-delimited list
              of directories then the name table is replicated in all of the
              directories, for redundancy. </description>
        </property>
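(To point the NameNode at the directory you expect instead of the /tmp default, a minimal hdfs-site.xml override would look like the sketch below; the path is the one quoted in the question and is assumed to already exist:)

        <property>
          <name>dfs.namenode.name.dir</name>
          <value>file:///home/aruna/hadoop-2.7.3/hadoop2_data/hdfs/namenode</value>
        </property>

Note the three slashes in file:///: an empty authority followed by the absolute local path.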



10 REPLIES

Contributor

@Aruna,

Delete all files and directories in /home/aruna/hadoop-2.7.3/hadoop2_data/hdfs/datanode and restart the cluster.

If you are still having the problem, please follow these steps (a combined command sketch follows the list):

1. Create two directories, name and data, under hadoop-2.7.3.

2. Add the following properties to hdfs-site.xml:

<!-- file:/// needs three slashes: empty authority, then the absolute path -->
<property>
	<name>dfs.namenode.name.dir</name>
	<value>file:///home/aruna/hadoop-2.7.3/name</value>
</property>

<property>
	<name>dfs.datanode.data.dir</name>
	<value>file:///home/aruna/hadoop-2.7.3/data</value>
</property>

3. Delete all log files in hadoop-2.7.3/logs.

4. Format the namenode.

5. Start the cluster.
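(Putting the steps together, and assuming HADOOP_HOME=/home/aruna/hadoop-2.7.3 as in the .bashrc above, the commands would look roughly like this:)

cd /home/aruna/hadoop-2.7.3
mkdir -p name data                            # step 1: directories referenced in hdfs-site.xml
rm -rf logs/*                                 # step 3: clear old logs
./bin/hdfs namenode -format                   # step 4: format the namenode
./sbin/start-dfs.sh && ./sbin/start-yarn.sh   # step 5: start the cluster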

If you still have the problem, reply with the log files.

All the best