
When I try to format the NameNode, it formats the temporary one

Explorer

I need to format the NameNode, so I use the command below:

./bin/hadoop namenode -format

When I run the above command, it reports the directory below as formatted. But my NameNode is located in a different directory. Why is it formatting the NameNode under the temp folder?

17/02/18 10:17:17 INFO common.Storage: Storage directory /tmp/hadoop-aruna/dfs/name has been successfully formatted.

Below is my .bashrc configuration:

#Set Hadoop related environment variables
export HADOOP_HOME=/home/aruna/hadoop-2.7.3
export HADOOP_CONF_DIR=/home/aruna/hadoop-2.7.3/etc/hadoop
export HADOOP_MAPRED_HOME=/home/aruna/hadoop-2.7.3
export HADOOP_COMMON_HOME=/home/aruna/hadoop-2.7.3
export HADOOP_HDFS_HOME=/home/aruna/hadoop-2.7.3
export YARN_HOME=/home/aruna/hadoop-2.7.3
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"

#Set Java Home
export JAVA_HOME=/usr/lib/jvm/java-7-oracle
export PATH=$PATH:/usr/lib/jvm/java-7-oracle/bin

#Set Hadoop bin directory PATH
export PATH=$PATH:/home/aruna/hadoop-2.7.3/bin
export HADOOP_PID_DIR=/home/aruna/hadoop-2.7.3/hadoop2_data/hdfs/pid

Normally the NameNode directory should be the path below:

/home/aruna/hadoop-2.7.3/hadoop2_data/hdfs/namenode
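
(For reference, a quick way to see which directory Hadoop will actually format is to query the effective configuration. This is a minimal sketch, assuming the Hadoop 2.7.3 install path from the .bashrc above.)

cd /home/aruna/hadoop-2.7.3
# Print the effective values as Hadoop resolves them; if nothing is set in
# hdfs-site.xml, dfs.namenode.name.dir falls back to file://${hadoop.tmp.dir}/dfs/name
bin/hdfs getconf -confKey dfs.namenode.name.dir
bin/hdfs getconf -confKey hadoop.tmp.dir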
1 ACCEPTED SOLUTION

Master Mentor

@Aruna Sameera

The "hadoop namenode -format" command formats the specified NameNode. It starts the NameNode, formats it and then shut it down.

dfs.namenode.name.dir: Determines where on the local filesystem the DFS NameNode should store the name table (fsimage). If this is a comma-delimited list of directories, the name table is replicated in all of the directories for redundancy. The default value is file://${hadoop.tmp.dir}/dfs/name.

dfs.datanode.data.dir: (default: file://${hadoop.tmp.dir}/dfs/data) - Determines where on the local filesystem a DFS DataNode should store its blocks.

hadoop.tmp.dir: (default: "/tmp/hadoop-${user.name}") - A base for other temporary directories.


Please check the values of the above parameters defined in your core-site.xml / hdfs-site.xml.


Ref:

https://hadoop.apache.org/docs/r2.7.3/hadoop-project-dist/hadoop-common/core-default.xml

The default values are defined as shown in this link:

https://github.com/apache/hadoop/blob/trunk/hadoop-hdfs-project/hadoop-hdfs/src/main/resources/hdfs-...

        <property>
          <name>dfs.namenode.name.dir</name>
          <value>file://${hadoop.tmp.dir}/dfs/name</value>
          <description>Determines where on the local filesystem the DFS name node
              should store the name table(fsimage).  If this is a comma-delimited list
              of directories then the name table is replicated in all of the
              directories, for redundancy. </description>
        </property>
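
To make the format target the intended directory instead of /tmp, the properties can be set explicitly in hdfs-site.xml. A minimal sketch, assuming the hadoop2_data paths mentioned in this thread:

        <property>
          <name>dfs.namenode.name.dir</name>
          <value>file:///home/aruna/hadoop-2.7.3/hadoop2_data/hdfs/namenode</value>
        </property>
        <property>
          <name>dfs.datanode.data.dir</name>
          <value>file:///home/aruna/hadoop-2.7.3/hadoop2_data/hdfs/datanode</value>
        </property>

With these set, "hadoop namenode -format" no longer falls back to ${hadoop.tmp.dir}. Alternatively, hadoop.tmp.dir itself can be pointed away from /tmp in core-site.xml.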


10 REPLIES


Explorer

Anyway, I followed all the steps in the tutorial below.

https://www.youtube.com/watch?v=l1QmEPEAems

Then, when I run the sudo jps command, it shows the result below.

aruna@aruna:~/hadoop-2.7.3$ sudo jps
[sudo] password for aruna: 
16241 ResourceManager
19010 Jps
16486 NodeManager
16071 SecondaryNameNode
15735 NameNode
16697 JobHistoryServer

Only the DataNode is missing. How can I start the DataNode?

Master Mentor

@Aruna Sameera

In order to start/stop the DataNode separately, you can try the "sbin/start-dfs.sh" and "sbin/stop-dfs.sh" scripts.

If the DataNode still does not start, check the DataNode log.
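
For example (a sketch, assuming the install path and log file name used elsewhere in this thread):

cd /home/aruna/hadoop-2.7.3
# Start only the DataNode daemon, then inspect its log for errors
sbin/hadoop-daemon.sh start datanode
tail -n 50 logs/hadoop-aruna-datanode-aruna.log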

Explorer

I tried, Jay, but it seems it's not starting the DataNode:

aruna@aruna:~/hadoop-2.7.3/sbin$ ./start-dfs.sh
Starting namenodes on [localhost]
aruna@localhost's password: 
localhost: namenode running as process 15735. Stop it first.
aruna@localhost's password: 
localhost: starting datanode, logging to /home/aruna/hadoop-2.7.3/logs/hadoop-aruna-datanode-aruna.out
Starting secondary namenodes [0.0.0.0]
aruna@0.0.0.0's password: 
0.0.0.0: secondarynamenode running as process 16071. Stop it first.
aruna@aruna:~/hadoop-2.7.3/sbin$ sudo jps
16241 ResourceManager
16486 NodeManager
16071 SecondaryNameNode
15735 NameNode
16697 JobHistoryServer
20620 Jps
aruna@aruna:~/hadoop-2.7.3/sbin$ 

Master Mentor

@Aruna Sameera

Try using: "stop-all.sh" first to stop all processes.

Keep checking the logs (especially the DataNode log/out files).

Then try starting them using "start-all.sh". If the DataNode still does not start, please share its .out file.

Also try the following commands to start the daemons individually:

hadoop-daemon.sh start namenode
hadoop-daemon.sh start datanode


Explorer

1) Stopping all

aruna@aruna:~/hadoop-2.7.3/sbin$ ./stop-all.sh
This script is Deprecated. Instead use stop-dfs.sh and stop-yarn.sh
Stopping namenodes on [localhost]
aruna@localhost's password: 
localhost: stopping namenode
aruna@localhost's password: 
localhost: no datanode to stop
Stopping secondary namenodes [0.0.0.0]
aruna@0.0.0.0's password: 
0.0.0.0: stopping secondarynamenode
stopping yarn daemons
stopping resourcemanager
aruna@localhost's password: 
localhost: stopping nodemanager
no proxyserver to stop

2) Start all

aruna@aruna:~/hadoop-2.7.3/sbin$ ./start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [localhost]
aruna@localhost's password: 
localhost: starting namenode, logging to /home/aruna/hadoop-2.7.3/logs/hadoop-aruna-namenode-aruna.out
aruna@localhost's password: 
localhost: starting datanode, logging to /home/aruna/hadoop-2.7.3/logs/hadoop-aruna-datanode-aruna.out
Starting secondary namenodes [0.0.0.0]
aruna@0.0.0.0's password: 
0.0.0.0: starting secondarynamenode, logging to /home/aruna/hadoop-2.7.3/logs/hadoop-aruna-secondarynamenode-aruna.out
starting yarn daemons
starting resourcemanager, logging to /home/aruna/hadoop-2.7.3/logs/yarn-aruna-resourcemanager-aruna.out
aruna@localhost's password: 
localhost: starting nodemanager, logging to /home/aruna/hadoop-2.7.3/logs/yarn-aruna-nodemanager-aruna.out


3) Check the status. The NameNode is missing now, even though the log above shows it starting.

aruna@aruna:~/hadoop-2.7.3/sbin$ sudo jps
[sudo] password for aruna: 
22097 ResourceManager
22404 NodeManager
21751 DataNode
16697 JobHistoryServer
21934 SecondaryNameNode
22542 Jps

Explorer

I stopped all the daemons and tried the format again, as below:

aruna@aruna:~/hadoop-2.7.3$ bin/hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

17/02/18 12:46:37 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = aruna/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.3
STARTUP_MSG:   classpath = /home/aruna/hadoop-2.7.3/etc/hadoop:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/commons-io-2.4.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/guava-11.0.2.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.j
ar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/xmlenc-0.52.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/home/aruna/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/li
b/jackson-core-asl-1.9.13.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/l
ib/jersey-guice-1.9.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/home/aruna/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/home/aruna/hadoop-2.7.3/contrib/capacity-scheduler/*.jar:/home/aruna/hadoop-2.7.3/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG:   java = 1.7.0_80
************************************************************/
17/02/18 12:46:37 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
17/02/18 12:46:37 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-978c9d9b-0b1b-45ff-9fe5-3d45b68ca0ca
17/02/18 12:46:39 INFO namenode.FSNamesystem: No KeyProvider found.
17/02/18 12:46:39 INFO namenode.FSNamesystem: fsLock is fair:true
17/02/18 12:46:39 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
17/02/18 12:46:39 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
17/02/18 12:46:39 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
17/02/18 12:46:39 INFO blockmanagement.BlockManager: The block deletion will start around 2017 Feb 18 12:46:39
17/02/18 12:46:39 INFO util.GSet: Computing capacity for map BlocksMap
17/02/18 12:46:39 INFO util.GSet: VM type       = 64-bit
17/02/18 12:46:39 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
17/02/18 12:46:39 INFO util.GSet: capacity      = 2^21 = 2097152 entries
17/02/18 12:46:39 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
17/02/18 12:46:39 INFO blockmanagement.BlockManager: defaultReplication         = 1
17/02/18 12:46:39 INFO blockmanagement.BlockManager: maxReplication             = 512
17/02/18 12:46:39 INFO blockmanagement.BlockManager: minReplication             = 1
17/02/18 12:46:39 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
17/02/18 12:46:39 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
17/02/18 12:46:39 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
17/02/18 12:46:39 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
17/02/18 12:46:39 INFO namenode.FSNamesystem: fsOwner             = aruna (auth:SIMPLE)
17/02/18 12:46:39 INFO namenode.FSNamesystem: supergroup          = supergroup
17/02/18 12:46:39 INFO namenode.FSNamesystem: isPermissionEnabled = true
17/02/18 12:46:39 INFO namenode.FSNamesystem: HA Enabled: false
17/02/18 12:46:39 INFO namenode.FSNamesystem: Append Enabled: true
17/02/18 12:46:39 INFO util.GSet: Computing capacity for map INodeMap
17/02/18 12:46:39 INFO util.GSet: VM type       = 64-bit
17/02/18 12:46:39 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
17/02/18 12:46:39 INFO util.GSet: capacity      = 2^20 = 1048576 entries
17/02/18 12:46:39 INFO namenode.FSDirectory: ACLs enabled? false
17/02/18 12:46:39 INFO namenode.FSDirectory: XAttrs enabled? true
17/02/18 12:46:39 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
17/02/18 12:46:39 INFO namenode.NameNode: Caching file names occuring more than 10 times
17/02/18 12:46:39 INFO util.GSet: Computing capacity for map cachedBlocks
17/02/18 12:46:39 INFO util.GSet: VM type       = 64-bit
17/02/18 12:46:39 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
17/02/18 12:46:39 INFO util.GSet: capacity      = 2^18 = 262144 entries
17/02/18 12:46:39 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
17/02/18 12:46:39 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
17/02/18 12:46:39 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
17/02/18 12:46:39 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
17/02/18 12:46:39 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
17/02/18 12:46:39 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
17/02/18 12:46:39 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
17/02/18 12:46:39 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
17/02/18 12:46:39 INFO util.GSet: Computing capacity for map NameNodeRetryCache
17/02/18 12:46:39 INFO util.GSet: VM type       = 64-bit
17/02/18 12:46:39 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
17/02/18 12:46:39 INFO util.GSet: capacity      = 2^15 = 32768 entries
17/02/18 12:46:39 INFO namenode.FSImage: Allocated new BlockPoolId: BP-316516397-127.0.1.1-1487393199699
17/02/18 12:46:39 INFO common.Storage: Storage directory /tmp/hadoop-aruna/dfs/name has been successfully formatted.
17/02/18 12:46:39 INFO namenode.FSImageFormatProtobuf: Saving image file /tmp/hadoop-aruna/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
17/02/18 12:46:40 INFO namenode.FSImageFormatProtobuf: Image file /tmp/hadoop-aruna/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 352 bytes saved in 0 seconds.
17/02/18 12:46:40 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
17/02/18 12:46:40 INFO util.ExitUtil: Exiting with status 0
17/02/18 12:46:40 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at aruna/127.0.1.1
************************************************************/

As shown above, it formats the folder below:

17/02/18 12:46:39 INFO common.Storage: Storage directory /tmp/hadoop-aruna/dfs/name has been successfully formatted.

Why is it formatting the temp folder? I also tried deleting the temp folder contents, but it's still not resolved.

Explorer

This is the list of log files with sizes.

aruna@aruna:~/hadoop-2.7.3/logs$ ls -l
total 9800
-rw-rw-r-- 1 aruna aruna 5888083 Feb 18 13:16 hadoop-aruna-datanode-aruna.log
-rw-rw-r-- 1 aruna aruna     717 Feb 18 13:16 hadoop-aruna-datanode-aruna.out
-rw-rw-r-- 1 aruna aruna     717 Feb 18 13:15 hadoop-aruna-datanode-aruna.out.1
-rw-rw-r-- 1 aruna aruna     717 Feb 18 13:13 hadoop-aruna-datanode-aruna.out.2
-rw-rw-r-- 1 aruna aruna     717 Feb 18 13:12 hadoop-aruna-datanode-aruna.out.3
-rw-rw-r-- 1 aruna aruna     717 Feb 18 13:11 hadoop-aruna-datanode-aruna.out.4
-rw-rw-r-- 1 aruna aruna     717 Feb 18 13:10 hadoop-aruna-datanode-aruna.out.5
-rw-rw-r-- 1 aruna aruna  321675 Feb 18 13:23 hadoop-aruna-namenode-aruna.log
-rw-rw-r-- 1 aruna aruna     717 Feb 18 13:13 hadoop-aruna-namenode-aruna.out
-rw-rw-r-- 1 aruna aruna     717 Feb 18 13:06 hadoop-aruna-namenode-aruna.out.1
-rw-rw-r-- 1 aruna aruna     717 Feb 18 12:42 hadoop-aruna-namenode-aruna.out.2
-rw-rw-r-- 1 aruna aruna     717 Feb 18 12:41 hadoop-aruna-namenode-aruna.out.3
-rw-rw-r-- 1 aruna aruna     717 Feb 18 12:21 hadoop-aruna-namenode-aruna.out.4
-rw-rw-r-- 1 aruna aruna     717 Feb 18 09:47 hadoop-aruna-namenode-aruna.out.5
-rw-rw-r-- 1 aruna aruna 2886271 Feb 18 13:23 hadoop-aruna-secondarynamenode-aruna.log
-rw-rw-r-- 1 aruna aruna     717 Feb 18 13:16 hadoop-aruna-secondarynamenode-aruna.out
-rw-rw-r-- 1 aruna aruna     717 Feb 18 13:13 hadoop-aruna-secondarynamenode-aruna.out.1
-rw-rw-r-- 1 aruna aruna     717 Feb 18 13:11 hadoop-aruna-secondarynamenode-aruna.out.2
-rw-rw-r-- 1 aruna aruna     717 Feb 18 13:08 hadoop-aruna-secondarynamenode-aruna.out.3
-rw-rw-r-- 1 aruna aruna     717 Feb 18 13:06 hadoop-aruna-secondarynamenode-aruna.out.4
-rw-rw-r-- 1 aruna aruna     717 Feb 18 12:42 hadoop-aruna-secondarynamenode-aruna.out.5
-rw-rw-r-- 1 aruna aruna  367075 Feb 18 13:22 mapred-aruna-historyserver-aruna.log
-rw-rw-r-- 1 aruna aruna       0 Feb 18 13:15 mapred-aruna-historyserver-aruna.out
-rw-rw-r-- 1 aruna aruna       0 Feb 18 13:09 mapred-aruna-historyserver-aruna.out.1
-rw-rw-r-- 1 aruna aruna    1477 Feb 18 09:49 mapred-aruna-historyserver-aruna.out.2
-rw-rw-r-- 1 aruna aruna    1477 Feb 18 01:40 mapred-aruna-historyserver-aruna.out.3
-rw-rw-r-- 1 aruna aruna       0 Feb 17 17:01 SecurityAuth-aruna.audit
drwxr-xr-x 2 aruna aruna    4096 Feb 18 13:22 userlogs
-rw-rw-r-- 1 aruna aruna  178740 Feb 18 13:14 yarn-aruna-nodemanager-aruna.log
-rw-rw-r-- 1 aruna aruna    1508 Feb 18 13:14 yarn-aruna-nodemanager-aruna.out
-rw-rw-r-- 1 aruna aruna    1508 Feb 18 13:07 yarn-aruna-nodemanager-aruna.out.1
-rw-rw-r-- 1 aruna aruna    1515 Feb 18 12:42 yarn-aruna-nodemanager-aruna.out.2
-rw-rw-r-- 1 aruna aruna    1515 Feb 18 12:22 yarn-aruna-nodemanager-aruna.out.3
-rw-rw-r-- 1 aruna aruna    1508 Feb 18 09:49 yarn-aruna-nodemanager-aruna.out.4
-rw-rw-r-- 1 aruna aruna    1508 Feb 17 21:00 yarn-aruna-nodemanager-aruna.out.5
-rw-rw-r-- 1 aruna aruna  226371 Feb 18 13:14 yarn-aruna-resourcemanager-aruna.log
-rw-rw-r-- 1 aruna aruna    1524 Feb 18 13:14 yarn-aruna-resourcemanager-aruna.out
-rw-rw-r-- 1 aruna aruna    1524 Feb 18 13:07 yarn-aruna-resourcemanager-aruna.out.1
-rw-rw-r-- 1 aruna aruna    1531 Feb 18 12:42 yarn-aruna-resourcemanager-aruna.out.2
-rw-rw-r-- 1 aruna aruna    1531 Feb 18 12:22 yarn-aruna-resourcemanager-aruna.out.3
-rw-rw-r-- 1 aruna aruna    1524 Feb 18 09:49 yarn-aruna-resourcemanager-aruna.out.4
-rw-rw-r-- 1 aruna aruna    1524 Feb 17 21:00 yarn-aruna-resourcemanager-aruna.out.5

This is the exception I found when I tailed the hadoop-aruna-datanode-aruna.log file:

java.io.IOException: Incompatible clusterIDs in /home/aruna/hadoop-2.7.3/hadoop2_data/hdfs/datanode: namenode clusterID = CID-f597ff66-0d6a-4394-b038-02a4b51aa5be; datanode clusterID = CID-f4ecb1e1-ba90-4b03-a030-b8ef4e6b698f
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:775)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:300)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:416)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
    at java.lang.Thread.run(Thread.java:745)
2017-02-18 13:16:02,585 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000. Exiting. 
java.io.IOException: All specified directories are failed to load.
    at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:574)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1362)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1327)
    at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:317)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:223)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:802)
    at java.lang.Thread.run(Thread.java:745)
2017-02-18 13:16:02,586 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
2017-02-18 13:16:02,687 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
2017-02-18 13:16:04,687 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2017-02-18 13:16:04,690 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2017-02-18 13:16:04,692 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at aruna/127.0.1.1
************************************************************/

So what can I do about this "Incompatible clusterIDs" issue? If I reformat and start from the beginning, will this issue be resolved?

Master Mentor

@Aruna Sameera

Regarding your latest error:

 Incompatible clusterIDs in /home/aruna/hadoop-2.7.3/hadoop2_data/hdfs/datanode: namenode clusterID = CID-f597ff66-0d6a-4394-b038-02a4b51aa5be; datanode clusterID = CID-f4ecb1e1-ba90-4b03-a030-b8ef4e6b698f


It looks like your VERSION files have different clusterIDs on the NameNode and DataNode; they need to match. So please check:

cat <dfs.namenode.name.dir>/current/VERSION
cat <dfs.datanode.data.dir>/current/VERSION 

Hence, copy the clusterID from the NameNode's VERSION file into the DataNode's VERSION file and then try again, as in the sketch below.
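
A minimal sketch of that fix, assuming the hadoop2_data directories mentioned in this thread (stop HDFS before editing):

# Paths are assumptions based on the directories mentioned in this thread
NN_DIR=/home/aruna/hadoop-2.7.3/hadoop2_data/hdfs/namenode
DN_DIR=/home/aruna/hadoop-2.7.3/hadoop2_data/hdfs/datanode
# Read the authoritative clusterID from the NameNode's VERSION file
NN_CID=$(grep '^clusterID=' "$NN_DIR/current/VERSION" | cut -d= -f2)
# Write the same clusterID into the DataNode's VERSION file
sed -i "s/^clusterID=.*/clusterID=$NN_CID/" "$DN_DIR/current/VERSION"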

https://community.hortonworks.com/questions/79432/datanode-goes-dows-after-few-secs-of-starting-1.ht...