
How to stop the scheduled NameNode block deletion

Rising Star

Hi All,

I ran the hdfs namenode -metadataVersion command, which gave me an alarming output stating that it would delete blocks, as shown in the output below.

I ran it from the standby NameNode, and it scheduled a block deletion for one hour later. I am not sure whether it will delete the actual HDFS metadata and cause a complete data loss, or whether it will only compare against the active NameNode and delete unwanted blocks on the standby NameNode. Before that I ran hdfs dfsadmin -report and there were no corrupt blocks. Ten minutes before the scheduled time I stopped all services on the primary and standby NameNodes and all DataNode services on the DataNodes. Kindly help, it is urgent.

1. If I start the NameNode, will it still run that scheduled block deletion or not?

2. Or, before starting the service, how can I check and disable that schedule? (Some relevant check commands are sketched below.)
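
For reference, a minimal sketch of the kind of pre-restart checks meant in question 2, assuming the commands are run as the hdfs user on a NameNode host; these only read cluster state and the configured delay value, they do not change anything:

# Show the configured startup block-deletion delay in seconds, if it is set in hdfs-site.xml
hdfs getconf -confKey dfs.namenode.startup.delay.block.deletion.sec

# Confirm there are no missing, corrupt, or under-replicated blocks before restarting
hdfs dfsadmin -report | grep -i -E 'missing|corrupt|under replicated'
hdfs fsck / -list-corruptfileblocks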

The output is attached below.

NN2 - active, NN1 - Standby

$ hdfs namenode -metadataVersion
18/08/17 12:27:49 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = NN1/x.x.x.x
STARTUP_MSG:   args = [-metadataVersion]
STARTUP_MSG:   version = 2.7.1.2.4.0.0-169
STARTUP_MSG:   classpath = /usr/hdp/2.4.0.0-169/hadoop/conf:/usr/hdp/2.4.0.0-169/hadoop/lib/jersey-json-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/spark-yarn-shuffle.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ojdbc6.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ranger-hdfs-plugin-shim-0.5.0.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ranger-plugin-classloader-0.5.0.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ranger-yarn-plugin-shim-0.5.0.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/activation-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jets3t-0.9.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jettison-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/avro-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/aws-java-sdk-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/azure-storage-2.2.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/paranamer-2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-codec-1.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-collections-3.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-configuration-1.6.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/servlet-api-2.5.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-digester-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/slf4j-api-1.7.10.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-httpclient-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-net-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/stax-api-1.0-2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/curator-client-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/xmlenc-0.52.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/curator-framework-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/xz-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/curator-recipes-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/gson-2.2.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/guava-11.0.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/zookeeper-3.4.6.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/hamcrest-core-1.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/mockito-all-1.8.5.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/httpclient-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/httpcore-4.2.5.jar:/usr/hdp/2.4.0.0-169/had
oop/lib/jersey-core-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/postgresql.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-core-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jsch-0.1.42.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/junit-4.11.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-annotations-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-annotations.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-auth-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-auth.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-aws-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-aws.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-azure-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-azure.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-common-2.7.1.2.4.0.0-169-tests.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-common-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-common-tests.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-common.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-nfs-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-nfs.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/./:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/netty-all-4.0.23.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/okhttp-2.4.0.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/okio-1.4.0.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/xml-apis-1.3.04.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-2.7.1.2.4.0.0-169-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-nfs-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/activation-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jets3t-0.9.0.jar
:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jettison-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/avro-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/objenesis-2.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-collections-3.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/paranamer-2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-configuration-1.6.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-digester-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-httpclient-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-math3-3.1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-net-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/xmlenc-0.52.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/curator-client-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/xz-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/curator-framework-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/zookeeper-3.4.6.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/curator-recipes-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/fst-2.24.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/gson-2.2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/guava-11.0.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/guice-3.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/zookeeper-3.4.6.2.4.0.0-169-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/httpclient-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/httpcore-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/postgresql.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-core-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-xc-1.9.13.ja
r:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/javassist-3.18.1-GA.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/javax.inject-1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jsch-0.1.42.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jsp-api-2.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jsr305-3.0.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-api-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-client-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-common-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-registry-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-common-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-tests-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-timeline-plugins-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-timeline-plugins.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/guice-3.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jersey-core-1
.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/junit-4.11.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/xz-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//activation-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-sls-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-sls.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hamcrest-core-1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//avro-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//httpcore-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-collections-3.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-compress-1.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//httpclient-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jackson-core-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-lang3-3.3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jackson-jaxrs-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/hdp/2.4.0.0-169/
hadoop-mapreduce/.//jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//curator-client-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jackson-xc-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//curator-framework-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//mockito-all-1.8.5.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//curator-recipes-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jettison-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//gson-2.2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jetty-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-ant-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-ant.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-archives-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//joda-time-2.9.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-archives.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-auth-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jsch-0.1.42.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-auth.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-datajoin-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-distcp-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jsr305-3.0.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-openstack-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-extras-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//junit-4.11.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-extras.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//xz-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-gridmix-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-rumen-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-streaming-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.7.1.2.4.0.0-169-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop
-mapreduce-examples-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//zookeeper-3.4.6.2.4.0.0-169.jar
STARTUP_MSG:   build = git@github.com:hortonworks/hadoop.git -r 26104d8ac833884c8776473823007f176854f2eb; compiled by 'jenkins' on 2016-02-10T06:18Z
STARTUP_MSG:   java = 1.8.0_60
************************************************************/
18/08/17 12:27:49 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/08/17 12:27:49 INFO namenode.NameNode: createNameNode [-metadataVersion]
18/08/17 12:27:49 WARN common.Util: Path /data0/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data1/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data2/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data3/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data4/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data5/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data6/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data7/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data8/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data9/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data10/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data11/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data0/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data1/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data2/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data3/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data4/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data5/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data6/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data7/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data8/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data9/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data10/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data11/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Storage: set restore failed storage to true
18/08/17 12:27:50 INFO namenode.FSNamesystem: Found KeyProvider: KeyProviderCryptoExtension: KMSClientProvider[http://NN2:9292/kms/v1/]
18/08/17 12:27:50 INFO namenode.FSNamesystem: Enabling async auditlog
18/08/17 12:27:50 INFO namenode.FSNamesystem: fsLock is fair:false
18/08/17 12:27:50 INFO blockmanagement.HeartbeatManager: Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
18/08/17 12:27:50 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
18/08/17 12:27:50 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18/08/17 12:27:50 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:01:00:00.000
18/08/17 12:27:50 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Aug 17 13:27:50
18/08/17 12:27:50 INFO util.GSet: Computing capacity for map BlocksMap
18/08/17 12:27:50 INFO util.GSet: VM type       = 64-bit
18/08/17 12:27:50 INFO util.GSet: 2.0% max memory 5.9 GB = 121.3 MB
18/08/17 12:27:50 INFO util.GSet: capacity      = 2^24 = 16777216 entries
18/08/17 12:27:50 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=true
18/08/17 12:27:50 INFO blockmanagement.BlockManager: dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null
18/08/17 12:27:50 INFO blockmanagement.BlockManager: defaultReplication         = 3
18/08/17 12:27:50 INFO blockmanagement.BlockManager: maxReplication             = 50
18/08/17 12:27:50 INFO blockmanagement.BlockManager: minReplication             = 1
18/08/17 12:27:50 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
18/08/17 12:27:50 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
18/08/17 12:27:50 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
18/08/17 12:27:50 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
18/08/17 12:27:50 INFO namenode.FSNamesystem: fsOwner             = hdfs (auth:SIMPLE)
18/08/17 12:27:50 INFO namenode.FSNamesystem: supergroup          = hdfs
18/08/17 12:27:50 INFO namenode.FSNamesystem: isPermissionEnabled = true
18/08/17 12:27:50 INFO namenode.FSNamesystem: Determined nameservice ID: eimedlcluster1
18/08/17 12:27:50 INFO namenode.FSNamesystem: HA Enabled: true
18/08/17 12:27:50 INFO namenode.FSNamesystem: Append Enabled: true
18/08/17 12:27:50 INFO util.GSet: Computing capacity for map INodeMap
18/08/17 12:27:50 INFO util.GSet: VM type       = 64-bit
18/08/17 12:27:50 INFO util.GSet: 1.0% max memory 5.9 GB = 60.7 MB
18/08/17 12:27:50 INFO util.GSet: capacity      = 2^23 = 8388608 entries
18/08/17 12:27:50 INFO namenode.FSDirectory: ACLs enabled? true
18/08/17 12:27:50 INFO namenode.FSDirectory: XAttrs enabled? true
18/08/17 12:27:50 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
18/08/17 12:27:50 INFO namenode.NameNode: Caching file names occuring more than 10 times
18/08/17 12:27:50 INFO util.GSet: Computing capacity for map cachedBlocks
18/08/17 12:27:50 INFO util.GSet: VM type       = 64-bit
18/08/17 12:27:50 INFO util.GSet: 0.25% max memory 5.9 GB = 15.2 MB
18/08/17 12:27:50 INFO util.GSet: capacity      = 2^21 = 2097152 entries
18/08/17 12:27:50 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9900000095367432
18/08/17 12:27:50 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
18/08/17 12:27:50 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
18/08/17 12:27:50 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
18/08/17 12:27:50 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
18/08/17 12:27:50 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18/08/17 12:27:50 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/08/17 12:27:50 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/08/17 12:27:50 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/08/17 12:27:50 INFO util.GSet: VM type       = 64-bit
18/08/17 12:27:50 INFO util.GSet: 0.029999999329447746% max memory 5.9 GB = 1.8 MB
18/08/17 12:27:50 INFO util.GSet: capacity      = 2^18 = 262144 entries
HDFS Image Version: -63
Software format version: -63
18/08/17 12:27:50 INFO util.ExitUtil: Exiting with status 0
18/08/17 12:27:50 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at NN1/x.x.x.x
************************************************************/
2 REPLIES

Expert Contributor (Accepted Solution)
@Muthukumar S

This is a normal log message whenever the BlockManager starts up; you can see the same line in the NameNode logs. Invalid blocks, such as over-replicated blocks, will be deleted one hour after the NameNode starts, if any exist. There is no need to worry about any data loss here. Start your HDFS service as usual.
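
If you want extra reassurance after starting HDFS, one way to watch what the BlockManager is actually queueing is to check the NameNode's FSNamesystem JMX counters together with fsck. A sketch, assuming the default NameNode HTTP port 50070 without HTTPS, and with <active-nn-host> as a placeholder for your active NameNode hostname:

# Overall filesystem health summary
hdfs fsck / | tail -n 20

# PendingDeletionBlocks and ExcessBlocks show what is queued for removal once the
# startup delay expires; MissingBlocks and CorruptBlocks should stay at 0
curl -s 'http://<active-nn-host>:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystem' \
  | grep -E 'PendingDeletionBlocks|ExcessBlocks|MissingBlocks|CorruptBlocks'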

Rising Star

@Tarun Parimi

Hi Tarun, thanks for the reply. It helped and gave us confidence. We simulated the same scenario in development, and after confirming the behaviour we went ahead in production.

I was surprised to see the NameNode service on the standby NN getting restarted when we ran this metadataVersion command. I feel it should be just an informational command, yet its output looks very similar to that of the namenode -format command, even though the argument is metadataVersion. Hortonworks should improve the output descriptions instead of a general message like "the block deletion will start around ..." (at least it could state that only invalid or unwanted blocks will be deleted). I am also unsure why it goes through NameNode startup, which in turn reads the dfs.namenode.startup.delay.block.deletion.sec parameter and prints that information.
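
As a side note, the same scheduling message is written by the BlockManager at every NameNode start, so a quick way to see when the running standby actually armed its own deferred-deletion timer is to grep its log; the path below is the usual HDP default and may differ on your cluster:

# Find the scheduling message in the standby NameNode's own log
grep -i "block deletion will start" /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log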