Member since: 06-07-2016
Posts: 81
Kudos Received: 3
Solutions: 5
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1399 | 02-21-2018 07:54 AM
 | 3572 | 02-21-2018 07:52 AM
 | 4587 | 02-14-2018 09:30 AM
 | 1939 | 10-13-2016 04:18 AM
 | 11770 | 10-11-2016 08:26 AM
08-20-2018
04:25 AM
@Tarun Parimi Hi Tarun, thanks for the reply. It helped and gave us confidence. We simulated the same scenario in development and, once it was confirmed, went ahead in production. I was surprised to see the NameNode service on the standby NN getting restarted when we ran this metadataVersion command. It turned out to be just an informational command, although its output looks very similar to that of the namenode format command even though the argument is metadataVersion. Hortonworks should have improved the output descriptions instead of a general message like "Block deletion will happen in 1 hr" (at least it could state that only corrupted block deletion happens). I am also unsure why it restarts the NameNode service, which in turn picks up the "dfs.namenode.startup.delay.block.deletion.sec" parameter and prints that information.
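For reference, that parameter can be checked without starting a NameNode process at all; a minimal read-only check, assuming the standard HDFS client configuration is present on the node (a one-hour delay, as in the metadataVersion output, corresponds to 3600):
# Print the configured block-deletion startup delay in seconds
$ hdfs getconf -confKey dfs.namenode.startup.delay.block.deletion.sec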
08-17-2018
06:13 AM
Hi All,
I ran the hdfs namenode -metadataVersion command and got a worrying output stating that it would delete blocks, as shown below. I ran it from the standby NameNode and it scheduled the deletion for one hour later. I am not sure whether it will delete actual HDFS metadata and cause a full data loss, or whether it only compares against the active NameNode and deletes unwanted blocks on the standby NameNode. Before that I ran hdfs dfsadmin -report and there were no corrupted blocks. Ten minutes before the scheduled time I stopped all primary and secondary cluster services and all DataNode services on the DataNodes. Kindly help, it is urgent.
1. If I start the NameNode, will it still run that scheduled block deletion or not?
2. Or, before starting the service, how can I check and disable that schedule?
The output is attached. NN2 - active, NN1 - standby.
$ hdfs namenode -metadataVersion
18/08/17 12:27:49 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = NN1/x.x.x.x
STARTUP_MSG: args = [-metadataVersion]
STARTUP_MSG: version = 2.7.1.2.4.0.0-169
STARTUP_MSG: classpath = /usr/hdp/2.4.0.0-169/hadoop/conf:/usr/hdp/2.4.0.0-169/hadoop/lib/jersey-json-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/spark-yarn-shuffle.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ojdbc6.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ranger-hdfs-plugin-shim-0.5.0.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ranger-plugin-classloader-0.5.0.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ranger-yarn-plugin-shim-0.5.0.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/activation-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jets3t-0.9.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jettison-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/avro-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/aws-java-sdk-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/azure-storage-2.2.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/paranamer-2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-codec-1.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-collections-3.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-configuration-1.6.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/servlet-api-2.5.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-digester-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/slf4j-api-1.7.10.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-httpclient-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-net-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/stax-api-1.0-2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/curator-client-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/xmlenc-0.52.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/curator-framework-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/xz-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/curator-recipes-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/gson-2.2.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/guava-11.0.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/zookeeper-3.4.6.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/hamcrest-core-1.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/mockito-all-1.8.5.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/httpclient-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/httpcore-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoo
p/lib/jersey-core-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/postgresql.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-core-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jsch-0.1.42.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/junit-4.11.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-annotations-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-annotations.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-auth-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-auth.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-aws-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-aws.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-azure-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-azure.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-common-2.7.1.2.4.0.0-169-tests.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-common-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-common-tests.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-common.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-nfs-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-nfs.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/./:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/netty-all-4.0.23.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/okhttp-2.4.0.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/okio-1.4.0.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/xml-apis-1.3.04.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-2.7.1.2.4.0.0-169-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-nfs-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/activation-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jets3t-0.9.0.jar:/
usr/hdp/2.4.0.0-169/hadoop-yarn/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jettison-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/avro-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/objenesis-2.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-collections-3.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/paranamer-2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-configuration-1.6.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-digester-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-httpclient-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-math3-3.1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-net-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/xmlenc-0.52.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/curator-client-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/xz-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/curator-framework-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/zookeeper-3.4.6.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/curator-recipes-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/fst-2.24.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/gson-2.2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/guava-11.0.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/guice-3.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/zookeeper-3.4.6.2.4.0.0-169-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/httpclient-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/httpcore-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/postgresql.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-core-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-xc-1.9.13.jar:
/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/javassist-3.18.1-GA.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/javax.inject-1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jsch-0.1.42.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jsp-api-2.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jsr305-3.0.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-api-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-client-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-common-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-registry-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-common-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-tests-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-timeline-plugins-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-timeline-plugins.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/guice-3.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jersey-core-1.9
.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/junit-4.11.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/xz-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//activation-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-sls-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-sls.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hamcrest-core-1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//avro-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//httpcore-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-collections-3.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-compress-1.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//httpclient-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jackson-core-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-lang3-3.3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jackson-jaxrs-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/hdp/2.4.0.0-169/ha
doop-mapreduce/.//jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//curator-client-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jackson-xc-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//curator-framework-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//mockito-all-1.8.5.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//curator-recipes-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jettison-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//gson-2.2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jetty-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-ant-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-ant.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-archives-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//joda-time-2.9.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-archives.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-auth-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jsch-0.1.42.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-auth.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-datajoin-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-distcp-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jsr305-3.0.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-openstack-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-extras-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//junit-4.11.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-extras.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//xz-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-gridmix-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-rumen-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-streaming-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.7.1.2.4.0.0-169-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-m
apreduce-examples-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//zookeeper-3.4.6.2.4.0.0-169.jar
STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r 26104d8ac833884c8776473823007f176854f2eb; compiled by 'jenkins' on 2016-02-10T06:18Z
STARTUP_MSG: java = 1.8.0_60
************************************************************/
18/08/17 12:27:49 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/08/17 12:27:49 INFO namenode.NameNode: createNameNode [-metadataVersion]
18/08/17 12:27:49 WARN common.Util: Path /data0/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data1/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data2/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data3/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data4/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data5/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data6/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data7/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data8/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data9/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data10/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data11/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data0/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data1/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data2/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data3/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data4/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data5/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data6/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data7/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data8/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data9/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data10/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data11/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Storage: set restore failed storage to true
18/08/17 12:27:50 INFO namenode.FSNamesystem: Found KeyProvider: KeyProviderCryptoExtension: KMSClientProvider[http://NN2:9292/kms/v1/]
18/08/17 12:27:50 INFO namenode.FSNamesystem: Enabling async auditlog
18/08/17 12:27:50 INFO namenode.FSNamesystem: fsLock is fair:false
18/08/17 12:27:50 INFO blockmanagement.HeartbeatManager: Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
18/08/17 12:27:50 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
18/08/17 12:27:50 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18/08/17 12:27:50 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:01:00:00.000
18/08/17 12:27:50 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Aug 17 13:27:50
18/08/17 12:27:50 INFO util.GSet: Computing capacity for map BlocksMap
18/08/17 12:27:50 INFO util.GSet: VM type = 64-bit
18/08/17 12:27:50 INFO util.GSet: 2.0% max memory 5.9 GB = 121.3 MB
18/08/17 12:27:50 INFO util.GSet: capacity = 2^24 = 16777216 entries
18/08/17 12:27:50 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=true
18/08/17 12:27:50 INFO blockmanagement.BlockManager: dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null
18/08/17 12:27:50 INFO blockmanagement.BlockManager: defaultReplication = 3
18/08/17 12:27:50 INFO blockmanagement.BlockManager: maxReplication = 50
18/08/17 12:27:50 INFO blockmanagement.BlockManager: minReplication = 1
18/08/17 12:27:50 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
18/08/17 12:27:50 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
18/08/17 12:27:50 INFO blockmanagement.BlockManager: encryptDataTransfer = false
18/08/17 12:27:50 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
18/08/17 12:27:50 INFO namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
18/08/17 12:27:50 INFO namenode.FSNamesystem: supergroup = hdfs
18/08/17 12:27:50 INFO namenode.FSNamesystem: isPermissionEnabled = true
18/08/17 12:27:50 INFO namenode.FSNamesystem: Determined nameservice ID: eimedlcluster1
18/08/17 12:27:50 INFO namenode.FSNamesystem: HA Enabled: true
18/08/17 12:27:50 INFO namenode.FSNamesystem: Append Enabled: true
18/08/17 12:27:50 INFO util.GSet: Computing capacity for map INodeMap
18/08/17 12:27:50 INFO util.GSet: VM type = 64-bit
18/08/17 12:27:50 INFO util.GSet: 1.0% max memory 5.9 GB = 60.7 MB
18/08/17 12:27:50 INFO util.GSet: capacity = 2^23 = 8388608 entries
18/08/17 12:27:50 INFO namenode.FSDirectory: ACLs enabled? true
18/08/17 12:27:50 INFO namenode.FSDirectory: XAttrs enabled? true
18/08/17 12:27:50 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
18/08/17 12:27:50 INFO namenode.NameNode: Caching file names occuring more than 10 times
18/08/17 12:27:50 INFO util.GSet: Computing capacity for map cachedBlocks
18/08/17 12:27:50 INFO util.GSet: VM type = 64-bit
18/08/17 12:27:50 INFO util.GSet: 0.25% max memory 5.9 GB = 15.2 MB
18/08/17 12:27:50 INFO util.GSet: capacity = 2^21 = 2097152 entries
18/08/17 12:27:50 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9900000095367432
18/08/17 12:27:50 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
18/08/17 12:27:50 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
18/08/17 12:27:50 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
18/08/17 12:27:50 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
18/08/17 12:27:50 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18/08/17 12:27:50 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/08/17 12:27:50 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/08/17 12:27:50 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/08/17 12:27:50 INFO util.GSet: VM type = 64-bit
18/08/17 12:27:50 INFO util.GSet: 0.029999999329447746% max memory 5.9 GB = 1.8 MB
18/08/17 12:27:50 INFO util.GSet: capacity = 2^18 = 262144 entries
HDFS Image Version: -63
Software format version: -63
18/08/17 12:27:50 INFO util.ExitUtil: Exiting with status 0
18/08/17 12:27:50 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at NN1/x.x.x.x
************************************************************/
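As a side note on the metadata version reported at the end of this output, the same information lives in the VERSION file under each configured name directory and can be read without launching a NameNode process; a minimal read-only check, using one of the name directories listed in the warnings above:
# Prints layoutVersion=-63 along with namespaceID, clusterID, etc.
$ cat /data0/hadoop/hdfs/namenode/current/VERSION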
Labels:
- Apache Hadoop
08-16-2018
03:58 AM
@Vinicius Higa Murakami Thank you for the reply. Please find the information on your queries below.
1. I tried the commands below from NN1 (the rebooted one):
hdfs dfs -ls hdfs://NN2/user/ --> able to get the output
hdfs dfs -ls hdfs://NN1/user/ --> ERROR: ls: Operation category READ is not supported in state standby (is this normal and expected?)
2. Yes, both dfs.nameservices in hdfs-site.xml and fs.defaultFS are fine.
I verified that fsimage checkpointing is happening on both NameNodes and the sizes and timestamps match. But edits files have been missing on NN1 (standby) since the time I copied the metadata files from NN2 and started the NameNode service, i.e. from 14th Aug 17:39 onwards. I will not be able to enable DEBUG logging because I cannot restart the HDFS services while jobs are running continuously; we can't afford downtime now. I am also not sure whether the NameNode services would come back up. Below is a snippet from both nodes with the number of files and sizes, plus the latest fsimage file.
NN1 (STANDBY)
$ ls -l fsi*
-rw-r--r--. 1 hdfs hadoop 616714799 Aug 16 01:44 fsimage_0000000000211062321
-rw-r--r--. 1 hdfs hadoop 62 Aug 16 01:44 fsimage_0000000000211062321.md5
-rw-r--r--. 1 hdfs hadoop 619959676 Aug 16 07:45 fsimage_0000000000211102880
-rw-r--r--. 1 hdfs hadoop 62 Aug 16 07:45 fsimage_0000000000211102880.md5
NN2 (ACTIVE)
$ ls -l fsi*
-rw-r--r--. 1 hdfs hadoop 616714799 Aug 16 01:44 fsimage_0000000000211062321
-rw-r--r--. 1 hdfs hadoop 62 Aug 16 01:45 fsimage_0000000000211062321.md5
-rw-r--r--. 1 hdfs hadoop 619959676 Aug 16 07:45 fsimage_0000000000211102880
-rw-r--r--. 1 hdfs hadoop 62 Aug 16 07:45 fsimage_0000000000211102880.md5
NN1 (STANDBY) - file counts and size per mount
Mount | File count | Size
---|---|---
/data0/hadoop/hdfs | 9064 | 1351
/data1/hadoop/hdfs | 9064 | 1351
/data2/hadoop/hdfs | 9064 | 1351
/data3/hadoop/hdfs | 9064 | 1351
/data4/hadoop/hdfs | 9064 | 1351
/data5/hadoop/hdfs | 9064 | 1351
/data6/hadoop/hdfs | 9064 | 1351
/data7/hadoop/hdfs | 9064 | 1351
/data8/hadoop/hdfs | 9064 | 1351
/data9/hadoop/hdfs | 9064 | 1351
/data10/hadoop/hdfs | 9064 | 1351
/data11/hadoop/hdfs | 9064 | 1351
NN2 (ACTIVE) - file counts and size per mount
Mount | File count | Size
---|---|---
/data0/hadoop/hdfs | 9504 | 1357
/data1/hadoop/hdfs | 9504 | 1356
/data2/hadoop/hdfs | 9504 | 1357
/data3/hadoop/hdfs | 9505 | 1357
/data4/hadoop/hdfs | 9505 | 1357
/data5/hadoop/hdfs | 9505 | 1357
/data6/hadoop/hdfs | 9505 | 1357
/data7/hadoop/hdfs | 9505 | 1357
/data8/hadoop/hdfs | 9505 | 1357
/data9/hadoop/hdfs | 9505 | 1357
/data10/hadoop/hdfs | 9505 | 1357
/data11/hadoop/hdfs | 9505 | 1357
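Regarding the standby READ error in point 1, the HA role of each NameNode can also be confirmed directly from the command line; a minimal check, where nn1 and nn2 are placeholders for whatever NameNode IDs are defined under dfs.ha.namenodes.eimedlcluster1 in this cluster:
# Read-only: report which NameNode is active and which is standby
$ hdfs haadmin -getServiceState nn1
$ hdfs haadmin -getServiceState nn2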
08-15-2018
07:23 AM
Hi All,
I have a cluster with NameNode HA on AWS instances (instance-store disks). Each NameNode has 12 mount points holding its metadata, and we have 4 DataNodes. My standby NameNode hung due to a hardware issue on the AWS side, so we had to stop and start the instance. As that was the only option, we did so and were able to bring up the other services on the standby NameNode, but not the NameNode service itself, because all 12 mounts had lost their metadata. What I did was tar the hadoop directory from each mount on the active (working) NameNode and restore it to the corresponding mounts on the standby NameNode. Now I am able to start the NameNode service and it automatically became the standby NameNode via ZKFC. But in the hadoop-hdfs-namenode-<hostname>.log file I am getting the error below. How can I fix it, and is there any harm from it? Will my active NameNode be able to fail over to this node successfully? Kindly help and give your suggestions to fix this.
NN1 - standby NameNode (the one that had the issue and had to be stopped and started)
NN2 - active
DN1 DN2 DN3 DN4
(I have removed the IPs and used the naming convention above in the log below.)
Error snippet below.
2018-08-15 15:04:12,909 INFO namenode.EditLogInputStream (RedundantEditLogInputStream.java:nextOp(176)) - Fast-forwarding stream 'http://NN1:8480/getJournal?jid=eimedlcluster1&segmentTxId=211034589&storageInfo=-63%3A1695052906%3A0%3ACID-ce4126e2-d1f2-4233-81ec-d267f195583f, http://NN1:8480/getJournal?jid=eimedlcluster1&segmentTxId=211034589&storageInfo=-63%3A1695052906%3A0%3ACID-ce4126e2-d1f2-4233-81ec-d267f195583f' to transaction ID 211034589
2018-08-15 15:04:12,909 INFO namenode.EditLogInputStream (RedundantEditLogInputStream.java:nextOp(176)) - Fast-forwarding stream 'http://NN1:8480/getJournal?jid=eimedlcluster1&segmentTxId=211034589&storageInfo=-63%3A1695052906%3A0%3ACID-ce4126e2-d1f2-4233-81ec-d267f195583f' to transaction ID 211034589
2018-08-15 15:04:12,926 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(145)) - Edits file http://NN1/getJournal?jid=eimedlcluster1&segmentTxId=211034589&storageInfo=-63%3A1695052906%3A0%3ACID-ce4126e2-d1f2-4233-81ec-d267f195583f, http://NN1:8480/getJournal?jid=eimedlcluster1&segmentTxId=211034589&storageInfo=-63%3A1695052906%3A0%3ACID-ce4126e2-d1f2-4233-81ec-d267f195583f of size 14288 edits # 104 loaded in 0 seconds
2018-08-15 15:04:14,335 INFO ha.EditLogTailer (EditLogTailer.java:doTailEdits(238)) - Loaded 104 edits starting from txid 211034588
2018-08-15 15:04:22,552 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:04:27,970 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:04:34,710 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 25 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN4:51488 Call#101504 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:04:34,711 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 77 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN3:54288 Call#98633 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:04:34,715 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 6 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN2:57618 Call#99810 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:04:34,716 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 35 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN1:59402 Call#100406 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:04:49,013 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:05,799 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 54 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN3:54318 Call#98649 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:05:05,807 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 56 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN2:57630 Call#99826 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:05:05,810 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 20 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN4:51498 Call#101519 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:05:05,816 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 43 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN1:59428 Call#100422 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:05:06,229 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:06,246 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:06,942 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:06,945 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:06,954 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:06,974 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:13,011 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:22,543 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:32,988 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:52,160 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 44 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN4:51528 Call#101534 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:05:52,186 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 27 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN2:57658 Call#99841 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:05:53,981 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:06:06,230 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:06:06,254 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:06:06,930 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:06:06,931 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:06:06,947 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:06:06,968 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:06:08,482 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 71 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN4:51528 Call#101549 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
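For context, the usual way to rebuild a standby NameNode's metadata, rather than tarring the name directories across from the active node by hand, is the built-in bootstrap command; a minimal sketch, run on the standby host while its NameNode process is stopped:
# Pull a fresh fsimage from the active NameNode and re-initialise the
# local name directories on this (standby) host.
$ sudo -u hdfs hdfs namenode -bootstrapStandby
# Afterwards start the NameNode again (e.g. via Ambari) and let ZKFC keep
# it in standby; edit-log tailing from the JournalNodes then resumes.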
Labels:
- Apache Hadoop
07-20-2018
07:25 AM
@Sindhu The 6 mounts are /data0, /data1, /data2, /data3, /data4, /data5, and the new mounts will be /eim_data0 and /eim_data1. Under /data0 to /data5 the folder structures are the same, with many subdirectories; only the final files differ. If I copy all of /data0 to /eim_data0 and then copy /data1 to the same place, it will overwrite the existing directories because they are the same. Can I do it the following way instead?
copy /data0 to /eim_data0/data0/
copy /data1 to /eim_data0/data1/
copy /data2 to /eim_data0/data2/
and likewise data3, 4, 5 to /eim_data1/data3, /eim_data1/data4, /eim_data1/data5,
and then list these 6 directories in the HDFS configuration (see the sketch just below). Will it cause any issue, or is this fine?
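A minimal sketch of what that configuration change could look like, assuming these mounts are DataNode data directories (for NameNode metadata the equivalent property is dfs.namenode.name.dir); the /eim_data paths are the ones proposed above:
# In hdfs-site.xml (or the Ambari HDFS config), list all six new
# subdirectories as one comma-separated value, for example:
#   dfs.datanode.data.dir = /eim_data0/data0,/eim_data0/data1,/eim_data0/data2,/eim_data1/data3,/eim_data1/data4,/eim_data1/data5
# Verify what the running configuration resolves to:
$ hdfs getconf -confKey dfs.datanode.data.dir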
07-19-2018
10:33 AM
@Sindhu Thank you for your reply. In my case the production data size is around 45 TB, so the copy will take a long time. My thought is to start the copy online, before stopping the cluster and services, and assume it takes a couple of days to complete; then stop the cluster and jobs, find the files and directories updated during those last two days, and copy only those to the new mount points (I will have to put together a small shell script for this; see the sketch below). We can't stop and hold the cluster for 2 days to do the copy. Let me know if this approach is fine. I will test the same in development first and then move to production.
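A minimal sketch of that catch-up copy, assuming the /dataN to /eim_dataN/dataN layout from the previous reply; rsync already copies only what changed on a second pass, so the explicit find is just for reviewing the delta:
# Initial online copy while the cluster is still running (one per source mount):
$ rsync -a /data0/ /eim_data0/data0/
# ... repeat for /data1 .. /data5 into the agreed layout.
# After stopping the cluster and jobs, re-run the same rsync commands so that
# only files changed since the first pass are transferred:
$ rsync -a /data0/ /eim_data0/data0/
# Optionally, list what changed in the last 2 days before the final copy:
$ find /data0 -type f -mtime -2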
07-17-2018
07:24 AM
Dear All,
I have a requirement to change the instance type of the name and data nodes in AWS. Currently they run with instance-store mount points (6 in total) and we need to change to another instance type with EBS (with maybe 2 mounts). Kindly let me know the brief steps that will not harm HDFS or the OS. The private IPs of the existing nodes in AWS can be moved to the new instance types. I am considering the two options below.
Option 1:
---------
1. Add the EBS mounts to the existing instance types, stop any jobs using the cluster, and then copy the HDFS data to them, i.e. from the 6 mount points to the 2 new mount points. Set the ownership accordingly.
2. Stop the cluster.
3. Change the HDFS directory configuration for the name and data nodes.
4. Start the cluster.
5. If all looks good, stop the cluster services and the instance (we have to be cautious because the data on the old 6 mount points will be lost, as they are instance-store storage). Start the instance with the new instance type and attach the 2 EBS mounts.
Option 2:
-----------
1. Add new nodes to the cluster with the new instance type and EBS mounts, then rebalance HDFS. The new nodes will have only 2 mounts compared to 6, and I will keep the same names for the two EBS mounts.
2. This way I can add 4 new data nodes and remove the 4 old ones from the cluster.
3. But for the NameNode we have HA: I can add the node to the cluster and move the NameNode services, but I also have Ranger services installed, which do not have a move option, plus the JournalNode service and a few more. How can I overcome this or add them to the new name nodes?
4. Once that is done, I can decommission and remove the old nodes from the cluster (a sketch of those commands follows below) and assign the old private IPs back to the new nodes; otherwise I have a firewall problem, because the new private IPs are not allowed to reach the on-premise node and I would have to request firewall ports to be opened for them.
I'm leaning towards Option 1, which looks easier than Option 2. Please let me know your suggestions or views on making this change, and add any steps I may have missed. Appreciate your help!
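For the rebalance and decommission steps in Option 2, a minimal sketch using the standard HDFS tooling; the exclude-file path below is only an assumption and should be whatever dfs.hosts.exclude points to in this cluster:
# Rebalance after adding the new datanodes (threshold is % disk-usage spread):
$ sudo -u hdfs hdfs balancer -threshold 10
# Decommission an old datanode: add its hostname to the exclude file referenced
# by dfs.hosts.exclude, then ask the NameNode to re-read it.
$ echo "old-datanode-hostname" >> /etc/hadoop/conf/dfs.exclude
$ sudo -u hdfs hdfs dfsadmin -refreshNodes
# Watch decommissioning progress:
$ sudo -u hdfs hdfs dfsadmin -report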
Labels:
- Apache Hadoop
02-21-2018
07:54 AM
Hi All,
I was able to fix the issue: you need to keep ports 0-65535 open on the AWS security group side so the nodes can communicate with each other. This solved my problem. Thanks.
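A minimal sketch of that change with the AWS CLI, assuming all cluster nodes sit in the same security group; the group ID below is a placeholder:
# Self-referencing rule: allow all TCP ports between members of the group.
$ aws ec2 authorize-security-group-ingress \
    --group-id sg-0123456789abcdef0 \
    --protocol tcp --port 0-65535 \
    --source-group sg-0123456789abcdef0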
02-21-2018
07:52 AM
Hi All, I was able to fix the issue: you need to keep ports 0-65535 open on the AWS security group side so the nodes can communicate with each other. This solved my problem. Thanks.
02-17-2018
06:20 AM
@Sandeep Nemuri The link does not say whether the ports need to be open between the name and data nodes, or between the edge and name nodes, etc. For the above error, I'm wondering what is being missed. Is there anything else I can try? Thank you.
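As a quick way to verify whether a given port is actually reachable between two hosts, a minimal connectivity test; the hostnames are placeholders, 8020 is the NameNode RPC port seen elsewhere in these logs, and 50010 is the default DataNode transfer port in Hadoop 2.x:
# Test reachability of a single port from an edge node or another cluster node:
$ nc -zv namenode-host 8020
$ nc -zv datanode-host 50010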