Member since: 06-07-2016
Posts: 81
Kudos Received: 3
Solutions: 5
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 926 | 02-21-2018 07:54 AM |
 | 1959 | 02-21-2018 07:52 AM |
 | 3008 | 02-14-2018 09:30 AM |
 | 1095 | 10-13-2016 04:18 AM |
 | 3175 | 10-11-2016 08:26 AM |
08-20-2018
04:25 AM
@Tarun Parimi Hi Tarun, thanks for the reply. It helped and gave us confidence. We simulated the same scenario in development and, once confirmed, went ahead in production. I was surprised to see the NameNode service on the standby NN restart when we ran this metadataVersion command. I expected it to be a purely informational command, yet its output looks quite similar to that of the namenode format command even though the argument is metadataVersion. Hortonworks should improve the output descriptions instead of leaving them as general as "Block deletion will happen in 1 hr" (at least it could state that only corrupt block deletion is involved). I am also unsure why it restarts the NameNode service, which in turn reads the "dfs.namenode.startup.delay.block.deletion.sec" parameter and prints that information.
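For reference, a hedged, read-only way to see how the startup block-deletion delay is configured on a node; the property name comes from the log output in this thread, and the command is the standard HDFS client configuration lookup:
$ hdfs getconf -confKey dfs.namenode.startup.delay.block.deletion.sec   # prints the configured delay; the log here shows 000:01:00:00.000, i.e. one hour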
08-17-2018
06:13 AM
Hi All, I ran the hdfs namenode -metadataVersion command, and its output shocked me by stating that it would delete blocks, as shown below. I ran it from the standby NameNode and it scheduled the deletion for one hour later. I am not sure whether it will delete the actual HDFS metadata and cause a full data loss, or whether it will only compare against the active NameNode and delete unwanted blocks on the standby NameNode. Before that I ran hdfs dfsadmin -report and there were no corrupt blocks. Ten minutes before the scheduled time I stopped all primary and standby NameNode services and all DataNode services on the DataNodes. Kindly help, it is urgent. 1. If I start the NameNode, will that block-deletion schedule still run? 2. Or, before starting the service, how can I check and disable that schedule? The output is attached (a sketch of read-only health checks is included after the log below). NN2 - active, NN1 - standby $ hdfs namenode -metadataVersion
18/08/17 12:27:49 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = NN1/x.x.x.x
STARTUP_MSG: args = [-metadataVersion]
STARTUP_MSG: version = 2.7.1.2.4.0.0-169
STARTUP_MSG: classpath = /usr/hdp/2.4.0.0-169/hadoop/conf:/usr/hdp/2.4.0.0-169/hadoop/lib/jersey-json-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/spark-yarn-shuffle.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ojdbc6.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ranger-hdfs-plugin-shim-0.5.0.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ranger-plugin-classloader-0.5.0.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/ranger-yarn-plugin-shim-0.5.0.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/activation-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jets3t-0.9.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jettison-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/avro-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/aws-java-sdk-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/azure-storage-2.2.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/paranamer-2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-cli-1.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-codec-1.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-collections-3.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-compress-1.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-configuration-1.6.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/servlet-api-2.5.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-digester-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/slf4j-api-1.7.10.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-httpclient-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-lang-2.6.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/slf4j-log4j12-1.7.10.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/commons-net-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/stax-api-1.0-2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/curator-client-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/xmlenc-0.52.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/curator-framework-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/xz-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/curator-recipes-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/gson-2.2.4.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/guava-11.0.2.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/zookeeper-3.4.6.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/hamcrest-core-1.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/mockito-all-1.8.5.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/httpclient-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/httpcore-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoo
p/lib/jersey-core-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/postgresql.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-core-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jsch-0.1.42.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jsp-api-2.1.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/junit-4.11.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-annotations-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-annotations.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-auth-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-auth.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-aws-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-aws.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-azure-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-azure.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-common-2.7.1.2.4.0.0-169-tests.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-common-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-common-tests.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-common.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-nfs-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop/.//hadoop-nfs.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/./:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-cli-1.2.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-codec-1.4.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-lang-2.6.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/guava-11.0.2.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jersey-core-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/jsr305-3.0.0.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/netty-all-4.0.23.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/okhttp-2.4.0.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/okio-1.4.0.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/servlet-api-2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/xercesImpl-2.9.1.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/xml-apis-1.3.04.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/lib/xmlenc-0.52.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-2.7.1.2.4.0.0-169-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-nfs-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-hdfs/.//hadoop-hdfs.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/activation-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/aopalliance-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jets3t-0.9.0.jar:/
usr/hdp/2.4.0.0-169/hadoop-yarn/lib/apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jettison-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/api-util-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/avro-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/objenesis-2.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-beanutils-1.7.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-beanutils-core-1.8.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-cli-1.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-codec-1.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-client-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-collections-3.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/paranamer-2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-compress-1.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-core-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-configuration-1.6.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-digester-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/servlet-api-2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-httpclient-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-lang-2.6.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-logging-1.1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/stax-api-1.0-2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-math3-3.1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/commons-net-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/xmlenc-0.52.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/curator-client-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/xz-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/curator-framework-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/zookeeper-3.4.6.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/curator-recipes-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/fst-2.24.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/gson-2.2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/guava-11.0.2.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/guice-3.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/zookeeper-3.4.6.2.4.0.0-169-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/guice-servlet-3.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-guice-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/httpclient-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/httpcore-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-json-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/postgresql.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-core-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-databind-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jackson-xc-1.9.13.jar:
/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/java-xmlbuilder-0.4.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/javassist-3.18.1-GA.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/javax.inject-1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jetty-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jsch-0.1.42.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jsp-api-2.1.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/jsr305-3.0.0.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-api-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-api.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-client-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-client.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-common-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-common.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-registry-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-registry.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-common-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-common.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-sharedcachemanager.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-tests-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-timeline-plugins-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-timeline-plugins.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/aopalliance-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/avro-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/guice-3.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/javax.inject-1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jersey-core-1.9
.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/junit-4.11.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/paranamer-2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/lib/xz-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//activation-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-sls-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-sls.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hamcrest-core-1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//log4j-1.2.17.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//asm-3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//avro-1.7.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-streaming.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jersey-core-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-cli-1.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//httpcore-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-codec-1.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-collections-3.2.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//htrace-core-3.1.0-incubating.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-compress-1.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-configuration-1.6.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//httpclient-4.2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-digester-1.8.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jackson-core-2.2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jersey-json-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-io-2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jersey-server-1.9.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-lang-2.6.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jackson-core-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-lang3-3.3.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jackson-jaxrs-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jets3t-0.9.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//commons-net-3.1.jar:/usr/hdp/2.4.0.0-169/ha
doop-mapreduce/.//jackson-mapper-asl-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//curator-client-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jackson-xc-1.9.13.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//curator-framework-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//mockito-all-1.8.5.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//curator-recipes-2.7.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jettison-1.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//gson-2.2.4.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jetty-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//guava-11.0.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-ant-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jetty-util-6.1.26.hwx.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-ant.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-archives-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//joda-time-2.9.2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-archives.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-auth-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jsch-0.1.42.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-auth.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-datajoin-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jsp-api-2.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-datajoin.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-rumen.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-distcp-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//jsr305-3.0.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-distcp.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-openstack-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-extras-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//junit-4.11.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-extras.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-openstack.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//xz-1.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-gridmix-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-gridmix.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-rumen-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-streaming-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.7.1.2.4.0.0-169-tests.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//hadoop-m
apreduce-examples-2.7.1.2.4.0.0-169.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//microsoft-windowsazure-storage-sdk-0.6.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//netty-3.6.2.Final.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//paranamer-2.3.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//servlet-api-2.5.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//stax-api-1.0-2.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//xmlenc-0.52.jar:/usr/hdp/2.4.0.0-169/hadoop-mapreduce/.//zookeeper-3.4.6.2.4.0.0-169.jar
STARTUP_MSG: build = git@github.com:hortonworks/hadoop.git -r 26104d8ac833884c8776473823007f176854f2eb; compiled by 'jenkins' on 2016-02-10T06:18Z
STARTUP_MSG: java = 1.8.0_60
************************************************************/
18/08/17 12:27:49 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/08/17 12:27:49 INFO namenode.NameNode: createNameNode [-metadataVersion]
18/08/17 12:27:49 WARN common.Util: Path /data0/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data1/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data2/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data3/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data4/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data5/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data6/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data7/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data8/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data9/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data10/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data11/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data0/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data1/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data2/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data3/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data4/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data5/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data6/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data7/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data8/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data9/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data10/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Util: Path /data11/hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
18/08/17 12:27:49 WARN common.Storage: set restore failed storage to true
18/08/17 12:27:50 INFO namenode.FSNamesystem: Found KeyProvider: KeyProviderCryptoExtension: KMSClientProvider[http://NN2:9292/kms/v1/]
18/08/17 12:27:50 INFO namenode.FSNamesystem: Enabling async auditlog
18/08/17 12:27:50 INFO namenode.FSNamesystem: fsLock is fair:false
18/08/17 12:27:50 INFO blockmanagement.HeartbeatManager: Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
18/08/17 12:27:50 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
18/08/17 12:27:50 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18/08/17 12:27:50 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:01:00:00.000
18/08/17 12:27:50 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Aug 17 13:27:50
18/08/17 12:27:50 INFO util.GSet: Computing capacity for map BlocksMap
18/08/17 12:27:50 INFO util.GSet: VM type = 64-bit
18/08/17 12:27:50 INFO util.GSet: 2.0% max memory 5.9 GB = 121.3 MB
18/08/17 12:27:50 INFO util.GSet: capacity = 2^24 = 16777216 entries
18/08/17 12:27:50 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=true
18/08/17 12:27:50 INFO blockmanagement.BlockManager: dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null
18/08/17 12:27:50 INFO blockmanagement.BlockManager: defaultReplication = 3
18/08/17 12:27:50 INFO blockmanagement.BlockManager: maxReplication = 50
18/08/17 12:27:50 INFO blockmanagement.BlockManager: minReplication = 1
18/08/17 12:27:50 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
18/08/17 12:27:50 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
18/08/17 12:27:50 INFO blockmanagement.BlockManager: encryptDataTransfer = false
18/08/17 12:27:50 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
18/08/17 12:27:50 INFO namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
18/08/17 12:27:50 INFO namenode.FSNamesystem: supergroup = hdfs
18/08/17 12:27:50 INFO namenode.FSNamesystem: isPermissionEnabled = true
18/08/17 12:27:50 INFO namenode.FSNamesystem: Determined nameservice ID: eimedlcluster1
18/08/17 12:27:50 INFO namenode.FSNamesystem: HA Enabled: true
18/08/17 12:27:50 INFO namenode.FSNamesystem: Append Enabled: true
18/08/17 12:27:50 INFO util.GSet: Computing capacity for map INodeMap
18/08/17 12:27:50 INFO util.GSet: VM type = 64-bit
18/08/17 12:27:50 INFO util.GSet: 1.0% max memory 5.9 GB = 60.7 MB
18/08/17 12:27:50 INFO util.GSet: capacity = 2^23 = 8388608 entries
18/08/17 12:27:50 INFO namenode.FSDirectory: ACLs enabled? true
18/08/17 12:27:50 INFO namenode.FSDirectory: XAttrs enabled? true
18/08/17 12:27:50 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
18/08/17 12:27:50 INFO namenode.NameNode: Caching file names occuring more than 10 times
18/08/17 12:27:50 INFO util.GSet: Computing capacity for map cachedBlocks
18/08/17 12:27:50 INFO util.GSet: VM type = 64-bit
18/08/17 12:27:50 INFO util.GSet: 0.25% max memory 5.9 GB = 15.2 MB
18/08/17 12:27:50 INFO util.GSet: capacity = 2^21 = 2097152 entries
18/08/17 12:27:50 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9900000095367432
18/08/17 12:27:50 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
18/08/17 12:27:50 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
18/08/17 12:27:50 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
18/08/17 12:27:50 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
18/08/17 12:27:50 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18/08/17 12:27:50 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/08/17 12:27:50 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/08/17 12:27:50 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/08/17 12:27:50 INFO util.GSet: VM type = 64-bit
18/08/17 12:27:50 INFO util.GSet: 0.029999999329447746% max memory 5.9 GB = 1.8 MB
18/08/17 12:27:50 INFO util.GSet: capacity = 2^18 = 262144 entries
HDFS Image Version: -63
Software format version: -63
18/08/17 12:27:50 INFO util.ExitUtil: Exiting with status 0
18/08/17 12:27:50 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at NN1/x.x.x.x
************************************************************/
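Not an official procedure, just a hedged sketch of read-only checks that could be run once a NameNode is back up, to see the cluster state before trusting any deletion schedule; the NameNode IDs nn1/nn2 are assumed placeholders for whatever is configured under dfs.ha.namenodes.eimedlcluster1:
$ hdfs haadmin -getServiceState nn1      # which NameNode is active and which is standby
$ hdfs dfsadmin -safemode get            # whether the active NameNode is still in safe mode
$ hdfs fsck / -list-corruptfileblocks    # lists corrupt blocks, if any
$ hdfs dfsadmin -report                  # live DataNodes and block summary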
Labels:
- Apache Hadoop
08-16-2018
03:58 AM
@Vinicius Higa Murakami Thank you for the reply. Please find the information on your queries below. 1. I tried the following commands from NN1 (the rebooted one): hdfs dfs -ls hdfs://NN2/user/ --> I get the output; hdfs dfs -ls hdfs://NN1/user/ --> ERROR: ls: Operation category READ is not supported in state standby (is this normal and expected?). 2. Yes, both dfs.nameservices in hdfs-site.xml and fs.defaultFS are fine. I verified that fsimage checkpoints are happening on both NameNodes and the sizes and timestamps match. However, edits files are missing on NN1 (standby) from the time I copied the metadata files from NN2 and started the NameNode service, i.e. from 14th Aug 17:39 onwards. I cannot enable DEBUG logging because I cannot restart the HDFS services while jobs are running continuously; we cannot afford downtime now, and I am also worried whether the NameNode services would come back up. Below is a snippet from both nodes showing the file counts and sizes, plus the latest fsimage files (a hedged checksum check is sketched after the listings below). NN1 (STANDBY)
$ ls -l fsi*
-rw-r--r--. 1 hdfs hadoop 616714799 Aug 16 01:44 fsimage_0000000000211062321
-rw-r--r--. 1 hdfs hadoop 62 Aug 16 01:44 fsimage_0000000000211062321.md5
-rw-r--r--. 1 hdfs hadoop 619959676 Aug 16 07:45 fsimage_0000000000211102880
-rw-r--r--. 1 hdfs hadoop 62 Aug 16 07:45 fsimage_0000000000211102880.md5
NN2 (ACTIVE)
$ ls -l fsi*
-rw-r--r--. 1 hdfs hadoop 616714799 Aug 16 01:44 fsimage_0000000000211062321
-rw-r--r--. 1 hdfs hadoop 62 Aug 16 01:45 fsimage_0000000000211062321.md5
-rw-r--r--. 1 hdfs hadoop 619959676 Aug 16 07:45 fsimage_0000000000211102880
-rw-r--r--. 1 hdfs hadoop 62 Aug 16 07:45 fsimage_0000000000211102880.md5
NN1 (STANDBY)
FILE counts and SIZE
data0
-------
9064
size is: 1351 /data0/hadoop/hdfs
==================
data1
-------
9064
size is: 1351 /data1/hadoop/hdfs
==================
data2
-------
9064
size is: 1351 /data2/hadoop/hdfs
==================
data3
-------
9064
size is: 1351 /data3/hadoop/hdfs
==================
data4
-------
9064
size is: 1351 /data4/hadoop/hdfs
==================
data5
-------
9064
size is: 1351 /data5/hadoop/hdfs
==================
data6
-------
9064
size is: 1351 /data6/hadoop/hdfs
==================
data7
-------
9064
size is: 1351 /data7/hadoop/hdfs
==================
data8
-------
9064
size is: 1351 /data8/hadoop/hdfs
==================
data9
-------
9064
size is: 1351 /data9/hadoop/hdfs
==================
data10
-------
9064
size is: 1351 /data10/hadoop/hdfs
==================
data11
-------
9064
size is: 1351 /data11/hadoop/hdfs
==================
NN2 (ACTIVE)
FILE counts and SIZE
data0
-------
9504
size is: 1357 /data0/hadoop/hdfs
==================
data1
-------
9504
size is: 1356 /data1/hadoop/hdfs
==================
data2
-------
9504
size is: 1357 /data2/hadoop/hdfs
==================
data3
-------
9505
size is: 1357 /data3/hadoop/hdfs
==================
data4
-------
9505
size is: 1357 /data4/hadoop/hdfs
==================
data5
-------
9505
size is: 1357 /data5/hadoop/hdfs
==================
data6
-------
9505
size is: 1357 /data6/hadoop/hdfs
==================
data7
-------
9505
size is: 1357 /data7/hadoop/hdfs
==================
data8
-------
9505
size is: 1357 /data8/hadoop/hdfs
==================
data9
-------
9505
size is: 1357 /data9/hadoop/hdfs
==================
data10
-------
9505
size is: 1357 /data10/hadoop/hdfs
==================
data11
-------
9505
size is: 1357 /data11/hadoop/hdfs
==================
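A hedged way to double-check that the fsimage copied to NN1 is intact is to compare its MD5 with the .md5 sidecar file; the directory path below is assumed from the dfs.namenode.name.dir warnings in the logs, with the usual 'current' subdirectory of a NameNode layout:
$ cd /data0/hadoop/hdfs/namenode/current
$ md5sum fsimage_0000000000211102880
$ cat fsimage_0000000000211102880.md5    # the two checksums should match, and match the values on NN2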
08-15-2018
07:23 AM
Hi All, I have a cluster with NameNode HA on AWS instances (instance-store disks). Each NameNode has 12 mount points holding metadata, and we have 4 DataNodes. My standby NameNode hung due to a hardware issue on the AWS side, and the only solution was to stop and start the instance. After doing that, I was able to bring up the other services on the standby NameNode, but not the NameNode service itself, because all 12 mounts had lost their metadata. What I did was tar the hadoop directory from each mount on the working active NameNode and restore it to the corresponding mounts on the standby NameNode. Now I am able to start the NameNode service and it automatically became the standby NameNode via ZKFC. However, the hadoop-hdfs-namenode-<hostname>.log file shows the errors below. How can I fix this, is there any harm from it, and will my active NameNode be able to fail over to this node successfully? Kindly help and share your suggestions (a hedged failover-readiness check is sketched after the log below). NN1 - standby NameNode (the one that had the issue and had to be stopped and started), NN2 - active, DN1 DN2 DN3 DN4 (I have replaced the IPs with these names in the log below). Error snippet below. 2018-08-15 15:04:12,909 INFO namenode.EditLogInputStream (RedundantEditLogInputStream.java:nextOp(176)) - Fast-forwarding stream 'http://NN1:8480/getJournal?jid=eimedlcluster1&segmentTxId=211034589&storageInfo=-63%3A1695052906%3A0%3ACID-ce4126e2-d1f2-4233-81ec-d267f195583f, http://NN1:8480/getJournal?jid=eimedlcluster1&segmentTxId=211034589&storageInfo=-63%3A1695052906%3A0%3ACID-ce4126e2-d1f2-4233-81ec-d267f195583f' to transaction ID 211034589
2018-08-15 15:04:12,909 INFO namenode.EditLogInputStream (RedundantEditLogInputStream.java:nextOp(176)) - Fast-forwarding stream 'http://NN1:8480/getJournal?jid=eimedlcluster1&segmentTxId=211034589&storageInfo=-63%3A1695052906%3A0%3ACID-ce4126e2-d1f2-4233-81ec-d267f195583f' to transaction ID 211034589
2018-08-15 15:04:12,926 INFO namenode.FSImage (FSEditLogLoader.java:loadFSEdits(145)) - Edits file http://NN1/getJournal?jid=eimedlcluster1&segmentTxId=211034589&storageInfo=-63%3A1695052906%3A0%3ACID-ce4126e2-d1f2-4233-81ec-d267f195583f, http://NN1:8480/getJournal?jid=eimedlcluster1&segmentTxId=211034589&storageInfo=-63%3A1695052906%3A0%3ACID-ce4126e2-d1f2-4233-81ec-d267f195583f of size 14288 edits # 104 loaded in 0 seconds
2018-08-15 15:04:14,335 INFO ha.EditLogTailer (EditLogTailer.java:doTailEdits(238)) - Loaded 104 edits starting from txid 211034588
2018-08-15 15:04:22,552 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:04:27,970 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:04:34,710 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 25 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN4:51488 Call#101504 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:04:34,711 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 77 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN3:54288 Call#98633 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:04:34,715 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 6 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN2:57618 Call#99810 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:04:34,716 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 35 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN1:59402 Call#100406 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:04:49,013 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:05,799 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 54 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN3:54318 Call#98649 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:05:05,807 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 56 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN2:57630 Call#99826 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:05:05,810 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 20 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN4:51498 Call#101519 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:05:05,816 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 43 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN1:59428 Call#100422 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:05:06,229 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:06,246 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:06,942 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:06,945 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:06,954 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:06,974 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:13,011 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:22,543 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:32,988 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:05:52,160 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 44 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN4:51528 Call#101534 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:05:52,186 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 27 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN2:57658 Call#99841 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
2018-08-15 15:05:53,981 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:06:06,230 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:06:06,254 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:06:06,930 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:06:06,931 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:06:06,947 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:06:06,968 WARN namenode.FSNamesystem (FSNamesystem.java:getCorruptFiles(7324)) - Get corrupt file blocks returned error: Operation category READ is not supported in state standby
2018-08-15 15:06:08,482 INFO ipc.Server (Server.java:run(2165)) - IPC Server handler 71 on 8020, call org.apache.hadoop.hdfs.protocol.ClientProtocol.getFileInfo from DN4:51528 Call#101549 Retry#0: org.apache.hadoop.ipc.StandbyException: Operation category READ is not supported in state standby
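The "Operation category READ is not supported in state standby" messages themselves are what a standby NameNode normally returns to read requests. A hedged, read-only sketch of checks to gauge whether the restored standby could take over; nn1/nn2 are assumed placeholders for the service IDs configured under dfs.ha.namenodes.eimedlcluster1:
$ hdfs haadmin -getServiceState nn1   # reports 'standby' or 'active'
$ hdfs haadmin -checkHealth nn1       # exits non-zero if the NameNode considers itself unhealthy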
Labels:
- Apache Hadoop
07-20-2018
07:25 AM
@Sindhu The 6 existing mounts are /data0, /data1, /data2, /data3, /data4 and /data5, and the new mounts will be /eim_data0 and /eim_data1. Under /data0 through /data5 the directory structures are identical, with many subdirectories; only the leaf files differ. If I copy all of /data0 into /eim_data0 and then copy /data1 to the same place, it will overwrite the existing directories because they have the same names. Can I do it the following way instead: copy /data0 to /eim_data0/data0/, /data1 to /eim_data0/data1/, /data2 to /eim_data0/data2/, and likewise data3, 4, 5 to /eim_data1/data3, /eim_data1/data4, /eim_data1/data5, and then list these 6 directories in the HDFS configuration (see the sketch below)? Will that cause any issue, or is it fine?
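If the per-mount subdirectory layout above is used, the configuration would simply list all six new paths. A hedged, read-only way to confirm the value after the change, assuming these are the DataNode data directories (the NameNode directories would be handled analogously via dfs.namenode.name.dir); the paths are the hypothetical names from this thread:
$ hdfs getconf -confKey dfs.datanode.data.dir
$ # expected, under the scheme above: /eim_data0/data0,/eim_data0/data1,/eim_data0/data2,/eim_data1/data3,/eim_data1/data4,/eim_data1/data5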
07-19-2018
10:33 AM
@Sindhu Thank you for your reply. In my case the production data size is around 45 TB, so the copy will take a long time. My plan is to copy online before stopping the cluster and its services (assuming it takes a couple of days to complete), then stop the cluster and the jobs, find the files and directories updated during those last two days, and copy only those to the new mount points (I would write a small shell script for this; a sketch follows below). We cannot stop and hold the cluster for two days just to do the copy. Please let me know whether this approach is acceptable. I will test it in development first and then move to production.
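A minimal sketch of the two-pass copy, assuming rsync is available on the nodes and using the path names from this thread; the second pass only transfers what changed since the first run, which keeps the downtime window short:
$ rsync -a /data0/ /eim_data0/data0/            # first pass, run online while the cluster is still up
$ # ... stop jobs and cluster services ...
$ rsync -a --delete /data0/ /eim_data0/data0/   # catch-up pass; repeat both passes for /data1 through /data5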
07-17-2018
07:24 AM
Dear All, I have a requirement to change the instance type of the name and data nodes in AWS. They currently run with instance-store mount points (6 in total) and we need to move to another instance type backed by EBS (with perhaps 2 mounts). Kindly let me know the high-level steps that will not harm HDFS or the OS. The private IPs of the existing nodes in AWS can be moved to the new instance types. I am considering the two options below.
Option 1:
1. Add the EBS mounts to the existing instances, stop any jobs using the cluster, then copy the HDFS data from the 6 mount points to the 2 new mount points and set ownership accordingly.
2. Stop the cluster.
3. Change the HDFS directory configuration for the name and data nodes.
4. Start the cluster.
5. If all looks good, stop the cluster services and the instance (being cautious, because the data on the old 6 mount points will be lost due to the instance-store storage type). Start the instance with the new instance type and attach the 2 EBS mounts.
Option 2:
1. Add new nodes to the cluster with the new instance type and EBS mounts, then rebalance HDFS. The new nodes will have only 2 mounts instead of 6, and I will keep the same names for the two EBS mounts.
2. This way I can add new data nodes and remove the 4 old ones from the cluster.
3. For the name nodes we have HA, so I can add the node to the cluster and move the NameNode services, but I also have Ranger services installed which do not have a move option, plus the JournalNode service and a few more. How can I overcome this or add them to the new name nodes?
4. Once that is done, I can decommission and remove the old nodes from the cluster and assign the old private IPs back to the new nodes; otherwise I have a firewall problem, since the firewall will not allow the new private IPs to reach the on-premise node, and I would have to request new firewall openings for them.
I am thinking of going with option 1, which looks easier than option 2 (a sketch of the decommission/rebalance commands for option 2 follows below). Please share your suggestions or views on this change, and add any steps I may have missed. Appreciate your help!!!
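For option 2, a hedged sketch of the standard rebalance-then-decommission flow; the exclude-file location depends on your dfs.hosts.exclude setting, so it is only referenced in a comment:
$ hdfs balancer -threshold 10    # spread existing blocks onto the newly added DataNodes
$ # add the old DataNode hostnames to the file referenced by dfs.hosts.exclude, then:
$ hdfs dfsadmin -refreshNodes    # starts decommissioning the listed nodes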
Labels:
- Apache Hadoop
02-21-2018
07:54 AM
Hi All,
I'm able to fix the issue: you need to keep ports 0-65535 open on the AWS security group side so that the nodes can communicate with each other (a hedged AWS CLI sketch follows below). This solved my problem. Thanks.
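A minimal sketch of the corresponding security-group rule, assuming the AWS CLI is configured and using a placeholder group ID; referencing the group as its own source keeps the ports open only between cluster members rather than to the internet:
$ aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 0-65535 --source-group sg-xxxxxxxx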
02-21-2018
07:52 AM
Hi All, I'm able to fix the issue: you need to keep ports 0-65535 open on the AWS security group side so that the nodes can communicate with each other. This solved my problem. Thanks.
02-17-2018
06:20 AM
@Sandeep Nemuri The link does not say whether the ports need to be open between the name and data nodes, between the edge and name nodes, etc. For the error above I am wondering what is being missed. Is there anything else I can try? Thank you.
02-15-2018
08:11 AM
Dear All, I did a new installation of 2.6.2.0 with one name node and 2 data nodes. The History Server service is not starting due to the error below. Both DataNode services are started and running. Because of this issue I am also not able to copy any files to HDFS: the DataNodes are not detected and no information reaches the NameNode. It looks like a network issue between the name and data nodes.
" { "RemoteException": { "exception": "IOException", "javaClassName": "java.io.IOException", "message": "Failed to find datanode, suggest to check cluster health. excludeDatanodes=null" } } "
$ hadoop fs -copyFromLocal /tmp/test/ambari.repo /test/
18/02/15 15:12:37 WARN hdfs.DFSClient: DataStreamer Exception
org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /test/ambari.repo._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.
at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1709)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getNewBlockTargets(FSNamesystem.java:3337)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3261)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:850)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:504)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
$ hdfs dfsadmin -report
Configured Capacity: 0 (0 B)
Present Capacity: 0 (0 B)
DFS Remaining: 0 (0 B)
DFS Used: 0 (0 B)
DFS Used%: NaN%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
-------------------------------------------------
Solutions I have tried:
1. Verified the /etc/hosts file and did DNS lookups from the edge, name and data nodes to all other nodes; everything resolves properly.
2. Added the entries below to hdfs-site.xml and restarted the services: dfs.client.use.datanode.hostname=true, dfs.datanode.use.datanode.hostname=true, dfs.namenode.datanode.registration.ip-hostname-check=false.
3. Port 50010 is open on the data nodes.
4. Port 50070 is open on the name node.
5. Did a clean reboot of all nodes and services.
The issue still remains the same. The Hortonworks links only list port numbers; I just want to know which ports should be opened on the name and data nodes and which nodes will access them (a hedged connectivity check is sketched below). This environment is on AWS, so I would need to specify the destination host that accesses each port. Appreciate your help. Thank you.
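A hedged sketch of connectivity checks between the nodes, assuming nc (netcat) is installed; <namenode-host> and <datanode-host> are placeholders, and the ports are the HDP 2.x defaults. DataNodes must reach the NameNode RPC port to register, and clients must reach the DataNode data-transfer port to write blocks:
$ nc -zv <namenode-host> 8020     # NameNode RPC, tested from each DataNode
$ nc -zv <datanode-host> 50010    # DataNode data transfer, tested from the edge/name node
$ nc -zv <datanode-host> 50075    # DataNode HTTP, used by WebHDFS redirects
$ hdfs dfsadmin -report           # should list live DataNodes once registration succeeds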
Labels:
- Apache Hadoop
02-14-2018
09:30 AM
Hi All, I'm closing this thread. It looks like there was some error with the repository for the 2.6.4.0 version. I have cleaned up and installed the 2.6.2.0 version and it went fine, though I still have some errors starting a few services such as the History Server, and I can fix those with the information in the logs. Thank you.
02-14-2018
09:27 AM
Dear All, I have set up a new HDP cluster with version 2.6.2.0 and a few services are not starting due to the errors below. This is a new setup.
History Server error:
" raise WebHDFSCallException(err_msg, result_dict) resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/usr/hdp/2.6.2.0-205/hadoop/mapreduce.tar.gz -H 'Content-Type: application/octet-stream' 'http://ip-172-29-1-250.ap-southeast-1.compute.internal:50070/webhdfs/v1/hdp/apps/2.6.2.0-205/mapreduce/mapreduce.tar.gz?op=CREATE&user.name=hdfs&overwrite=True&permission=444'' returned status_code=403. { "RemoteException": { "exception": "IOException", "javaClassName": "java.io.IOException", "message": "Failed to find datanode, suggest to check cluster health. excludeDatanodes=null" } } "
NOTE: The DataNode services are started and running fine, /etc/hosts is correct, and hostname -f resolves the correct name. I tried to run the HDFS service check and ended up with the same error:
" resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: Execution of 'curl -sS -L -w '%{http_code}' -X PUT --data-binary @/etc/passwd -H 'Content-Type: application/octet-stream' 'http://ip-172-29-1-250.ap-southeast-1.compute.internal:50070/webhdfs/v1/tmp/id1dacfa01_date571418?op=CREATE&user.name=hdfs&overwrite=True'' returned status_code=403. { "RemoteException": { "exception": "IOException", "javaClassName": "java.io.IOException", "message": "Failed to find datanode, suggest to check cluster health. excludeDatanodes=null" } } "
The Ambari Metrics Collector and ResourceManager do start but randomly go down again within a few minutes. A hedged check of the NameNode's view of the DataNodes is sketched below. Appreciate your help.
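Hedged, read-only ways to see whether the NameNode actually has live DataNodes registered (the "Failed to find datanode" response usually means it does not); the host name is a placeholder and the JMX query targets the standard NameNode web port used above:
$ hdfs dfsadmin -report    # live/dead DataNode counts from the NameNode's point of view
$ curl -s 'http://<namenode-host>:50070/jmx?qry=Hadoop:service=NameNode,name=FSNamesystemState'    # includes a NumLiveDataNodes field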
02-14-2018
02:15 AM
@Zill Silveira The repo is fine and I'm able to install storm-slider-client manually. All packages are installed; it only throws the error on the Install, Start & Test step of the Ambari console for the new cluster build. Since this is a new build and the first attempt, there is no stack or version registered on the console yet. I have gone through links suggesting that 2.6.2.0 will work. Can I try that version directly, or do I need to clean up all the packages manually first, which is pretty tough? Or will trying 2.6.2.0 clean up and do the installation itself? If not, do you have any steps to clean up after these issues? Thank you in advance.

$ yum install storm-slider-client
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
HDP-2.6-GPL-repo-1                                   | 2.9 kB  00:00:00
HDP-2.6-repo-1                                       | 2.9 kB  00:00:00
HDP-UTILS-1.1.0.22-repo-1                            | 2.9 kB  00:00:00
ambari-2.6.1.0                                       | 2.9 kB  00:00:00
rhui-REGION-client-config-server-7                   | 2.9 kB  00:00:00
rhui-REGION-rhel-server-releases                     | 3.5 kB  00:00:00
rhui-REGION-rhel-server-rh-common                    | 3.8 kB  00:00:00
Resolving Dependencies
There are unfinished transactions remaining. You might consider running yum-complete-transaction, or "yum-complete-transaction --cleanup-only" and "yum history redo last", first to finish them. If those don't work you'll have to try removing/installing packages by hand (maybe package-cleanup can help).
--> Running transaction check
---> Package storm-slider-client.noarch 0:1.1.0.2.6.4.0-91 will be installed
--> Processing Dependency: storm_2_6_4_0_91-slider-client for package: storm-slider-client-1.1.0.2.6.4.0-91.noarch
--> Running transaction check
---> Package storm_2_6_4_0_91-slider-client.x86_64 0:1.1.0.2.6.4.0-91 will be installed
--> Processing Dependency: slider_2_6_4_0_91 for package: storm_2_6_4_0_91-slider-client-1.1.0.2.6.4.0-91.x86_64
--> Running transaction check
---> Package slider_2_6_4_0_91.noarch 0:0.92.0.2.6.4.0-91 will be installed
--> Finished Dependency Resolution

Dependencies Resolved
==================================================================================================================
 Package                           Arch      Version             Repository       Size
==================================================================================================================
Installing:
 storm-slider-client               noarch    1.1.0.2.6.4.0-91    HDP-2.6-repo-1   2.6 k
Installing for dependencies:
 slider_2_6_4_0_91                 noarch    0.92.0.2.6.4.0-91   HDP-2.6-repo-1    91 M
 storm_2_6_4_0_91-slider-client    x86_64    1.1.0.2.6.4.0-91    HDP-2.6-repo-1   135 M

Transaction Summary
==================================================================================================================
Install  1 Package (+2 Dependent packages)

Total download size: 225 M
Installed size: 249 M
Is this ok [y/d/N]: N
Exiting on user command
Your transaction was saved, rerun it with: yum load-transaction /tmp/yum_save_tx.2018-02-14.10-10.5Ai0ZC.yumtx

[root@eim-preprod-namenode-1]:/etc/yum.repos.d
$ ls -l
total 44
-rw-r--r--. 1 root root   463 Feb 13 18:08 ambari-hdp-1.repo
-rw-r--r--. 1 root root   306 Feb 13 10:37 ambari.repo
-rw-r--r--. 1 root root   607 Jan 18 18:37 redhat-rhui-client-config.repo
-rw-r--r--. 1 root root  8679 Jan 18 18:37 redhat-rhui.repo
-rw-r--r--. 1 root root    90 Jan 18 18:37 rhui-load-balancers.conf
-rw-r--r--. 1 root root 14656 Jul  4  2014 snappy-devel-1.1.0-3.el7.x86_64.rpm

[root@eim-preprod-namenode-1]:/etc/yum.repos.d
$ cat ambari-hdp-1.repo
[HDP-2.6-repo-1]
name=HDP-2.6-repo-1
baseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.4.0
path=/
enabled=1
gpgcheck=0

[HDP-2.6-GPL-repo-1]
name=HDP-2.6-GPL-repo-1
baseurl=http://public-repo-1.hortonworks.com/HDP-GPL/centos7/2.x/updates/2.6.4.0
path=/
enabled=1
gpgcheck=0

[HDP-UTILS-1.1.0.22-repo-1]
name=HDP-UTILS-1.1.0.22-repo-1
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7
path=/
enabled=1
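Side note: the yum output above also warns about unfinished transactions from the earlier failed attempt. The cleanup I am considering before retrying (just a sketch based on yum's own suggestion, not run yet):

$ yum-complete-transaction --cleanup-only    # discard the interrupted transactions yum warns about
$ package-cleanup --dupes                    # look for duplicate/partial package installs left behind
$ package-cleanup --problems                 # report broken dependencies, if any
$ yum clean all && yum makecache             # refresh the HDP repo metadata before retrying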
02-14-2018
12:56 AM
@Zill Silveira For the new version, 2.6.4.0, Ambari downloads the ambari-hdp-1.repo file, which has three URLs: for HDP, the Jenkins key and HDP-UTILS. I have posted more details about the issue in the link below; it has all the information. Have a look. https://community.hortonworks.com/questions/172034/hdp-2640-cluster-creation-is-getting-failed-due-to.html
02-13-2018
09:54 AM
@Girish Khole Have you found a solution for this issue? I'm stuck on the same issue as well. Appreciate your reply.
02-13-2018
09:45 AM
@Lukas Muller: I'm stuck with the same issue while trying to install 2.6.4.0. In the backend it actually installs everything, and then it breaks at the same point as yours. I just want to know how you reverted to 2.6.2.0: did you do a manual clean-up, or did you try 2.6.2.0 directly and it cleaned up and installed the 2.6.2.0 version? If it was manual, could you provide the steps? Thank you. Appreciate your reply.
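For context, the manual route I have in mind if that is what you did (only a sketch, not tried on this cluster yet): list whatever 2.6.4.0 packages the aborted install left behind and remove them before pointing Ambari at 2.6.2.0.

$ rpm -qa | grep 2_6_4_0_91     # see which HDP 2.6.4.0-91 packages were actually laid down
$ yum remove '*2_6_4_0_91*'     # remove them (review the resolved list before confirming)
$ hdp-select versions           # check which versions hdp-select still knows about
$ ls /usr/hdp/                  # look for leftover version directories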
02-13-2018
04:31 AM
Dear All,
I badly need your help with the error below. I have set everything up and am trying to install one NameNode and two DataNodes, and I'm getting an error related to the repository. Kindly see the details below and help me out. I'm using the Red Hat 7.4 release. This is a new setup and the environment is on AWS cloud (not old nodes). The error log is also attached.
Main error " Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match "
Other errors
"
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 89, in <module>
    ApplicationTimelineServer().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 38, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 811, in install_packages
    name = self.format_package_name(package['name'])
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 546, in format_package_name
    raise Fail("Cannot match package for regexp name {0}. Available packages: {1}".format(name, self.available_packages_in_repos))
resource_management.core.exceptions.Fail: Cannot match package for regexp name hadoop_${stack_version}-yarn. Available packages: ['accumulo', 'accumulo-conf-standalone', 'accumulo-source',
"
"2018-02-13 12:08:32,217 - The 'hadoop-hdfs-datanode' component did not advertise a version. This may indicate a problem with the component packaging. However, the stack-select tool was able to report a single version installed (2.6.4.0-91). This is the version that will be reported. Command aborted. Reason: 'Server considered task failed and automatically aborted it' "
$ yum repolist
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
repo id                                           repo name                                                    status
HDP-2.6-GPL-repo-1                                HDP-2.6-GPL-repo-1                                                4
HDP-2.6-repo-1                                    HDP-2.6-repo-1                                                  232
HDP-UTILS-1.1.0.22-repo-1                         HDP-UTILS-1.1.0.22-repo-1                                        16
ambari-2.6.1.0                                    ambari Version - ambari-2.6.1.0                                  12
rhui-REGION-client-config-server-7/x86_64         Red Hat Update Infrastructure 2.0 Client Configuration Ser        1
rhui-REGION-rhel-server-releases/7Server/x86_64   Red Hat Enterprise Linux Server 7 (RPMs)                     18,035
rhui-REGION-rhel-server-rh-common/7Server/x86_64  Red Hat Enterprise Linux Server 7 RH Common (RPMs)              231
repolist: 18,531
$ cat ambari.repo
#VERSION_NUMBER=2.6.1.0-143
[ambari-2.6.1.0]
name=ambari Version - ambari-2.6.1.0
baseurl=http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.6.1.0
gpgcheck=1
gpgkey=http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.6.1.0/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins
enabled=1
priority=1
$ cat ambari-hdp-1.repo
[HDP-2.6-repo-1]
name=HDP-2.6-repo-1
baseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.4.0
path=/
enabled=1
gpgcheck=0

[HDP-2.6-GPL-repo-1]
name=HDP-2.6-GPL-repo-1
baseurl=http://public-repo-1.hortonworks.com/HDP-GPL/centos7/2.x/updates/2.6.4.0
path=/
enabled=1
gpgcheck=0

[HDP-UTILS-1.1.0.22-repo-1]
name=HDP-UTILS-1.1.0.22-repo-1
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7
path=/
enabled=1
gpgcheck=0
[root@eim-preprod-namenode-1]:/etc/yum.repos.d
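For what it is worth, the "Cannot match package for regexp name hadoop_${stack_version}-yarn" failure suggests the agent could not see the hadoop_2_6_4_0_* packages at install time. The sanity checks I plan to run from the failing node look roughly like this (the package name pattern is inferred from the error above):

$ yum clean all && yum repolist enabled                     # make sure the HDP repo metadata is fresh
$ yum list available 'hadoop_2_6_4_0_91*' | grep -i yarn    # is the YARN package visible to yum at all?
$ curl -sI http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.4.0/repodata/repomd.xml | head -1    # is the baseurl reachable from this node?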
01-17-2018
10:19 AM
@Geoffrey Shelton Okot Thank you very much for the information.
01-16-2018
09:29 AM
@Geoffrey Shelton Okot I think HDP 2.4 is no longer downloadable from the Hortonworks site? For a new environment we would install the latest version, and only the latest one is downloadable; maybe there is some other link for 2.4. I also think it might be a bug in this version, since there is no hint to be found for this error apart from the regular steps you have provided. Below are the details you asked for.

Ambari Server
$ rpm -qa | grep -i ambari
ambari-server-2.2.1.0-161.x86_64
$ rpm -qa | grep -i hadoop
hadoop_2_4_0_0_169-mapreduce-2.7.1.2.4.0.0-169.el6.x86_64
hadoop_2_4_0_0_169-yarn-2.7.1.2.4.0.0-169.el6.x86_64
hadoop_2_4_0_0_169-libhdfs-2.7.1.2.4.0.0-169.el6.x86_64
hadoop_2_4_0_0_169-2.7.1.2.4.0.0-169.el6.x86_64
hadoop_2_4_0_0_169-hdfs-2.7.1.2.4.0.0-169.el6.x86_64

$ rpm -qa | grep -i ambari
ambari-metrics-monitor-2.2.1.0-161.x86_64
ambari-metrics-collector-2.2.1.0-161.x86_64
ambari-agent-2.2.1.0-161.x86_64
ambari-metrics-hadoop-sink-2.2.1.0-161.x86_64
$ rpm -qa | grep -i hdp
hdp-select-2.4.0.0-169.el6.noarch
01-16-2018
02:24 AM
@Geoffrey Shelton Okot Thanks, I will close the thread. Yes, the steps were verified multiple times and we still end up with that error. We have not even subscribed to Hortonworks basic support, and because of that risk we have not upgraded: if we got stuck on some issue, there would be no one to help. The client is aware of this.
01-15-2018
03:14 PM
@Geoffrey Shelton Okot Sorry, I was stuck with a few issues and missed replying. Yes, the steps you mentioned were all followed; I was getting the error shown in my first post, which is why I started this thread, and you have provided the same steps I had already followed. Not sure what is wrong, or whether it is a bug? This is what we get when running as user A or B, who are part of data_team in the ACL. 😞
$ hadoop fs -ls /abc/month=12
ls: Permission denied: user=A, access=EXECUTE, inode="/abc/month=12":abiuser:dfsusers:drwxrwx---
01-11-2018
02:31 AM
@Sandeep Kumar Yes, I have already referred to those documents and set things up as required. The problem is that it is not allowing the user to read a file that has the proper permission in the ACL. You may go through my initial postings for the steps. Thank you
01-11-2018
02:29 AM
@Geoffrey Shelton Okot First of all, thanks for your time and the outputs. The same thing was done, with only one difference: I gave the ACL permission to the group data_team with r-x instead of to individual users. In future there will be a requirement for other users to get read-only access, which I can then grant by simply adding them to the data_team group in Linux; I hope this should also work. Below is the command I used.
hdfs dfs -setfacl -m -R group:data_team:r-x /abc/month=12
Could you create a different file with abiuser as the owner and dfsusers as the group, and add an ACL for the group data_team with just read permission? Thank you.
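To double-check that the group route will work for a future read-only user, the verification I have in mind is roughly this (the user name is just a placeholder):

$ usermod -a -G data_team newuser                     # add the new user to the OS group (on the host where HDFS resolves group mappings, typically the NameNode)
$ hdfs groups newuser                                 # confirm HDFS maps the user to data_team
$ hdfs dfs -getfacl /abc/month=12 | grep data_team    # confirm the group ACL entry the user will hit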
01-10-2018
09:27 AM
@Geoffrey Shelton Okot
1. The ACL feature was enabled by adding the entry below to the custom hdfs-site.xml and restarting the required services from the Ambari console.
<property>
  <name>dfs.namenode.acls.enabled</name>
  <value>true</value>
</property>
2. I gave A and B as sample users; they have been added to the group data_team (at the Linux level) and they are not abiuser. abiuser is the owner of the file and dfsusers is the group of that file (/abc/month=12/file1.bcsf). The ACL permission was added for the group data_team using the command below.
hdfs dfs -setfacl -m -R group:data_team:r-x /abc/month=12/
3. With the above setup done, users A and B are still not able to read or access the files for which the ACL permission was given.
01-08-2018
05:11 AM
Dear all, I have enabled ACLs from the Ambari console, restarted the required services, and I'm able to set the permissions for a specific group as well. But when those users try to access the data it does not work. Need your suggestions. My HDP version is 2.4 with Hadoop 2.7.

getfacl output for the folder and file:
$ hdfs dfs -getfacl -R /abc/month=12/
# file: /abc/month=12
# owner: abiuser
# group: dfsusers
user::rwx
group::r-x
group:data_team:r--
mask::r-x
other::---
default:user::rwx
default:group::r-x
default:group:data_team:r-x
default:mask::r-x
default:other::---

# file: /abc/month=12/file1.bcsf
# owner: abiuser
# group: dfsusers
user::rwx
group::r--
group:data_team:r--
mask::r--
other::---

Users A and B are part of data_team; when they try to read the file we get the error below.
$ hadoop fs -ls /abc/month=12
ls: Permission denied: user=A, access=EXECUTE, inode="/abc/month=12":abiuser:dfsusers:drwxrwx---

Appreciate any suggestion / help? Thank you
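One thing I notice in the getfacl output myself: on the directory the named-group entry is only r-- (no execute) and the mask caps the effective permission, which matches the access=EXECUTE denial above. A possible fix I am considering (just a sketch, assuming r-x is what we want on directories):

$ hdfs dfs -setfacl -m group:data_team:r-x /abc/month=12    # directories need the execute bit to be listed/traversed
$ hdfs dfs -getfacl /abc/month=12                           # re-check the named entry and the mask afterwards
$ sudo -u A hadoop fs -ls /abc/month=12                     # verify as one of the affected users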
08-02-2017
09:17 AM
@Sindhu It has been quite a long time and we are still searching for a solution. Reading from Hive itself is fine; what we currently have is, for a single table, many .bcsf files (ingested by Ab Initio) that need to be read from HDFS and written to Hive for further processing. I know a few things, such as needing the table's metadata schema to create the table in Hive and then loading all the .bcsf files from HDFS as input. My doubts are:
1. Can Hive read that format from HDFS? (Your insight suggests we can read it from Hive, if I'm not wrong.)
2. If yes, how will it identify and combine the full history of these .bcsf files into a single Hive table?
Appreciate your reply.
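On doubt 2 specifically, if these .bcsf files turn out to be standard Hadoop SequenceFiles with block compression (that is my assumption; .bcsf is an Ab Initio naming convention, not a format Hive knows by name), then a single external table pointed at the directory would cover all the files in it. The rough shape I have in mind (table name, columns and path are placeholders):

$ hadoop fs -cat /abc/month=12/file1.bcsf | head -c 3    # standard SequenceFiles start with the magic bytes 'SEQ'
$ hive -e "
  CREATE EXTERNAL TABLE bcsf_test (col1 STRING, col2 BIGINT)
  STORED AS SEQUENCEFILE
  LOCATION '/abc/month=12';
  SELECT * FROM bcsf_test LIMIT 10;"

If the record payload is Ab Initio's own serialization rather than plain text values, a custom SerDe or InputFormat would still be needed, so the SELECT above is only a smoke test of whether Hive can deserialize anything at all.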
01-26-2017
08:11 AM
@Sindhu Will try and let you know. Thank you for the information.
01-23-2017
06:55 AM
Hi All, we have data files ingested via Ab Initio onto Hadoop in block compressed sequential file (.bcsf) format. Currently they can only be read by the Ab Initio components via their queryit engine / ODBC connection, which is a limitation. Can they be read by Hadoop components such as Hive, or interpreted by a MapReduce program? If so, could someone give me some steps or clarifications on the same? Thank you.
Labels:
- Apache Hadoop
- Apache Hive
11-21-2016
03:15 AM
@Kuldeep Kulkarni Thanks for your comment. We are using HDP 2.4 with Hadoop 2.7.2, and I'm not sure this feature is available in that version. I enabled read-only permission on the .Trash folder for the users. I also had snapshots enabled for a directory, which is not protective enough, since users can still delete the folders inside a snapshottable directory. Maybe an upgrade would be the last option to explore and use it. Thank you very much for the comments.
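For reference, the snapshot handling I described was along these lines (a sketch; the paths are examples). The snapshot keeps a read-only copy for recovery, but as noted it does not stop anyone with write access from deleting the live folders:

$ hdfs dfsadmin -allowSnapshot /data/important               # make the directory snapshottable (admin command)
$ hdfs dfs -createSnapshot /data/important before_cleanup    # take a named snapshot
$ hdfs dfs -ls /data/important/.snapshot/before_cleanup      # deleted files remain readable under the .snapshot path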