
NameNode can't start after a crash

Explorer

2015-07-31 13:32:46,403 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.lang.NumberFormatException: For input string: "????^Ai^@^@^@???????^An^@^@^@???????^Al^@^@^@???????^Ae^@^@^@???????^At^@^@^@??+^@????o^@^@^@???????^Ap^@^@^@???????^Ae^@^@^@???????^An^@^@^@???????^Ai^@^@^@???????^An^@^@^@???????^Ag^@^@^@??+^@????h^@^@^@???????^Ay^@^@^@???????^As^@^@^@???????^Ai^@^@^@???????^Ac^@^@^@???????^As^@^@^@???????^A ^@^@^@???????^At^@^@^@???????^Au^@^@^@???????^At^@^@^@???????^Ao^@^@^@???????^Ar^@^@^@??+^@???? ^@^@^@???????^Ad^@^@^@???????^Ae^@^@^@???????^Al^@^@^@???????^Ag^@^@^@???????^Aa^@^@^@???????^Ad^@^@^@????^A??^Ao^@^@^@????^D??^Ai^@^@^@??+^@????l^@^@^@????^L??^Aa^@^@^@????^O??^Am^@^@^@????^R??^Ae^@^@^@????^U??^Al^@^@^@????^X??^Al^@^@^@????^[??^Aa^@^@^@????^^??^At^@^@^@????!??^Aa^@^@^@??+^@????r^@^@^@????(??^Ai^@^@^@????+??^Ac^@^@^@????.??^Ak^@^@^@????1??^Ae^@^@^@????4??^At^@^@^@????7??^A ^@^@^@????:??^At^@^@^@????=??^Ae^@^@^@????@??^Aa^@^@^@????C??^Am^@^@^@??+^@????h^@^@^@????J??^Ay^@^@^@????M??^Ap^@^@^@????P??^Ao^@^@^@????S??^Ag^@^@^@????V??^Al^@^@^@????Y??^Aa^@^@^@????\??^Au^@^@^@????_??^Ac^@^@^@????b??^Aa^@^@^@??+^@???? ^@^@^@????h??^Ad^@^@^@????k??^Ai^@^@^@????n??^As^@^@^@????q??^At^@^@^@????t??^Ar^@^@^@????w??^Ai^@^@^@????z??^Ac^@^@^@????}??^At^@^@^@??+^@????s^@^@^@??+^@????o^@^@^@???????^Ao^@^@^@???????^At^@^@^@???????^Ab^@^@^@???????^Aa^@^@^@???????^Al^@^@^@???????^Al^@^@^@???????^Ae^@^@^@???????^Ar^@^@^@??+^@????i^@^@^@???????^Ag^@^@^@???????^Ah^@^@^@???????^A ^@^@^@???????^As^@^@^@???????^Ac^@^@^@???????^Ah^@^@^@???????^Ao^@^@^@???????^Ao^@^@^@???????^Al^@^@^@??+^@????r^@^@^@???????^Ao^@^@^@???????^Ac^@^@^@???????^Ak^@^@^@???????^Ae^@^@^@???????^At^@^@^@???????^A ^@^@^@???????^Ar^@^@^@???????^Aa^@^@^@???????^An^@^@^@???????^Ag^@^@^@???????^Ae^@^@^@??+^@????k^@^@^@???????^Aa^@^@^@???????^Ar^@^@^@???????^Aa^@^@^@???????^Ag^@^@^@???????^Ae^@^@^@???????^Az^@^@^@???????^Ay^@^@^@????^B??^Aa^@^@^@????^E??^An^@^@^@??+^@????e^@^@^@????"??^Ai^@^@^@????"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)

6 REPLIES

Mentor
Could you share the full exception trace? It appears your fsimage, or one
of your edit logs, has somehow gotten some corrupt data in it. If it's the
edit log, you can perhaps attempt to skip a few entries in the 'hdfs
namenode -recover' startup mode; but if it's the fsimage that is corrupt,
then we'll need to roll back to an older copy and replay edit logs on top.

The full exception trace can help tell which file is corrupt.
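For reference, the edit-log recovery attempt would look roughly like the
following, run as the hdfs user on the NameNode host (a sketch only; stop the
NameNode first, and the exact stop/start commands depend on whether you manage
the cluster through Cloudera Manager):

# run the NameNode in recovery mode; it prompts interactively whenever it
# hits a corrupt or unreadable edit-log entry and lets you skip or stop
sudo -u hdfs hdfs namenode -recover

# afterwards, start the NameNode again and watch its log for errors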

Explorer

************************************************************/
2015-07-31 13:32:44,351 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = xxxxxxxxxx
STARTUP_MSG: args = []
STARTUP_MSG: version = 2.6.0-cdh5.4.2
STARTUP_MSG: classpath = /var/run/cloudera-scm-agent/process/402-hdfs-NAMENODE:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/avro.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/logredactor-1.0.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/commons-net-3.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/slf4j-log4j12.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/java-xmlbuilder-0.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/curator-framework-2.7.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/netty-3.6.2.Final.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/curator-recipes-2.7.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/junit-4.11.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/mockito-all-1.8.5.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/jets3t-0.9.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/gson-2.2.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/commons-beanutils-1.7.0.
jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/zookeeper.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/htrace-core-3.0.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/curator-client-2.7.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/hue-plugins-3.7.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/aws-java-sdk-1.7.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/slf4j-api-1.7.5.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/httpclient-4.2.5.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-protobuf.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-format-sources.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-generator.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-pig-bundle.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//hadoop-auth.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//hadoop-nfs.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-common.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-hadoop-bundle.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-hadoop.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-cascading.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-tools.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-test-hadoop2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-format.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-scala_2.10.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-pig.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-column.jar:/opt/
cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//hadoop-aws.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-scrooge_2.10.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//hadoop-nfs-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-thrift.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//hadoop-common-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//hadoop-common.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-avro.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-encoding.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//hadoop-annotations-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-format-javadoc.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//hadoop-auth-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//hadoop-annotations.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//hadoop-common-2.6.0-cdh5.4.2-tests.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//hadoop-common-tests.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//parquet-jackson.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/.//hadoop-aws-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/./:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/netty-3.6.2.Final.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/commons-daemon-1.0.13.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/htrace-core-3.0.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5
.4.2.p0.2/lib/hadoop-hdfs/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/.//hadoop-hdfs-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/.//hadoop-hdfs-tests.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/.//hadoop-hdfs-nfs.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/.//hadoop-hdfs-nfs-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/.//hadoop-hdfs.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-hdfs/.//hadoop-hdfs-2.6.0-cdh5.4.2-tests.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/jersey-guice-1.9.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/jersey-client-1.9.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/guice-3.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/jline-2.11.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/javax.inject-1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/zookeeper.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/activation-1.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/spark-1.3.0-cdh5.4.2-yarn-shuffle.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/jackson
-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/lib/commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-registry-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-client.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-server-common-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-server-tests.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-common-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-client-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-applications-unmanaged-am-launcher.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-server-applicationhistoryservice-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-api.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-server-common.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-server-nodemanager-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-api-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-registry.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-applications-distributedshell.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-server-tests-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-common.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-server-web-proxy-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-yarn/.//hadoop-yarn-server-resourcemanager-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/jersey-guice-1.9.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/avro.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.
4.2.p0.2/lib/hadoop-mapreduce/lib/hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/netty-3.6.2.Final.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/junit-4.11.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/guice-3.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/aopalliance-1.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/javax.inject-1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/asm-3.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/xz-1.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/guice-servlet-3.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/leveldbjni-all-1.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jasper-compiler-5.5.23.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//avro.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//commons-lang-2.6.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//api-util-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jersey-server-1.9.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-datajoin-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//microsoft-windowsazure-storage-sdk-0.6.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-auth.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jackson-xc-1.8.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//commons-codec-1.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-sls-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-gridmix-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hamcrest-core-1.3.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jersey-core-1.9.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jetty-util-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1
.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//commons-net-3.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//servlet-api-2.5.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//xmlenc-0.52.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-rumen.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//paranamer-2.3.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//java-xmlbuilder-0.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//log4j-1.2.17.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//curator-framework-2.7.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.6.0-cdh5.4.2-tests.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-ant.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//commons-collections-3.2.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jaxb-impl-2.2.3-1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-streaming-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-distcp-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jetty-6.1.26.cloudera.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-core.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//commons-compress-1.4.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//curator-recipes-2.7.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//junit-4.11.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jaxb-api-2.2.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//mockito-all-1.8.5.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jets3t-0.9.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jsp-api-2.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//gson-2.2.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//guava-11.0.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//commons-beanutils-core-1.8.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//commons-io-2.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jersey-json-1.9.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-distcp
.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//apacheds-kerberos-codec-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-sls.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jettison-1.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//protobuf-java-2.5.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//stax-api-1.0-2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-nativetask.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//commons-cli-1.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//commons-beanutils-1.7.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-app-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-examples.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-azure.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//zookeeper.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-extras.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//activation-1.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-rumen-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-shuffle-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jackson-databind-2.2.3.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-common-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jackson-annotations-2.2.3.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//asm-3.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jackson-jaxrs-1.8.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-archives.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//httpcore-4.2.5.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//snappy-java-1.0.4.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//commons-digester-1.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//commons-configuration-1.6.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-datajoin.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//commons-el-1.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//apacheds-i18n-2.0.0-M15.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jsch-0.1.42.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-archives-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jasper-runtime-5.5.23.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-extras-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//metrics-core-3.0.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-ant-2.6.0-cdh5.4.2.jar:/
opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//htrace-core-3.0.4.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jackson-core-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//xz-1.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//curator-client-2.7.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//api-asn1-api-1.0.0-M20.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jackson-core-2.2.3.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-streaming.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-gridmix.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-auth-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//commons-httpclient-3.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jsr305-3.0.0.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-jobclient-tests.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-mapreduce-client-hs-plugins.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//commons-math3-3.1.1.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//hadoop-azure-2.6.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//jackson-mapper-asl-1.8.8.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//commons-logging-1.1.3.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop-mapreduce/.//httpclient-4.2.5.jar:/usr/share/cmf/lib/plugins/event-publish-5.4.1-shaded.jar:/usr/share/cmf/lib/plugins/tt-instrumentation-5.4.1.jar:/usr/share/cmf/lib/plugins/cdh5/audit-plugin-cdh5-2.3.1-shaded.jar:/opt/cloudera/parcels/GPLEXTRAS-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/hadoop-lzo-0.4.15-cdh5.4.2.jar:/opt/cloudera/parcels/GPLEXTRAS-5.4.2-1.cdh5.4.2.p0.2/lib/hadoop/lib/hadoop-lzo.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/sentry/lib/plugins/sentry-hdfs-1.4.0-cdh5.4.2.jar:/opt/cloudera/parcels/CDH-5.4.2-1.cdh5.4.2.p0.2/lib/sentry/lib/plugins/sentry-hdfs.jar
STARTUP_MSG: build = http://github.com/cloudera/hadoop -r 15b703c8725733b7b2813d2325659eb7d57e7a3f; compiled by 'jenkins' on 2015-05-20T00:03Z
STARTUP_MSG: java = 1.7.0_67
************************************************************/
2015-07-31 13:32:44,385 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
2015-07-31 13:32:44,388 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: createNameNode []
2015-07-31 13:32:44,712 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2015-07-31 13:32:44,840 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2015-07-31 13:32:44,841 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2015-07-31 13:32:44,845 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: fs.defaultFS is hdfs:xxxxxxxxxx:8020
2015-07-31 13:32:44,845 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Clients are to use xxxxxxxx to access this namenode/service.
2015-07-31 13:32:45,210 INFO org.apache.hadoop.hdfs.DFSUtil: Starting Web-server for hdfs at: xxxxxxxxxx:50070
2015-07-31 13:32:45,271 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2015-07-31 13:32:45,277 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.namenode is not defined
2015-07-31 13:32:45,292 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2015-07-31 13:32:45,296 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2015-07-31 13:32:45,297 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2015-07-31 13:32:45,297 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2015-07-31 13:32:45,339 INFO org.apache.hadoop.http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2015-07-31 13:32:45,341 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2015-07-31 13:32:45,362 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50070
2015-07-31 13:32:45,363 INFO org.mortbay.log: jetty-6.1.26.cloudera.4
2015-07-31 13:32:45,726 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@xxxxxxxxxx:50070
2015-07-31 13:32:45,776 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-07-31 13:32:45,776 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2015-07-31 13:32:45,836 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: No KeyProvider found.
2015-07-31 13:32:45,847 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsLock is fair:true
2015-07-31 13:32:45,903 INFO org.apache.hadoop.util.HostsFileReader: Adding xxxxx to the list of included hosts from /var/run/cloudera-scm-agent/process/402-hdfs-NAMENODE/dfs_hosts_allow.txt
2015-07-31 13:32:45,903 INFO org.apache.hadoop.util.HostsFileReader: Adding xxxxxx to the list of included hosts from /var/run/cloudera-scm-agent/process/402-hdfs-NAMENODE/dfs_hosts_allow.txt
2015-07-31 13:32:45,903 INFO org.apache.hadoop.util.HostsFileReader: Adding xxxxxxxx to the list of included hosts from /var/run/cloudera-scm-agent/process/402-hdfs-NAMENODE/dfs_hosts_allow.txt
2015-07-31 13:32:45,938 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
2015-07-31 13:32:45,938 INFO org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
2015-07-31 13:32:45,940 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
2015-07-31 13:32:45,940 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: The block deletion will start around 2015 Jul 31 13:32:45
2015-07-31 13:32:45,943 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlocksMap
2015-07-31 13:32:45,943 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2015-07-31 13:32:45,945 INFO org.apache.hadoop.util.GSet: 2.0% max memory 989.9 MB = 19.8 MB
2015-07-31 13:32:45,945 INFO org.apache.hadoop.util.GSet: capacity = 2^21 = 2097152 entries
2015-07-31 13:32:45,956 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: dfs.block.access.token.enable=false
2015-07-31 13:32:45,956 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: defaultReplication = 3
2015-07-31 13:32:45,956 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplication = 512
2015-07-31 13:32:45,956 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: minReplication = 1
2015-07-31 13:32:45,957 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxReplicationStreams = 20
2015-07-31 13:32:45,957 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: shouldCheckForEnoughRacks = true
2015-07-31 13:32:45,957 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: replicationRecheckInterval = 3000
2015-07-31 13:32:45,957 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: encryptDataTransfer = false
2015-07-31 13:32:45,957 INFO org.apache.hadoop.hdfs.server.blockmanagement.BlockManager: maxNumBlocksToLog = 1000
2015-07-31 13:32:45,964 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner = hdfs (auth:SIMPLE)
2015-07-31 13:32:45,964 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup = supergroup
2015-07-31 13:32:45,964 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled = false
2015-07-31 13:32:45,964 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: HA Enabled: false
2015-07-31 13:32:45,966 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Append Enabled: true
2015-07-31 13:32:46,196 INFO org.apache.hadoop.util.GSet: Computing capacity for map INodeMap
2015-07-31 13:32:46,196 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2015-07-31 13:32:46,196 INFO org.apache.hadoop.util.GSet: 1.0% max memory 989.9 MB = 9.9 MB
2015-07-31 13:32:46,197 INFO org.apache.hadoop.util.GSet: capacity = 2^20 = 1048576 entries
2015-07-31 13:32:46,198 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2015-07-31 13:32:46,209 INFO org.apache.hadoop.util.GSet: Computing capacity for map cachedBlocks
2015-07-31 13:32:46,209 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2015-07-31 13:32:46,209 INFO org.apache.hadoop.util.GSet: 0.25% max memory 989.9 MB = 2.5 MB
2015-07-31 13:32:46,210 INFO org.apache.hadoop.util.GSet: capacity = 2^18 = 262144 entries
2015-07-31 13:32:46,212 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
2015-07-31 13:32:46,212 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
2015-07-31 13:32:46,212 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
2015-07-31 13:32:46,217 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
2015-07-31 13:32:46,217 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
2015-07-31 13:32:46,217 INFO org.apache.hadoop.hdfs.server.namenode.top.metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2015-07-31 13:32:46,219 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache on namenode is enabled
2015-07-31 13:32:46,219 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2015-07-31 13:32:46,222 INFO org.apache.hadoop.util.GSet: Computing capacity for map NameNodeRetryCache
2015-07-31 13:32:46,222 INFO org.apache.hadoop.util.GSet: VM type = 64-bit
2015-07-31 13:32:46,223 INFO org.apache.hadoop.util.GSet: 0.029999999329447746% max memory 989.9 MB = 304.1 KB
2015-07-31 13:32:46,223 INFO org.apache.hadoop.util.GSet: capacity = 2^15 = 32768 entries
2015-07-31 13:32:46,228 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: ACLs enabled? false
2015-07-31 13:32:46,228 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: XAttrs enabled? true
2015-07-31 13:32:46,228 INFO org.apache.hadoop.hdfs.server.namenode.NNConf: Maximum size of an xattr: 16384
2015-07-31 13:32:46,238 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/dfs/nn/in_use.lock acquired by nodename 15848@hz01-safe-hadoop04.hz01.baidu.com
2015-07-31 13:32:46,403 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.lang.NumberFormatException: For input string: "????i???????n???????l???????e???????t??+????o???????p???????e???????n???????i???????n???????g??+????h???????y???????s???????i???????c???????s??????? ???????t???????u???????t???????o???????r??+???? ???????d???????e???????l???????g???????a???????d??????o??????i??+????l????
??a??????m??????e??????l??????l?????a??????t????!??a??+????r????(??i????+??c????.??k????1??e????4??t????7?? ????:??t????=??e????@??a????C??m??+????h????J??y????M??p????P??o????S??g????V??l????Y??a????\??u????_??c????b??a??+???? ????h??d????k??i????n??s????q??t????t??r????w??i????z??c????}??t??+????s??+????o???????o???????t???????b???????a???????l???????l???????e???????r??+????i???????g???????h??????? ???????s???????c???????h???????o???????o???????l??+????r???????o???????c???????k???????e???????t??????? ???????r???????a???????n???????g???????e??+????k???????a???????r???????a???????g???????e???????z???????y??????a??????n??+????e????"??i????"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:441)
at java.lang.Long.parseLong(Long.java:483)
at org.apache.hadoop.hdfs.util.PersistentLongFile.readFile(PersistentLongFile.java:98)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.readTransactionIdFile(NNStorage.java:426)
at org.apache.hadoop.hdfs.server.namenode.FSImageTransactionalStorageInspector.inspectDirectory(FSImageTransactionalStorageInspector.java:93)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.inspectStorageDirs(NNStorage.java:1012)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.readAndInspectDirs(NNStorage.java:1067)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:611)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:281)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1058)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:762)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:584)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:643)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:810)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:794)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1487)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1553)
2015-07-31 13:32:46,423 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2015-07-31 13:32:46,427 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at xxxxx
************************************************************/

Explorer

Recovery also fails.

 

15/07/31 14:56:11 INFO namenode.MetaRecoveryContext: starting recovery...
15/07/31 14:56:11 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
15/07/31 14:56:11 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
15/07/31 14:56:11 INFO namenode.FSNamesystem: No KeyProvider found.
15/07/31 14:56:11 INFO namenode.FSNamesystem: fsLock is fair:true
15/07/31 14:56:11 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/07/31 14:56:11 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/07/31 14:56:11 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/07/31 14:56:11 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Jul 31 14:56:11
15/07/31 14:56:11 INFO util.GSet: Computing capacity for map BlocksMap
15/07/31 14:56:11 INFO util.GSet: VM type = 64-bit
15/07/31 14:56:11 INFO util.GSet: 2.0% max memory 958.5 MB = 19.2 MB
15/07/31 14:56:11 INFO util.GSet: capacity = 2^21 = 2097152 entries
15/07/31 14:56:12 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/07/31 14:56:12 INFO blockmanagement.BlockManager: defaultReplication = 3
15/07/31 14:56:12 INFO blockmanagement.BlockManager: maxReplication = 512
15/07/31 14:56:12 INFO blockmanagement.BlockManager: minReplication = 1
15/07/31 14:56:12 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
15/07/31 14:56:12 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks = true
15/07/31 14:56:12 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/07/31 14:56:12 INFO blockmanagement.BlockManager: encryptDataTransfer = false
15/07/31 14:56:12 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
15/07/31 14:56:12 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
15/07/31 14:56:12 INFO namenode.FSNamesystem: supergroup = supergroup
15/07/31 14:56:12 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/07/31 14:56:12 INFO namenode.FSNamesystem: HA Enabled: false
15/07/31 14:56:12 INFO namenode.FSNamesystem: Append Enabled: true
15/07/31 14:56:12 INFO util.GSet: Computing capacity for map INodeMap
15/07/31 14:56:12 INFO util.GSet: VM type = 64-bit
15/07/31 14:56:12 INFO util.GSet: 1.0% max memory 958.5 MB = 9.6 MB
15/07/31 14:56:12 INFO util.GSet: capacity = 2^20 = 1048576 entries
15/07/31 14:56:12 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/07/31 14:56:12 INFO util.GSet: Computing capacity for map cachedBlocks
15/07/31 14:56:12 INFO util.GSet: VM type = 64-bit
15/07/31 14:56:12 INFO util.GSet: 0.25% max memory 958.5 MB = 2.4 MB
15/07/31 14:56:12 INFO util.GSet: capacity = 2^18 = 262144 entries
15/07/31 14:56:12 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/07/31 14:56:12 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/07/31 14:56:12 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
15/07/31 14:56:12 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
15/07/31 14:56:12 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
15/07/31 14:56:12 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
15/07/31 14:56:12 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/07/31 14:56:12 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/07/31 14:56:12 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/07/31 14:56:12 INFO util.GSet: VM type = 64-bit
15/07/31 14:56:12 INFO util.GSet: 0.029999999329447746% max memory 958.5 MB = 294.5 KB
15/07/31 14:56:12 INFO util.GSet: capacity = 2^15 = 32768 entries
15/07/31 14:56:12 INFO namenode.NNConf: ACLs enabled? false
15/07/31 14:56:12 INFO namenode.NNConf: XAttrs enabled? true
15/07/31 14:56:12 INFO namenode.NNConf: Maximum size of an xattr: 16384
15/07/31 14:56:12 INFO hdfs.StateChange: STATE* Safe mode is ON.
It was turned on manually. Use "hdfs dfsadmin -safemode leave" to turn safe mode off.
15/07/31 14:56:12 INFO common.Storage: Lock on /home/dfs/nn/in_use.lock acquired by nodename 7504@hz01-safe-hadoop04.hz01.baidu.com
15/07/31 14:56:12 INFO namenode.MetaRecoveryContext: RECOVERY FAILED: caught exception
java.lang.NumberFormatException: For input string: "????i???????n???????l???????e???????t??+????o???????p???????e???????n???????i???????n???????g??+????h???????y???????s???????i???????c???????s??????? ???????t???????u???????t???????o???????r??+???? ???????d???????e???????l???????g???????a???????d??????o??????i??+????l????
??a??????m??????e??????l??????l?????a??????t????!??a??+????r????(??i????+??c????.??k????1??e????4??t????7?? ????:??t????=??e????@??a????C??m??+????h????J??y????M??p????P??o????S??g????V??l????Y??a????\??u????_??c????b??a??+???? ????h??d????k??i????n??s????q??t????t??r????w??i????z??c????}??t??+????s??+????o???????o???????t???????b???????a???????l???????l???????e???????r??+????i???????g???????h??????? ???????s???????c???????h???????o???????o???????l??+????r???????o???????c???????k???????e???????t??????? ???????r???????a???????n???????g???????e??+????k???????a???????r???????a???????g???????e???????z???????y??????a??????n??+????e????"??i????"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:441)
at java.lang.Long.parseLong(Long.java:483)
at org.apache.hadoop.hdfs.util.PersistentLongFile.readFile(PersistentLongFile.java:98)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.readTransactionIdFile(NNStorage.java:426)
at org.apache.hadoop.hdfs.server.namenode.FSImageTransactionalStorageInspector.inspectDirectory(FSImageTransactionalStorageInspector.java:93)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.inspectStorageDirs(NNStorage.java:1012)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.readAndInspectDirs(NNStorage.java:1067)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:611)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:281)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1058)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:762)
at org.apache.hadoop.hdfs.server.namenode.NameNode.doRecovery(NameNode.java:1381)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1471)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1553)
15/07/31 14:56:12 ERROR namenode.NameNode: Failed to start namenode.
java.lang.NumberFormatException: For input string: "????i???????n???????l???????e???????t??+????o???????p???????e???????n???????i???????n???????g??+????h???????y???????s???????i???????c???????s??????? ???????t???????u???????t???????o???????r??+???? ???????d???????e???????l???????g???????a???????d??????o??????i??+????l????
??a??????m??????e??????l??????l?????a??????t????!??a??+????r????(??i????+??c????.??k????1??e????4??t????7?? ????:??t????=??e????@??a????C??m??+????h????J??y????M??p????P??o????S??g????V??l????Y??a????\??u????_??c????b??a??+???? ????h??d????k??i????n??s????q??t????t??r????w??i????z??c????}??t??+????s??+????o???????o???????t???????b???????a???????l???????l???????e???????r??+????i???????g???????h??????? ???????s???????c???????h???????o???????o???????l??+????r???????o???????c???????k???????e???????t??????? ???????r???????a???????n???????g???????e??+????k???????a???????r???????a???????g???????e???????z???????y??????a??????n??+????e????"??i????"
at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at java.lang.Long.parseLong(Long.java:441)
at java.lang.Long.parseLong(Long.java:483)
at org.apache.hadoop.hdfs.util.PersistentLongFile.readFile(PersistentLongFile.java:98)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.readTransactionIdFile(NNStorage.java:426)
at org.apache.hadoop.hdfs.server.namenode.FSImageTransactionalStorageInspector.inspectDirectory(FSImageTransactionalStorageInspector.java:93)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.inspectStorageDirs(NNStorage.java:1012)
at org.apache.hadoop.hdfs.server.namenode.NNStorage.readAndInspectDirs(NNStorage.java:1067)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:611)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:281)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1058)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:762)
at org.apache.hadoop.hdfs.server.namenode.NameNode.doRecovery(NameNode.java:1381)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1471)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1553)
15/07/31 14:56:12 INFO util.ExitUtil: Exiting with status 1
15/07/31 14:56:12 INFO namenode.NameNode: SHUTDOWN_MSG:

Mentor
The trace is actually a relief, in that it fails in the "readTransactionIdFile"
method, which reads the file called "seen_txid" inside the NN's
current/ local directory. Please try moving this file out of the way (to /tmp
or elsewhere) and then restart the NN.

Are you perchance running any form of disk encryption software that may not
be active yet? The corrupted data is weird - the file is supposed to have a
simple number in it.
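
Concretely, the suggested fix amounts to something like the following on the
NameNode host (a sketch only; /home/dfs/nn is the metadata directory shown in
the in_use.lock line of the log above, and the restart is done however you
normally manage the service, e.g. via Cloudera Manager):

# seen_txid should contain nothing but a single number (the last transaction id)
cat /home/dfs/nn/current/seen_txid

# if it contains garbage instead, move it out of the way and restart the NN;
# per the advice above, the NameNode should then start and write a fresh copy
mv /home/dfs/nn/current/seen_txid /tmp/seen_txid.corrupt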

Explorer
Thank you very much. It works.

New Contributor

Hi,

I am also getting the same error message in the NameNode logs. I have tried the
solution above, but in my case the seen_txid file is present only in the folder
tmp/hadoop-root/dfs/name/current.

Is there any other solution?