MrBench not running on our Hadoop multi-node cluster

We tried running some benchmarks on our Hadoop multi-node cluster, but the run failed with the following error:

[hadoop1@hadoop-master hadoop1]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-tests.jar mrbench -numRuns 10
MRBenchmark.0.0.2
16/02/26 19:58:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/02/26 19:58:38 INFO mapred.MRBench: creating control file: 1 numLines, ASCENDING sortOrder
16/02/26 19:58:39 INFO mapred.MRBench: created control file: /benchmarks/MRBench/mr_input/input_-1334351305.txt
16/02/26 19:58:39 INFO mapred.MRBench: Running job 0: input=hdfs://hadoop-master:9000/benchmarks/MRBench/mr_input output=hdfs://hadoop-master:9000/benchmarks/MRBench/mr_output/output_-1508661308
16/02/26 19:58:39 INFO client.RMProxy: Connecting to ResourceManager at hadoop-master/172.17.25.28:8032
16/02/26 19:58:39 INFO client.RMProxy: Connecting to ResourceManager at hadoop-master/172.17.25.28:8032
16/02/26 19:58:40 INFO mapred.FileInputFormat: Total input paths to process : 4
16/02/26 19:58:40 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Bad connect ack with firstBadLink as 172.17.25.7:50010
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1460)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
16/02/26 19:58:40 INFO hdfs.DFSClient: Abandoning BP-1525056140-172.17.25.28-1456424529405:blk_1073742299_1475
16/02/26 19:58:40 INFO hdfs.DFSClient: Excluding datanode 172.17.25.7:50010
16/02/26 19:58:40 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Bad connect ack with firstBadLink as 172.17.25.4:50010
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1460)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
16/02/26 19:58:40 INFO hdfs.DFSClient: Abandoning BP-1525056140-172.17.25.28-1456424529405:blk_1073742300_1476
16/02/26 19:58:40 INFO hdfs.DFSClient: Excluding datanode 172.17.25.4:50010
16/02/26 19:58:40 INFO mapreduce.JobSubmitter: number of splits:4
16/02/26 19:58:40 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1456496787816_0001
16/02/26 19:58:40 INFO impl.YarnClientImpl: Submitted application application_1456496787816_0001
16/02/26 19:58:41 INFO mapreduce.Job: The url to track the job: http://hadoop-master:8088/proxy/application_1456496787816_0001/
16/02/26 19:58:41 INFO mapreduce.Job: Running job: job_1456496787816_0001
16/02/26 19:58:56 INFO mapreduce.Job: Job job_1456496787816_0001 running in uber mode : false
16/02/26 19:58:56 INFO mapreduce.Job:  map 0% reduce 0%
16/02/26 19:58:57 INFO mapreduce.Job: Job job_1456496787816_0001 failed with state FAILED due to: Application application_1456496787816_0001 failed 2 times due to AM Container for appattempt_1456496787816_0001_000002 exited with  exitCode: -103
For more detailed output, check application tracking page:http://hadoop-master:8088/proxy/application_1456496787816_0001/Then, click on links to logs of each attempt.
Diagnostics: Container [pid=9042,containerID=container_1456496787816_0001_02_000001] is running beyond virtual memory limits. Current usage: 459.2 MB of 1 GB physical memory used; 2.5 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1456496787816_0001_02_000001 :
    |- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
    |- 9050 9042 9042 9042 (java) 1045 40 2679939072 116856 /opt/jdk1.8.0_65/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/hadoop1/hadoop1/logs/userlogs/application_1456496787816_0001/container_1456496787816_0001_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA -Xmx768m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 
    |- 9042 9040 9042 9042 (bash) 0 0 17035264 708 /bin/bash -c /opt/jdk1.8.0_65/bin/java -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/hadoop1/hadoop1/logs/userlogs/application_1456496787816_0001/container_1456496787816_0001_02_000001 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA  -Xmx768m org.apache.hadoop.mapreduce.v2.app.MRAppMaster 1>/home/hadoop1/hadoop1/logs/userlogs/application_1456496787816_0001/container_1456496787816_0001_02_000001/stdout 2>/home/hadoop1/hadoop1/logs/userlogs/application_1456496787816_0001/container_1456496787816_0001_02_000001/stderr  

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
Failing this attempt. Failing the application.
16/02/26 19:58:57 INFO mapreduce.Job: Counters: 0
java.io.IOException: Job failed!
    at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
    at org.apache.hadoop.mapred.MRBench.runJobInSequence(MRBench.java:192)
    at org.apache.hadoop.mapred.MRBench.run(MRBench.java:290)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.hadoop.mapred.MRBench.main(MRBench.java:212)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.test.MapredTestDriver.run(MapredTestDriver.java:118)
    at org.apache.hadoop.test.MapredTestDriver.main(MapredTestDriver.java:126)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
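Besides the container kill, the log above also shows "Bad connect ack with firstBadLink" against 172.17.25.7 and 172.17.25.4. A quick TCP probe of the datanode transfer port (50010) can at least rule out a firewall or binding problem; this is just a minimal sketch, with the IPs taken from the log above:

```python
import socket

def can_connect(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Probe the datanode transfer port reported in the "Bad connect ack" lines:
# for ip in ("172.17.25.7", "172.17.25.4"):
#     print(ip, can_connect(ip, 50010))
```

If this returns False from the client machine, the problem is network-level (firewall, wrong bind address in the datanode config) rather than YARN memory settings.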

Our yarn-site.xml is as follows:

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>hadoop-master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>hadoop-master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>hadoop-master:8031</value>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <value>128</value>
        <description>Minimum limit of memory to allocate to each container request at the Resource Manager.</description>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>2048</value>
        <description>Maximum limit of memory to allocate to each container request at the Resource Manager.</description>
    </property>
    <property>
        <name>yarn.scheduler.minimum-allocation-vcores</name>
        <value>1</value>
        <description>The minimum allocation for every container request at the RM, in terms of virtual CPU cores. Requests lower than this won't take effect, and the specified value will get allocated the minimum.</description>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-vcores</name>
        <value>2</value>
        <description>The maximum allocation for every container request at the RM, in terms of virtual CPU cores. Requests higher than this won't take effect, and will get capped to this value.</description>
    </property>
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>48960</value>
        <description>Physical memory, in MB, to be made available to running containers</description>
    </property>
    <property>
        <name>yarn.nodemanager.resource.cpu-vcores</name>
        <value>4</value>
        <description>Number of CPU cores that can be allocated for containers.</description>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
        <description>Whether virtual memory limits will be enforced for containers.</description>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-pmem-ratio</name>
        <value>4</value>
        <description>Ratio of virtual memory to physical memory when setting memory limits for containers.</description>
    </property>
</configuration>
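As a sanity check that the file is well-formed and that the vmem properties are actually present, we can parse it the same way Hadoop does. This is a minimal sketch that parses an inline copy of the two relevant properties; in practice you would point it at $HADOOP_CONF_DIR/yarn-site.xml on each NodeManager host:

```python
import xml.etree.ElementTree as ET

# In practice: root = ET.parse("/path/to/etc/hadoop/yarn-site.xml").getroot()
# Here we parse an inline fragment containing the two vmem-related properties.
yarn_site = """
<configuration>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-pmem-ratio</name>
    <value>4</value>
  </property>
</configuration>
"""

root = ET.fromstring(yarn_site)
props = {p.findtext("name"): p.findtext("value") for p in root.findall("property")}
print(props["yarn.nodemanager.vmem-check-enabled"])  # false
print(props["yarn.nodemanager.vmem-pmem-ratio"])     # 4
```

Note that NodeManager-side properties like these only take effect on the nodes running the NodeManagers, and only after those daemons are restarted.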

This looks like a memory configuration problem. Our namenode machine has 64 GB of RAM and a 3 TB hard disk. Kindly help us with this problem.
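For reference, the 2.1 GB virtual-memory limit in the kill message is the container's physical memory multiplied by yarn.nodemanager.vmem-pmem-ratio. A quick check with the numbers from the log suggests the NodeManagers are still using the YARN default ratio of 2.1, not the 4 we configured:

```python
# Virtual-memory limit YARN enforces = container physical memory * vmem-pmem ratio.
# The values below come from the container-kill message and our yarn-site.xml.
pmem_gb = 1.0            # "1 GB physical memory" per container
default_ratio = 2.1      # YARN's default yarn.nodemanager.vmem-pmem-ratio
configured_ratio = 4.0   # the ratio set in our yarn-site.xml
vmem_used_gb = 2.5       # "2.5 GB of 2.1 GB virtual memory used"

print(pmem_gb * default_ratio)     # 2.1 -- exactly the limit in the log
print(pmem_gb * configured_ratio)  # 4.0 -- would not have killed the container
print(vmem_used_gb > pmem_gb * default_ratio)     # True: killed under the default
print(vmem_used_gb > pmem_gb * configured_ratio)  # False: would survive with ratio 4
```

Since we also set yarn.nodemanager.vmem-check-enabled to false, no container should be killed for vmem at all, which further suggests this yarn-site.xml is not the one the NodeManagers loaded.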

------------------------------------------EDIT-----------------------------

I re-installed everything from scratch. Running teragen on the namenode now fails with the following error:

[hadoop1@localhost hadoop1]$ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar teragen 1000 /user/hd/ti
16/02/29 09:50:31 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/02/29 09:50:31 INFO client.RMProxy: Connecting to ResourceManager at hadoop-master/172.17.25.4:8032
16/02/29 09:50:32 INFO terasort.TeraSort: Generating 1000 using 2
16/02/29 09:50:32 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.IOException: Bad connect ack with firstBadLink as 172.17.25.7:50010
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1460)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1361)
	at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
16/02/29 09:50:32 INFO hdfs.DFSClient: Abandoning BP-1347490420-127.0.0.1-1456718019488:blk_1073741826_1002
16/02/29 09:50:32 INFO hdfs.DFSClient: Excluding datanode 172.17.25.7:50010
16/02/29 09:50:33 INFO mapreduce.JobSubmitter: number of splits:2
16/02/29 09:50:33 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1456719068231_0001
16/02/29 09:50:34 INFO impl.YarnClientImpl: Submitted application application_1456719068231_0001
16/02/29 09:50:34 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1456719068231_0001/
16/02/29 09:50:34 INFO mapreduce.Job: Running job: job_1456719068231_0001
16/02/29 09:50:40 INFO mapreduce.Job: Job job_1456719068231_0001 running in uber mode : false
16/02/29 09:50:40 INFO mapreduce.Job:  map 0% reduce 0%
16/02/29 09:50:45 INFO mapreduce.Job:  map 50% reduce 0%
16/02/29 09:50:47 INFO mapreduce.Job: Task Id : attempt_1456719068231_0001_m_000000_0, Status : FAILED
Container [pid=7991,containerID=container_1456719068231_0001_01_000002] is running beyond virtual memory limits. Current usage: 141.6 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1456719068231_0001_01_000002 :
	|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
	|- 7991 7989 7991 7991 (bash) 0 0 17043456 708 /bin/bash -c /opt/jdk1.8.0_65/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -Xmx768m -Djava.io.tmpdir=/tmp/hadoop-hadoop1/nm-local-dir/usercache/hadoop1/appcache/application_1456719068231_0001/container_1456719068231_0001_01_000002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/hadoop1/hadoop1/logs/userlogs/application_1456719068231_0001/container_1456719068231_0001_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 127.0.0.1 44148 attempt_1456719068231_0001_m_000000_0 2 1>/home/hadoop1/hadoop1/logs/userlogs/application_1456719068231_0001/container_1456719068231_0001_01_000002/stdout 2>/home/hadoop1/hadoop1/logs/userlogs/application_1456719068231_0001/container_1456719068231_0001_01_000002/stderr  
	|- 7996 7991 7991 7991 (java) 258 9 2545680384 35546 /opt/jdk1.8.0_65/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx768m -Djava.io.tmpdir=/tmp/hadoop-hadoop1/nm-local-dir/usercache/hadoop1/appcache/application_1456719068231_0001/container_1456719068231_0001_01_000002/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/hadoop1/hadoop1/logs/userlogs/application_1456719068231_0001/container_1456719068231_0001_01_000002 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 127.0.0.1 44148 attempt_1456719068231_0001_m_000000_0 2 

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

16/02/29 09:50:55 INFO mapreduce.Job: Task Id : attempt_1456719068231_0001_m_000000_1, Status : FAILED
Container [pid=8111,containerID=container_1456719068231_0001_01_000004] is running beyond virtual memory limits. Current usage: 136.8 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1456719068231_0001_01_000004 :
	|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
	|- 8111 8109 8111 8111 (bash) 0 0 17043456 682 /bin/bash -c /opt/jdk1.8.0_65/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -Xmx768m -Djava.io.tmpdir=/tmp/hadoop-hadoop1/nm-local-dir/usercache/hadoop1/appcache/application_1456719068231_0001/container_1456719068231_0001_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/hadoop1/hadoop1/logs/userlogs/application_1456719068231_0001/container_1456719068231_0001_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 127.0.0.1 44148 attempt_1456719068231_0001_m_000000_1 4 1>/home/hadoop1/hadoop1/logs/userlogs/application_1456719068231_0001/container_1456719068231_0001_01_000004/stdout 2>/home/hadoop1/hadoop1/logs/userlogs/application_1456719068231_0001/container_1456719068231_0001_01_000004/stderr  
	|- 8115 8111 8111 8111 (java) 248 10 2544230400 34349 /opt/jdk1.8.0_65/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx768m -Djava.io.tmpdir=/tmp/hadoop-hadoop1/nm-local-dir/usercache/hadoop1/appcache/application_1456719068231_0001/container_1456719068231_0001_01_000004/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/hadoop1/hadoop1/logs/userlogs/application_1456719068231_0001/container_1456719068231_0001_01_000004 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 127.0.0.1 44148 attempt_1456719068231_0001_m_000000_1 4 

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

16/02/29 09:51:01 INFO mapreduce.Job: Task Id : attempt_1456719068231_0001_m_000000_2, Status : FAILED
Container [pid=8188,containerID=container_1456719068231_0001_01_000005] is running beyond virtual memory limits. Current usage: 141.1 MB of 1 GB physical memory used; 2.4 GB of 2.1 GB virtual memory used. Killing container.
Dump of the process-tree for container_1456719068231_0001_01_000005 :
	|- PID PPID PGRPID SESSID CMD_NAME USER_MODE_TIME(MILLIS) SYSTEM_TIME(MILLIS) VMEM_USAGE(BYTES) RSSMEM_USAGE(PAGES) FULL_CMD_LINE
	|- 8211 8188 8188 8188 (java) 267 11 2545262592 35427 /opt/jdk1.8.0_65/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN -Xmx768m -Djava.io.tmpdir=/tmp/hadoop-hadoop1/nm-local-dir/usercache/hadoop1/appcache/application_1456719068231_0001/container_1456719068231_0001_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/hadoop1/hadoop1/logs/userlogs/application_1456719068231_0001/container_1456719068231_0001_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 127.0.0.1 44148 attempt_1456719068231_0001_m_000000_2 5 
	|- 8188 8186 8188 8188 (bash) 0 0 17043456 700 /bin/bash -c /opt/jdk1.8.0_65/bin/java -Djava.net.preferIPv4Stack=true -Dhadoop.metrics.log.level=WARN  -Xmx768m -Djava.io.tmpdir=/tmp/hadoop-hadoop1/nm-local-dir/usercache/hadoop1/appcache/application_1456719068231_0001/container_1456719068231_0001_01_000005/tmp -Dlog4j.configuration=container-log4j.properties -Dyarn.app.container.log.dir=/home/hadoop1/hadoop1/logs/userlogs/application_1456719068231_0001/container_1456719068231_0001_01_000005 -Dyarn.app.container.log.filesize=0 -Dhadoop.root.logger=INFO,CLA org.apache.hadoop.mapred.YarnChild 127.0.0.1 44148 attempt_1456719068231_0001_m_000000_2 5 1>/home/hadoop1/hadoop1/logs/userlogs/application_1456719068231_0001/container_1456719068231_0001_01_000005/stdout 2>/home/hadoop1/hadoop1/logs/userlogs/application_1456719068231_0001/container_1456719068231_0001_01_000005/stderr  

Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

Now one of the datanodes logs the following:

2016-02-28 17:49:22,036 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = hadoop-slave-1/172.17.25.18
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   classpath = /home/hadoop1/hadoop1/etc/hadoop:/home/hadoop1/hadoop1/share/hadoop/common/lib/hadoop-auth-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/commons-logging-1.1.3.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/commons-codec-1.4.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/jsch-0.1.42.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/curator-framework-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/commons-digester-1.8.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/xz-1.0.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/commons-net-3.1.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/commons-cli-1.2.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/servlet-api-2.5.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/commons-lang-2.6.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/jersey-server-1.9.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/log4j-1.2.17.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/asm-3.2.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/stax-api-1.0-2.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/jersey-core-1.9.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/httpcore-4.2.5.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/hadoop-annotations-2.6.0.jar:/home/hadoop1/ha
doop1/share/hadoop/common/lib/zookeeper-3.4.6.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/commons-io-2.4.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/paranamer-2.3.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/guava-11.0.2.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/activation-1.1.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/jets3t-0.9.0.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/commons-el-1.0.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/curator-recipes-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/commons-math3-3.1.1.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/jetty-6.1.26.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/jettison-1.1.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/commons-collections-3.2.1.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/junit-4.11.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/htrace-core-3.0.4.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/jsp-api-2.1.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/httpclient-4.2.5.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/jersey-json-1.9.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/curator-client-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/api
-asn1-api-1.0.0-M20.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/xmlenc-0.52.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/hamcrest-core-1.3.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/avro-1.7.4.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/gson-2.2.4.jar:/home/hadoop1/hadoop1/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/hadoop1/hadoop1/share/hadoop/common/hadoop-common-2.6.0-tests.jar:/home/hadoop1/hadoop1/share/hadoop/common/hadoop-common-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/common/hadoop-nfs-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/asm-3.2.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/commons-io-2.4.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/hadoop1/hadoop1
/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/htrace-core-3.0.4.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/hadoop-hdfs-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/hadoop-hdfs-2.6.0-tests.jar:/home/hadoop1/hadoop1/share/hadoop/hdfs/hadoop-hdfs-nfs-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/commons-codec-1.4.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/xz-1.0.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/commons-cli-1.2.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/servlet-api-2.5.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/commons-lang-2.6.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/javax.inject-1.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/asm-3.2.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/guice-3.0.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/hadoop1/hadoop1/shar
e/hadoop/yarn/lib/zookeeper-3.4.6.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/commons-io-2.4.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/guava-11.0.2.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/activation-1.1.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/commons-httpclient-3.1.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/jetty-6.1.26.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/jettison-1.1.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/commons-collections-3.2.1.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/jline-0.9.94.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/jsr305-1.3.9.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/jersey-json-1.9.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/lib/jersey-client-1.9.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/hadoop-yarn-common-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/hadoop-yarn-registry-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/hadoop-yarn-api-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/hadoop-yarn-client-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/hadoop-yarn-server-tests-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/hadoop-yarn-server-common-2.6.0.jar:/home/hadoop1/ha
doop1/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/hadoop-annotations-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/junit-4.11.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/hadoop-mapreduce-clie
nt-app-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.6.0.jar:/home/hadoop1/hadoop1/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.6.0-tests.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar:/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.8.0_65
************************************************************/
2016-02-28 17:49:22,044 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: registered UNIX signal handlers for [TERM, HUP, INT]
2016-02-28 17:49:22,491 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2016-02-28 17:49:22,546 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2016-02-28 17:49:22,546 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: DataNode metrics system started
2016-02-28 17:49:22,550 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Configured hostname is hadoop-slave-1
2016-02-28 17:49:22,556 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting DataNode with maxLockedMemory = 0
2016-02-28 17:49:22,573 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened streaming server at /0.0.0.0:50010
2016-02-28 17:49:22,575 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is 1048576 bytes/s
2016-02-28 17:49:22,575 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Number threads for balancing is 5
2016-02-28 17:49:22,626 INFO org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2016-02-28 17:49:22,629 INFO org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.datanode is not defined
2016-02-28 17:49:22,637 INFO org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2016-02-28 17:49:22,638 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context datanode
2016-02-28 17:49:22,638 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2016-02-28 17:49:22,638 INFO org.apache.hadoop.http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2016-02-28 17:49:22,649 INFO org.apache.hadoop.http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.datanode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2016-02-28 17:49:22,651 INFO org.apache.hadoop.http.HttpServer2: Jetty bound to port 50075
2016-02-28 17:49:22,651 INFO org.mortbay.log: jetty-6.1.26
2016-02-28 17:49:22,887 INFO org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50075
2016-02-28 17:49:23,012 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: dnUserName = hadoop1
2016-02-28 17:49:23,012 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: supergroup = supergroup
2016-02-28 17:49:23,038 INFO org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
2016-02-28 17:49:23,049 INFO org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 50020
2016-02-28 17:49:23,069 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Opened IPC server at /0.0.0.0:50020
2016-02-28 17:49:23,077 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Refresh request received for nameservices: null
2016-02-28 17:49:23,092 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting BPOfferServices for nameservices: <default>
2016-02-28 17:49:23,098 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool <registering> (Datanode Uuid unassigned) service to hadoop-master/172.17.25.4:9000 starting to offer service
2016-02-28 17:49:23,101 INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2016-02-28 17:49:23,101 INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
2016-02-28 17:49:23,221 INFO org.apache.hadoop.hdfs.server.common.Storage: DataNode version: -56 and NameNode layout version: -60
2016-02-28 17:49:23,252 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hadoop1/hadoopdata/hdfs/datanode/in_use.lock acquired by nodename 7503@hadoop-slave-1
2016-02-28 17:49:23,253 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /home/hadoop1/hadoopdata/hdfs/datanode is not formatted for BP-1347490420-127.0.0.1-1456718019488
2016-02-28 17:49:23,253 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2016-02-28 17:49:23,312 INFO org.apache.hadoop.hdfs.server.common.Storage: Analyzing storage directories for bpid BP-1347490420-127.0.0.1-1456718019488
2016-02-28 17:49:23,313 INFO org.apache.hadoop.hdfs.server.common.Storage: Locking is disabled
2016-02-28 17:49:23,313 INFO org.apache.hadoop.hdfs.server.common.Storage: Storage directory /home/hadoop1/hadoopdata/hdfs/datanode/current/BP-1347490420-127.0.0.1-1456718019488 is not formatted.
2016-02-28 17:49:23,313 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting ...
2016-02-28 17:49:23,313 INFO org.apache.hadoop.hdfs.server.common.Storage: Formatting block pool BP-1347490420-127.0.0.1-1456718019488 directory /home/hadoop1/hadoopdata/hdfs/datanode/current/BP-1347490420-127.0.0.1-1456718019488/current
2016-02-28 17:49:23,345 INFO org.apache.hadoop.hdfs.server.common.Storage: Restored 0 block files from trash.
2016-02-28 17:49:23,352 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Setting up storage: nsid=556380454;bpid=BP-1347490420-127.0.0.1-1456718019488;lv=-56;nsInfo=lv=-60;cid=CID-a74224c2-5c44-4cf0-b4eb-8bfb04f755cb;nsid=556380454;c=0;bpid=BP-1347490420-127.0.0.1-1456718019488;dnuuid=null
2016-02-28 17:49:23,387 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Generated and persisted new Datanode UUID 3ef9f292-ccaf-40d5-9785-f2cbaf59d199
2016-02-28 17:49:23,410 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added new volume: /home/hadoop1/hadoopdata/hdfs/datanode/current
2016-02-28 17:49:23,411 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Added volume - /home/hadoop1/hadoopdata/hdfs/datanode/current, StorageType: DISK
2016-02-28 17:49:23,415 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Registered FSDatasetState MBean
2016-02-28 17:49:23,417 INFO org.apache.hadoop.hdfs.server.datanode.DirectoryScanner: Periodic Directory Tree Verification scan starting at 1456677049417 with interval 21600000
2016-02-28 17:49:23,417 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding block pool BP-1347490420-127.0.0.1-1456718019488
2016-02-28 17:49:23,418 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Scanning block pool BP-1347490420-127.0.0.1-1456718019488 on volume /home/hadoop1/hadoopdata/hdfs/datanode/current...
2016-02-28 17:49:23,424 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time taken to scan block pool BP-1347490420-127.0.0.1-1456718019488 on /home/hadoop1/hadoopdata/hdfs/datanode/current: 7ms
2016-02-28 17:49:23,424 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to scan all replicas for block pool BP-1347490420-127.0.0.1-1456718019488: 7ms
2016-02-28 17:49:23,425 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Adding replicas to map for block pool BP-1347490420-127.0.0.1-1456718019488 on volume /home/hadoop1/hadoopdata/hdfs/datanode/current...
2016-02-28 17:49:23,425 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Time to add replicas to map for block pool BP-1347490420-127.0.0.1-1456718019488 on volume /home/hadoop1/hadoopdata/hdfs/datanode/current: 0ms
2016-02-28 17:49:23,425 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl: Total time to add all replicas to map: 1ms
2016-02-28 17:49:23,427 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool BP-1347490420-127.0.0.1-1456718019488 (Datanode Uuid null) service to hadoop-master/172.17.25.4:9000 beginning handshake with NN
2016-02-28 17:49:23,443 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Block pool Block pool BP-1347490420-127.0.0.1-1456718019488 (Datanode Uuid null) service to hadoop-master/172.17.25.4:9000 successfully registered with NN
2016-02-28 17:49:23,443 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: For namenode hadoop-master/172.17.25.4:9000 using DELETEREPORT_INTERVAL of 300000 msec  BLOCKREPORT_INTERVAL of 21600000msec CACHEREPORT_INTERVAL of 10000msec Initial delay: 0msec; heartBeatInterval=3000
2016-02-28 17:49:23,526 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Namenode Block pool BP-1347490420-127.0.0.1-1456718019488 (Datanode Uuid 3ef9f292-ccaf-40d5-9785-f2cbaf59d199) service to hadoop-master/172.17.25.4:9000 trying to claim ACTIVE state with txid=10
2016-02-28 17:49:23,526 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Acknowledging ACTIVE Namenode Block pool BP-1347490420-127.0.0.1-1456718019488 (Datanode Uuid 3ef9f292-ccaf-40d5-9785-f2cbaf59d199) service to hadoop-master/172.17.25.4:9000
2016-02-28 17:49:23,556 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Sent 1 blockreports 0 blocks total. Took 1 msec to generate and 29 msecs for RPC and NN processing.  Got back commands org.apache.hadoop.hdfs.server.protocol.FinalizeCommand@6eca2084
2016-02-28 17:49:23,556 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Got finalize command for block pool BP-1347490420-127.0.0.1-1456718019488
2016-02-28 17:49:23,560 INFO org.apache.hadoop.util.GSet: Computing capacity for map BlockMap
2016-02-28 17:49:23,560 INFO org.apache.hadoop.util.GSet: VM type       = 64-bit
2016-02-28 17:49:23,560 INFO org.apache.hadoop.util.GSet: 0.5% max memory 889 MB = 4.4 MB
2016-02-28 17:49:23,560 INFO org.apache.hadoop.util.GSet: capacity      = 2^19 = 524288 entries
2016-02-28 17:49:23,561 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Periodic Block Verification Scanner initialized with interval 504 hours for block pool BP-1347490420-127.0.0.1-1456718019488
2016-02-28 17:49:23,564 INFO org.apache.hadoop.hdfs.server.datanode.DataBlockScanner: Added bpid=BP-1347490420-127.0.0.1-1456718019488 to blockPoolScannerMap, new size=1
2016-02-28 17:58:59,144 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1347490420-127.0.0.1-1456718019488:blk_1073741827_1003 src: /172.17.25.28:44644 dest: /172.17.25.18:50010
2016-02-28 17:58:59,263 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.17.25.28:44644, dest: /172.17.25.18:50010, bytes: 171, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-456092612_1, offset: 0, srvID: 3ef9f292-ccaf-40d5-9785-f2cbaf59d199, blockid: BP-1347490420-127.0.0.1-1456718019488:blk_1073741827_1003, duration: 48720148
2016-02-28 17:58:59,263 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1347490420-127.0.0.1-1456718019488:blk_1073741827_1003, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-02-28 17:59:02,490 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.17.25.18, datanodeUuid=3ef9f292-ccaf-40d5-9785-f2cbaf59d199, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-a74224c2-5c44-4cf0-b4eb-8bfb04f755cb;nsid=556380454;c=0) Starting thread to transfer BP-1347490420-127.0.0.1-1456718019488:blk_1073741827_1003 to 172.17.25.7:50010 
2016-02-28 17:59:02,492 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.17.25.18, datanodeUuid=3ef9f292-ccaf-40d5-9785-f2cbaf59d199, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-a74224c2-5c44-4cf0-b4eb-8bfb04f755cb;nsid=556380454;c=0):Failed to transfer BP-1347490420-127.0.0.1-1456718019488:blk_1073741827_1003 to 172.17.25.7:50010 got 
java.net.NoRouteToHostException: No route to host
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1981)
    at java.lang.Thread.run(Thread.java:745)
2016-02-28 17:59:02,494 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Starting CheckDiskError Thread
2016-02-28 17:59:08,451 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1347490420-127.0.0.1-1456718019488:blk_1073741827_1003
2016-02-28 18:02:20,916 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1347490420-127.0.0.1-1456718019488:blk_1073741834_1010 src: /172.17.25.4:59461 dest: /172.17.25.18:50010
2016-02-28 18:02:20,923 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Datanode 3 got response for connect ack  from downstream datanode with firstbadlink as 172.17.25.7:50010
2016-02-28 18:02:20,923 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Datanode 3 forwarding connect ack to upstream firstbadlink is 172.17.25.7:50010
2016-02-28 18:02:20,924 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1347490420-127.0.0.1-1456718019488:blk_1073741834_1010, type=HAS_DOWNSTREAM_IN_PIPELINE
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2203)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:176)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1164)
    at java.lang.Thread.run(Thread.java:745)
2016-02-28 18:02:20,937 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in BlockReceiver.run(): 
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
    at sun.nio.ch.IOUtil.write(IOUtil.java:65)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at java.io.DataOutputStream.flush(DataOutputStream.java:123)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1388)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1327)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1248)
    at java.lang.Thread.run(Thread.java:745)
2016-02-28 18:02:20,937 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exception for BP-1347490420-127.0.0.1-1456718019488:blk_1073741834_1010
java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:467)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:781)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:730)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
    at java.lang.Thread.run(Thread.java:745)
2016-02-28 18:02:20,938 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1347490420-127.0.0.1-1456718019488:blk_1073741834_1010, type=HAS_DOWNSTREAM_IN_PIPELINE
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
    at sun.nio.ch.IOUtil.write(IOUtil.java:65)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at java.io.DataOutputStream.flush(DataOutputStream.java:123)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1388)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1327)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1248)
    at java.lang.Thread.run(Thread.java:745)
2016-02-28 18:02:20,939 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1347490420-127.0.0.1-1456718019488:blk_1073741834_1010, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-02-28 18:02:20,939 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: opWriteBlock BP-1347490420-127.0.0.1-1456718019488:blk_1073741834_1010 received exception java.io.IOException: Premature EOF from inputStream
2016-02-28 18:02:20,939 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: hadoop-slave-1:50010:DataXceiver error processing WRITE_BLOCK operation  src: /172.17.25.4:59461 dst: /172.17.25.18:50010
java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:194)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:467)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:781)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:730)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:235)
    at java.lang.Thread.run(Thread.java:745)
2016-02-28 18:02:20,975 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1347490420-127.0.0.1-1456718019488:blk_1073741835_1011 src: /172.17.25.28:44647 dest: /172.17.25.18:50010
2016-02-28 18:02:20,993 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.17.25.28:44647, dest: /172.17.25.18:50010, bytes: 171, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_-966614823_1, offset: 0, srvID: 3ef9f292-ccaf-40d5-9785-f2cbaf59d199, blockid: BP-1347490420-127.0.0.1-1456718019488:blk_1073741835_1011, duration: 16169824
2016-02-28 18:02:20,993 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1347490420-127.0.0.1-1456718019488:blk_1073741835_1011, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-02-28 18:02:23,493 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Scheduling blk_1073741834_1010 file /home/hadoop1/hadoopdata/hdfs/datanode/current/BP-1347490420-127.0.0.1-1456718019488/current/rbw/blk_1073741834 for deletion
2016-02-28 18:02:23,495 INFO org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetAsyncDiskService: Deleted BP-1347490420-127.0.0.1-1456718019488 blk_1073741834_1010 file /home/hadoop1/hadoopdata/hdfs/datanode/current/BP-1347490420-127.0.0.1-1456718019488/current/rbw/blk_1073741834
2016-02-28 18:02:25,255 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1347490420-127.0.0.1-1456718019488:blk_1073741838_1014 src: /172.17.25.18:33872 dest: /172.17.25.18:50010
2016-02-28 18:02:25,274 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.17.25.18:33872, dest: /172.17.25.18:50010, bytes: 103877, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_451877552_1, offset: 0, srvID: 3ef9f292-ccaf-40d5-9785-f2cbaf59d199, blockid: BP-1347490420-127.0.0.1-1456718019488:blk_1073741838_1014, duration: 18382179
2016-02-28 18:02:25,274 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1347490420-127.0.0.1-1456718019488:blk_1073741838_1014, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-02-28 18:02:26,727 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1347490420-127.0.0.1-1456718019488:blk_1073741839_1015 src: /172.17.25.18:33873 dest: /172.17.25.18:50010
2016-02-28 18:02:26,731 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.17.25.18:33873, dest: /172.17.25.18:50010, bytes: 9441, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_451877552_1, offset: 0, srvID: 3ef9f292-ccaf-40d5-9785-f2cbaf59d199, blockid: BP-1347490420-127.0.0.1-1456718019488:blk_1073741839_1015, duration: 1203072
2016-02-28 18:02:26,731 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1347490420-127.0.0.1-1456718019488:blk_1073741839_1015, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-02-28 18:02:28,477 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1347490420-127.0.0.1-1456718019488:blk_1073741835_1011
2016-02-28 18:02:30,768 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1347490420-127.0.0.1-1456718019488:blk_1073741840_1016 src: /172.17.25.18:33878 dest: /172.17.25.18:50010
2016-02-28 18:02:30,788 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.17.25.18:33878, dest: /172.17.25.18:50010, bytes: 103877, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_497723051_1, offset: 0, srvID: 3ef9f292-ccaf-40d5-9785-f2cbaf59d199, blockid: BP-1347490420-127.0.0.1-1456718019488:blk_1073741840_1016, duration: 18595648
2016-02-28 18:02:30,788 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1347490420-127.0.0.1-1456718019488:blk_1073741840_1016, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-02-28 18:02:32,704 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Receiving BP-1347490420-127.0.0.1-1456718019488:blk_1073741841_1017 src: /172.17.25.18:33879 dest: /172.17.25.18:50010
2016-02-28 18:02:32,706 INFO org.apache.hadoop.hdfs.server.datanode.DataNode.clienttrace: src: /172.17.25.18:33879, dest: /172.17.25.18:50010, bytes: 9756, op: HDFS_WRITE, cliID: DFSClient_NONMAPREDUCE_497723051_1, offset: 0, srvID: 3ef9f292-ccaf-40d5-9785-f2cbaf59d199, blockid: BP-1347490420-127.0.0.1-1456718019488:blk_1073741841_1017, duration: 1179392
2016-02-28 18:02:32,706 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: PacketResponder: BP-1347490420-127.0.0.1-1456718019488:blk_1073741841_1017, type=LAST_IN_PIPELINE, downstreams=0:[] terminating
2016-02-28 18:02:33,489 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1347490420-127.0.0.1-1456718019488:blk_1073741838_1014
2016-02-28 18:02:33,490 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1347490420-127.0.0.1-1456718019488:blk_1073741839_1015
2016-02-28 18:02:35,496 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.17.25.18, datanodeUuid=3ef9f292-ccaf-40d5-9785-f2cbaf59d199, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-a74224c2-5c44-4cf0-b4eb-8bfb04f755cb;nsid=556380454;c=0) Starting thread to transfer BP-1347490420-127.0.0.1-1456718019488:blk_1073741835_1011 to 172.17.25.7:50010 
2016-02-28 18:02:35,497 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.17.25.18, datanodeUuid=3ef9f292-ccaf-40d5-9785-f2cbaf59d199, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-a74224c2-5c44-4cf0-b4eb-8bfb04f755cb;nsid=556380454;c=0):Failed to transfer BP-1347490420-127.0.0.1-1456718019488:blk_1073741835_1011 to 172.17.25.7:50010 got 
java.net.NoRouteToHostException: No route to host
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1981)
    at java.lang.Thread.run(Thread.java:745)
2016-02-28 18:02:38,492 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1347490420-127.0.0.1-1456718019488:blk_1073741841_1017
2016-02-28 18:02:38,494 INFO org.apache.hadoop.hdfs.server.datanode.BlockPoolSliceScanner: Verification succeeded for BP-1347490420-127.0.0.1-1456718019488:blk_1073741840_1016
2016-02-28 18:02:41,491 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.17.25.18, datanodeUuid=3ef9f292-ccaf-40d5-9785-f2cbaf59d199, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-a74224c2-5c44-4cf0-b4eb-8bfb04f755cb;nsid=556380454;c=0) Starting thread to transfer BP-1347490420-127.0.0.1-1456718019488:blk_1073741835_1011 to 172.17.25.7:50010 
2016-02-28 18:02:41,492 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.17.25.18, datanodeUuid=3ef9f292-ccaf-40d5-9785-f2cbaf59d199, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-a74224c2-5c44-4cf0-b4eb-8bfb04f755cb;nsid=556380454;c=0):Failed to transfer BP-1347490420-127.0.0.1-1456718019488:blk_1073741835_1011 to 172.17.25.7:50010 got 
java.net.NoRouteToHostException: No route to host
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1981)
    at java.lang.Thread.run(Thread.java:745)
2016-02-28 18:02:53,492 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.17.25.18, datanodeUuid=3ef9f292-ccaf-40d5-9785-f2cbaf59d199, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-a74224c2-5c44-4cf0-b4eb-8bfb04f755cb;nsid=556380454;c=0) Starting thread to transfer BP-1347490420-127.0.0.1-1456718019488:blk_1073741835_1011 to 172.17.25.7:50010 
2016-02-28 18:02:53,493 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(172.17.25.18, datanodeUuid=3ef9f292-ccaf-40d5-9785-f2cbaf59d199, infoPort=50075, ipcPort=50020, storageInfo=lv=-56;cid=CID-a74224c2-5c44-4cf0-b4eb-8bfb04f755cb;nsid=556380454;c=0):Failed to transfer BP-1347490420-127.0.0.1-1456718019488:blk_1073741835_1011 to 172.17.25.7:50010 got 
java.net.NoRouteToHostException: No route to host
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:717)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1981)
    at java.lang.Thread.run(Thread.java:745)
2016-02-28 18:03:29,492 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: IOException in offerService
java.io.EOFException: End of File Exception between local host is: "hadoop-slave-1/172.17.25.18"; destination host is: "hadoop-master":9000; : java.io.EOFException; For more details see:  http://wiki.apache.org/hadoop/EOFException
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:422)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)
    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy12.sendHeartbeat(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.DatanodeProtocolClientSideTranslatorPB.sendHeartbeat(DatanodeProtocolClientSideTranslatorPB.java:139)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.sendHeartBeat(BPServiceActor.java:582)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.offerService(BPServiceActor.java:680)
    at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:850)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:392)
    at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1071)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:966)
2016-02-28 18:03:32,636 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: RECEIVED SIGNAL 15: SIGTERM
2016-02-28 18:03:32,638 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at hadoop-slave-1/172.17.25.18
************************************************************/
5 REPLIES

Re: MrBench not running on our hadoop multi node cluster

Mentor
@Kumar Sanyam

I would check your DataNode, NameNode, and MapReduce logs; you might be having network issues. Please also run another job and let us know whether the problem is specific to this job or whether other jobs fail as well.


Re: MrBench not running on our hadoop multi node cluster

Explorer

I've posted the log files for the DataNode above.


Re: MrBench not running on our hadoop multi node cluster

Mentor

Is your firewall on and blocking some nodes? If SELinux is enabled, please disable it. Check your DataNode heap size; you might need to increase it. Also look at the following and set the property it describes: https://www.quora.com/How-should-one-solve-Bad-connect-ack-with-firstBadLink-DataNode-problem-in-Had...

Run netstat and check for connections stuck in CLOSE_WAIT. That said, I see you are running Hadoop 2.6; is that an Apache release? I highly recommend upgrading to 2.6.3, which was released recently, as there are many fixes since 2.6.0. Also, please consider using Ambari and HDP: Ambari has smart configs that surface issues like yours so they can be addressed more quickly, plus alerting, the latest Hadoop, and industry best practices baked into the default settings. Another feature to consider is SmartSense, which proactively alerts you to misconfigurations.
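To rule out the firewall/SELinux/routing causes above, a quick sketch of checks to run from the master toward each worker (the slave hostnames here are examples from this thread; substitute your own, and note 50010 is only the default DataNode streaming port):

```shell
# Test whether a DataNode's streaming port is reachable using bash's
# built-in /dev/tcp redirection (no nc/telnet required).
check_port() {
  local host=$1 port=$2
  if timeout 5 bash -c "cat < /dev/null > /dev/tcp/$host/$port" 2>/dev/null; then
    echo "$host:$port reachable"
  else
    echo "$host:$port NOT reachable (firewall rule or no route?)"
  fi
}

check_port hadoop-slave-1 50010
check_port hadoop-slave-2 50010

# On each worker node: SELinux mode and firewall rules.
getenforce 2>/dev/null || echo "SELinux tools not installed"
sudo -n iptables -L -n 2>/dev/null | head -20
```

A "NOT reachable" result for a node that is up usually matches the "No route to host" errors seen in the DataNode log above.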


Re: MrBench not running on our hadoop multi node cluster

Explorer

While I was running TeraSort, the job got stuck at map 100% and reduce 0%; it doesn't progress past that point. Only WordCount runs properly.


Re: MrBench not running on our hadoop multi node cluster

@Kumar Sanyam

See this. I have experienced this in the past, and these notes helped me resolve the issue:

PROBLEM Various operations on HDFS files fail, such as simple "hdfs dfs -put" operations, with error messages such as

INFO hdfs.DFSClient: Exception in createBlockOutputStream java.io.IOException: Bad connect ack with firstBadLink as 172.28.43.14:50010

On medium to large clusters, the failures may implicate different, seemingly random worker nodes each time.

ROOT CAUSE There can be many causes for this type of failure, including:

* restrictive firewall rules in the network or on the worker nodes
* network bandwidth issues
* newly instituted security policies
* insufficient file descriptors on the worker nodes (Hortonworks recommends 10,000 or more for service accounts such as user "hdfs")

It is important to rule out non-hadoop related causes such as the above. Once that is done, further research and cluster tuning may be required.
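For the file-descriptor point in particular, a quick way to check what is actually in effect on a worker node (the 32768 figure below is illustrative, chosen to comfortably exceed the 10,000 recommendation):

```shell
# Soft limit on open files for the current shell/user;
# run this as the "hdfs" service account on each worker node.
ulimit -n

# For a running DataNode, the limits the process actually got can be
# read from /proc (replace <pid> with the DataNode PID, e.g. from jps):
#   grep 'open files' /proc/<pid>/limits

# To raise the limit persistently, add entries like these to
# /etc/security/limits.conf (illustrative values), then restart
# the DataNode so it picks them up:
#   hdfs  soft  nofile  32768
#   hdfs  hard  nofile  32768
```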

The issue may be caused by datanode configuration issues. Specifically, if the datanode runs out of transfer threads, the datanodes may be unable to process requests, either intermittently or for extended periods. If transfer threads are an issue, one will find error messages similar to the following in the datanode log files:

Xceiver count 1025 exceeds the limit of concurrent xcievers: 1024

RESOLUTION

To resolve this issue, increase the value of the property below in hdfs-site.xml. If the cluster is managed via Ambari, change the property under Ambari > Services > HDFS > Configs:

dfs.datanode.max.transfer.threads

A typical minimum value for this property is 4096. If that is insufficient, increase the value to 8192 or more. We do not recommend a value greater than 16k.
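For a hand-configured cluster, the setting goes in hdfs-site.xml on each DataNode; a minimal fragment, using the typical starting value from above:

```xml
<!-- hdfs-site.xml: cap on concurrent transfer (xceiver) threads per DataNode -->
<property>
  <name>dfs.datanode.max.transfer.threads</name>
  <value>4096</value>
</property>
```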

After changing the value, restart the datanodes (a rolling restart can be performed to minimize operational impact) to ensure the new value takes effect. The property is read by the datanode daemon only at startup.
