2017-03-25 09:03:32,612 ERROR datanode.DataNode (DataXceiver.java:run(278)) - node06.domain.com:50010:DataXceiver error processing WRITE_BLOCK operation src: /ip.18:57485 dst: /ip.16:50010
java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:04:29,592 WARN util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 33389ms
GC pool 'ParNew' had collection(s): count=1 time=0ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=33441ms
2017-03-25 09:05:01,365 WARN util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 31273ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=31392ms
2017-03-25 09:05:49,079 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: 8ff7da80d8d00e712026cda920a30e3f, slotIdx: 85, srvID: 3c14f6e4-7181-4f83-b48f-a1eb4d494aaa, success: true
2017-03-25 09:05:49,079 WARN util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 47213ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=47366ms
2017-03-25 09:06:18,420 WARN util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 28840ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=28968ms
2017-03-25 09:07:05,530 INFO datanode.DataNode (BlockReceiver.java:packetSentInTime(378)) - A packet was last sent 76126 milliseconds ago.
2017-03-25 09:07:05,530 INFO datanode.DataNode (BlockReceiver.java:packetSentInTime(378)) - A packet was last sent 76365 milliseconds ago.
2017-03-25 09:07:05,530 WARN datanode.DataNode (BlockReceiver.java:run(1347)) - The downstream error might be due to congestion in upstream including this node. Propagating the error:
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2293)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1286)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:07:05,531 WARN datanode.DataNode (BlockReceiver.java:run(1391)) - IOException in BlockReceiver.run():
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2293)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1286)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:07:05,531 INFO datanode.DataNode (BlockReceiver.java:run(1394)) - PacketResponder: BP-1426797840-ip.11-1461158403571:blk_1381610917_307892855, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[ip.20:50010]
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2293)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1286)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:07:05,530 WARN datanode.DataNode (BlockReceiver.java:run(1347)) - The downstream error might be due to congestion in upstream including this node. Propagating the error:
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2293)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1286)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:07:05,532 WARN datanode.DataNode (BlockReceiver.java:run(1391)) - IOException in BlockReceiver.run():
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2293)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1286)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:07:05,532 INFO datanode.DataNode (BlockReceiver.java:run(1408)) - PacketResponder: BP-1426797840-ip.11-1461158403571:blk_1381610917_307892855, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[ip.20:50010] terminating
2017-03-25 09:07:05,532 INFO datanode.DataNode (BlockReceiver.java:handleMirrorOutError(433)) - DatanodeRegistration(ip.16:50010, datanodeUuid=3c14f6e4-7181-4f83-b48f-a1eb4d494aaa, infoPort=50075, infoSecurePort=0, ipcPort=8010, storageInfo=lv=-56;cid=CID-83308e38-3630-4069-b6fb-35540367ec96;nsid=804038874;c=0):Exception writing BP-1426797840-ip.11-1461158403571:blk_1381610917_307892855 to mirror ip.20:50010
java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
    at sun.nio.ch.IOUtil.write(IOUtil.java:65)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at java.io.DataOutputStream.flush(DataOutputStream.java:123)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:560)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:07:05,531 WARN util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 46610ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=46885ms
2017-03-25 09:07:05,532 INFO datanode.DataNode (BlockReceiver.java:receiveBlock(937)) - Exception for BP-1426797840-ip.11-1461158403571:blk_1381610917_307892855
java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
    at sun.nio.ch.IOUtil.write(IOUtil.java:65)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at java.io.DataOutputStream.flush(DataOutputStream.java:123)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:560)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:07:05,532 INFO datanode.DataNode (BlockReceiver.java:run(1394)) - PacketResponder: BP-1426797840-ip.11-1461158403571:blk_1381620674_307902615, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[ip.17:50010]
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2293)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1286)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:07:05,532 INFO datanode.DataNode (DataXceiver.java:writeBlock(839)) - opWriteBlock BP-1426797840-ip.11-1461158403571:blk_1381610917_307892855 received exception java.io.IOException: Broken pipe
2017-03-25 09:07:05,533 INFO datanode.DataNode (BlockReceiver.java:run(1408)) - PacketResponder: BP-1426797840-ip.11-1461158403571:blk_1381620674_307902615, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[ip.17:50010] terminating
2017-03-25 09:07:05,533 ERROR datanode.DataNode (DataXceiver.java:run(278)) - node06.domain.com:50010:DataXceiver error processing WRITE_BLOCK operation src: /ip.17:33778 dst: /ip.16:50010
java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
    at sun.nio.ch.IOUtil.write(IOUtil.java:65)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at java.io.DataOutputStream.flush(DataOutputStream.java:123)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:560)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:07:05,533 INFO datanode.DataNode (BlockReceiver.java:receiveBlock(937)) - Exception for BP-1426797840-ip.11-1461158403571:blk_1381620674_307902615
java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
    at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:269)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.adjustCrcChannelPosition(FsDatasetImpl.java:1548)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.adjustCrcFilePosition(BlockReceiver.java:1034)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:711)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:07:05,533 INFO datanode.DataNode (BlockReceiver.java:receiveBlock(937)) - Exception for BP-1426797840-ip.11-1461158403571:blk_1381620674_307902615
java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
    at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:269)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.adjustCrcChannelPosition(FsDatasetImpl.java:1548)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.adjustCrcFilePosition(BlockReceiver.java:1034)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:711)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:07:05,534 INFO datanode.DataNode (DataXceiver.java:writeBlock(839)) - opWriteBlock BP-1426797840-ip.11-1461158403571:blk_1381620674_307902615 received exception java.nio.channels.ClosedByInterruptException
2017-03-25 09:07:05,534 ERROR datanode.DataNode (DataXceiver.java:run(278)) - node06.domain.com:50010:DataXceiver error processing WRITE_BLOCK operation src: /ip.15:33299 dst: /ip.16:50010
java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
    at sun.nio.ch.FileChannelImpl.position(FileChannelImpl.java:269)
    at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.adjustCrcChannelPosition(FsDatasetImpl.java:1548)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.adjustCrcFilePosition(BlockReceiver.java:1034)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:711)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:07:34,286 WARN util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 28254ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=28552ms
2017-03-25 09:08:21,750 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: 1d2e6c9273e4f8515549c4baec7851d8, slotIdx: 0, srvID: 3c14f6e4-7181-4f83-b48f-a1eb4d494aaa, success: true
2017-03-25 09:08:21,752 WARN util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 46964ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=47254ms
2017-03-25 09:08:21,754 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: 1d2e6c9273e4f8515549c4baec7851d8, slotIdx: 1, srvID: 3c14f6e4-7181-4f83-b48f-a1eb4d494aaa, success: true
2017-03-25 09:08:21,754 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: 1d2e6c9273e4f8515549c4baec7851d8, slotIdx: 2, srvID: 3c14f6e4-7181-4f83-b48f-a1eb4d494aaa, success: true
2017-03-25 09:08:21,754 INFO DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: 1d2e6c9273e4f8515549c4baec7851d8, slotIdx: 3, srvID: 3c14f6e4-7181-4f83-b48f-a1eb4d494aaa, success: true
2017-03-25 09:08:51,450 WARN util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 29197ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=29490ms
2017-03-25 09:09:35,167 WARN util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 43215ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=43480ms
2017-03-25 09:09:35,167 INFO datanode.DataNode (BlockReceiver.java:receiveBlock(937)) - Exception for BP-1426797840-ip.11-1461158403571:blk_1381622868_307904810
java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:09:35,168 INFO datanode.DataNode (BlockReceiver.java:run(1372)) - PacketResponder: BP-1426797840-ip.11-1461158403571:blk_1381622868_307904810, type=LAST_IN_PIPELINE: Thread is interrupted.
2017-03-25 09:09:35,168 INFO datanode.DataNode (BlockReceiver.java:run(1408)) - PacketResponder: BP-1426797840-ip.11-1461158403571:blk_1381622868_307904810, type=LAST_IN_PIPELINE terminating
2017-03-25 09:09:35,168 INFO datanode.DataNode (DataXceiver.java:writeBlock(839)) - opWriteBlock BP-1426797840-ip.11-1461158403571:blk_1381622868_307904810 received exception java.io.IOException: Premature EOF from inputStream
2017-03-25 09:09:35,168 ERROR datanode.DataNode (DataXceiver.java:run(278)) - node06.domain.com:50010:DataXceiver error processing WRITE_BLOCK operation src: /ip.20:47678 dst: /ip.16:50010
java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:10:03,871 WARN util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 28204ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=28497ms
2017-03-25 09:10:48,879 WARN util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 44507ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=44799ms
2017-03-25 09:10:48,880 WARN nio.NioEventLoop (Slf4JLogger.java:warn(136)) - Selector.select() returned prematurely 512 times in a row; rebuilding selector.
2017-03-25 09:10:48,882 INFO nio.NioEventLoop (Slf4JLogger.java:info(101)) - Migrated 0 channel(s) to the new Selector.
2017-03-25 09:11:17,425 WARN util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 28045ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=28338ms
2017-03-25 09:12:03,745 WARN util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 45819ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=46107ms
2017-03-25 09:12:29,684 WARN util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 25438ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=25689ms
2017-03-25 09:13:13,018 WARN util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 42833ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=43126ms
2017-03-25 09:13:38,916 WARN util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 25397ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=25685ms
2017-03-25 09:14:22,123 INFO datanode.DataNode (BlockReceiver.java:packetSentInTime(378)) - A packet was last sent 69107 milliseconds ago.
2017-03-25 09:14:22,125 WARN util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 42708ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=43001ms
2017-03-25 09:14:22,126 INFO datanode.DataNode (BlockReceiver.java:handleMirrorOutError(433)) - DatanodeRegistration(ip.16:50010, datanodeUuid=3c14f6e4-7181-4f83-b48f-a1eb4d494aaa, infoPort=50075, infoSecurePort=0, ipcPort=8010, storageInfo=lv=-56;cid=CID-83308e38-3630-4069-b6fb-35540367ec96;nsid=804038874;c=0):Exception writing BP-1426797840-ip.11-1461158403571:blk_1381564367_307915504 to mirror ip.21:50010
java.io.IOException: Broken pipe
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
    at sun.nio.ch.IOUtil.write(IOUtil.java:65)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at java.io.DataOutputStream.flush(DataOutputStream.java:123)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:560)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:14:22,125 WARN datanode.DataNode (BlockReceiver.java:run(1347)) - The downstream error might be due to congestion in upstream including this node. Propagating the error:
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2293)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1286)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:14:22,126 WARN datanode.DataNode (BlockReceiver.java:run(1391)) - IOException in BlockReceiver.run():
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2293)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1286)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:14:22,126 INFO datanode.DataNode (BlockReceiver.java:run(1394)) - PacketResponder: BP-1426797840-ip.11-1461158403571:blk_1381564367_307915504, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[ip.21:50010]
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2293)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:244)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1286)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:14:22,126 INFO datanode.DataNode (BlockReceiver.java:run(1408)) - PacketResponder: BP-1426797840-ip.11-1461158403571:blk_1381564367_307915504, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=1:[ip.21:50010] terminating
2017-03-25 09:14:22,126 INFO datanode.DataNode (BlockReceiver.java:receiveBlock(937)) - Exception for BP-1426797840-ip.11-1461158403571:blk_1381564367_307915504
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/ip.16:50010 remote=/ip.16:39957]. 60000 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:14:22,128 INFO datanode.DataNode (DataXceiver.java:writeBlock(839)) - opWriteBlock BP-1426797840-ip.11-1461158403571:blk_1381564367_307915504 received exception java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/ip.16:50010 remote=/ip.16:39957]. 60000 millis timeout left.
2017-03-25 09:14:22,128 ERROR datanode.DataNode (DataXceiver.java:run(278)) - node06.domain.com:50010:DataXceiver error processing WRITE_BLOCK operation src: /ip.16:39957 dst: /ip.16:50010
java.io.InterruptedIOException: Interrupted while waiting for IO on channel java.nio.channels.SocketChannel[connected local=/ip.16:50010 remote=/ip.16:39957]. 60000 millis timeout left.
    at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:342)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:16:02,152 ERROR datanode.DataNode (DataXceiver.java:run(278)) - node06.domain.com:50010:DataXceiver error processing WRITE_BLOCK operation src: /ip.21:39166 dst: /ip.16:50010
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:197)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:284)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
    at java.io.DataInputStream.read(DataInputStream.java:149)
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:199)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:171)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:16:46,799 INFO datanode.DataNode (BlockReceiver.java:receiveBlock(937)) - Exception for BP-1426797840-ip.11-1461158403571:blk_1381595860_307877798
java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:898)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:806)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 09:16:46,800 WARN util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 44271ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=44532ms
2017-03-25 09:16:46,801 INFO datanode.DataNode (BlockReceiver.java:run(1372)) - PacketResponder: BP-1426797840-ip.11-1461158403571:blk_1381595860_307877798, type=LAST_IN_PIPELINE: Thread is interrupted.
2017-03-25 10:06:57,744 INFO  DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: 8c435f819b845612e8ad43dbde9ecd6d, slotIdx: 22, srvID: 3c14f6e4-7181-4f83-b48f-a1eb4d494aaa, success: true
2017-03-25 10:06:57,744 WARN  datanode.DataNode (DataXceiver.java:checkAccess(1313)) - Block token verification failed: op=READ_BLOCK, remoteAddress=/ip.16:44327, message=Block token with block_token_identifier (expiryDate=1490423302545, keyId=474606430, userId=hbase, blockPoolId=BP-1426797840-ip.11-1461158403571, blockId=1381293071, access modes=[READ]) is expired.
2017-03-25 10:06:57,745 ERROR datanode.DataNode (DataXceiver.java:run(278)) - node06.domain.com:50010:DataXceiver error processing READ_BLOCK operation  src: /ip.16:44327 dst: /ip.16:50010
org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with block_token_identifier (expiryDate=1490423302545, keyId=474606430, userId=hbase, blockPoolId=BP-1426797840-ip.11-1461158403571, blockId=1381293071, access modes=[READ]) is expired.
    at org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
    at org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:301)
    at org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:97)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1298)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:519)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 10:07:49,307 WARN  util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 51063ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=51353ms
2017-03-25 10:08:16,962 WARN  util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 27154ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=27326ms
2017-03-25 10:08:16,970 WARN  datanode.DataNode (DataXceiver.java:checkAccess(1313)) - Block token verification failed: op=READ_BLOCK, remoteAddress=/ip.16:44825, message=Block token with block_token_identifier (expiryDate=1490420151001, keyId=474606430, userId=hbase, blockPoolId=BP-1426797840-ip.11-1461158403571, blockId=1381245135, access modes=[READ]) is expired.
2017-03-25 10:08:16,970 INFO  DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: e2e1f4d3e3322c62744ddb9fc94f07f6, slotIdx: 5, srvID: 3c14f6e4-7181-4f83-b48f-a1eb4d494aaa, success: true
2017-03-25 10:08:16,970 ERROR datanode.DataNode (DataXceiver.java:run(278)) - node06.domain.com:50010:DataXceiver error processing READ_BLOCK operation  src: /ip.16:44825 dst: /ip.16:50010
org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with block_token_identifier (expiryDate=1490420151001, keyId=474606430, userId=hbase, blockPoolId=BP-1426797840-ip.11-1461158403571, blockId=1381245135, access modes=[READ]) is expired.
    at org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
    at org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:301)
    at org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:97)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1298)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:519)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:116)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:71)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)
2017-03-25 10:09:05,867 INFO  DataNode.clienttrace (DataXceiver.java:releaseShortCircuitFds(407)) - src: 127.0.0.1, dest: 127.0.0.1, op: RELEASE_SHORT_CIRCUIT_FDS, shmId: 80c3e0808d97e873a530fbc6607db29d, slotIdx: 22, srvID: 3c14f6e4-7181-4f83-b48f-a1eb4d494aaa, success: true
2017-03-25 10:09:05,867 WARN  util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 48405ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=48577ms
2017-03-25 10:09:33,596 WARN  util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 27228ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=27524ms
2017-03-25 10:09:33,597 WARN  nio.NioEventLoop (Slf4JLogger.java:warn(136)) - Selector.select() returned prematurely 512 times in a row; rebuilding selector.
2017-03-25 10:09:33,597 WARN  nio.NioEventLoop (Slf4JLogger.java:warn(136)) - Selector.select() returned prematurely 512 times in a row; rebuilding selector.
2017-03-25 10:09:33,597 INFO  nio.NioEventLoop (Slf4JLogger.java:info(101)) - Migrated 0 channel(s) to the new Selector.
2017-03-25 10:09:33,597 INFO  nio.NioEventLoop (Slf4JLogger.java:info(101)) - Migrated 0 channel(s) to the new Selector.
2017-03-25 10:10:18,219 WARN  util.JvmPauseMonitor (JvmPauseMonitor.java:run(192)) - Detected pause in JVM or host machine (eg GC): pause of approximately 44122ms
GC pool 'ConcurrentMarkSweep' had collection(s): count=1 time=44419ms
2017-03-25 10:10:38,910 INFO  datanode.DataNode (LogAdapter.java:info(47)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG: user = hdfs