01-11-2017 02:39 PM
We are trying to create a new partitioned and bucketed (1000 buckets) table from an existing partitioned table of about 750 GB. The mappers complete successfully, but during the reduce phase we get the error below and the reducers fail. In Ambari, the DataNode process and DataNode Web UI alerts report that the connection is not responding.

2017-01-10 16:45:30,869 [INFO] [Thread-90] |hdfs.DFSClient|: Exception in createBlockOutputStream
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
at sun.nio.ch.IOUtil.read(IOUtil.java:197)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:379)
at org.apache.hadoop.net.SocketInputStream$Reader.performIO(SocketInputStream.java:57)
at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at java.io.FilterInputStream.read(FilterInputStream.java:83)
at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2291)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1376)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1295)
at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:463)
2017-01-10 16:45:30,872 [INFO] [Thread-90] |hdfs.DFSClient|: Abandoning BP-1974649974--1481732158525:blk_1077768941_4031650
2017-01-10 16:45:30,891 [INFO] [Thread-90] |hdfs.DFSClient|: Excluding datanode DatanodeInfoWithStorage[:50010,DS-5dc52f7c-4497-457f-afca-c36a24b4f849,DISK]

Execution engine: Tez. Any help would be greatly appreciated.
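For reference, the job is roughly of the following shape (table and column names below are placeholders, not our actual schema):

```sql
-- Placeholder schema; our real tables differ.
SET hive.enforce.bucketing = true;
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;

-- New table: partitioned, and clustered into 1000 buckets.
CREATE TABLE target_table (
  id BIGINT,
  payload STRING
)
PARTITIONED BY (dt STRING)
CLUSTERED BY (id) INTO 1000 BUCKETS
STORED AS ORC;

-- Load from the existing (~750 GB) partitioned table; this is the
-- Tez job whose reducers fail with the exception shown above.
INSERT OVERWRITE TABLE target_table PARTITION (dt)
SELECT id, payload, dt
FROM source_table;
```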