DataXceiver error processing WRITE_BLOCK operation + java.io.IOException: Premature EOF from inputStream


We are using HDP-2.3.4.0. When we ingest data into HDFS using 10 Flume agents, all the DataNodes start logging the following error messages after 10-15 minutes:

2016-07-19 18:19:31,144 INFO  datanode.DataNode (DataXceiver.java:writeBlock(655)) - Receiving BP-1264119021-192.168.2.1-1454492758635:blk_1074098375_358091 src: /192.168.2.8:36648 dest: /192.168.2.16:1019
2016-07-19 18:19:31,211 INFO  datanode.DataNode (DataXceiver.java:writeBlock(655)) - Receiving BP-1264119021-192.168.2.1-1454492758635:blk_1074098376_358092 src: /192.168.2.8:36649 dest: /192.168.2.16:1019
2016-07-19 18:19:31,298 INFO  datanode.DataNode (DataXceiver.java:writeBlock(655)) - Receiving BP-1264119021-192.168.2.1-1454492758635:blk_1074098377_358093 src: /192.168.2.8:36650 dest: /192.168.2.16:1019
2016-07-19 18:19:31,553 INFO  datanode.DataNode (DataXceiver.java:writeBlock(655)) - Receiving BP-1264119021-192.168.2.1-1454492758635:blk_1074098378_358094 src: /192.168.2.8:36651 dest: /192.168.2.16:1019
2016-07-19 18:19:31,597 INFO  datanode.DataNode (DataXceiver.java:writeBlock(655)) - Receiving BP-1264119021-192.168.2.1-1454492758635:blk_1074098379_358095 src: /192.168.2.8:36652 dest: /192.168.2.16:1019
2016-07-19 18:19:31,946 INFO  datanode.DataNode (DataXceiver.java:writeBlock(655)) - Receiving BP-1264119021-192.168.2.1-1454492758635:blk_1074098381_358097 src: /192.168.2.11:42313 dest: /192.168.2.16:1019
2016-07-19 18:19:33,134 INFO  datanode.DataNode (DataXceiver.java:writeBlock(655)) - Receiving BP-1264119021-192.168.2.1-1454492758635:blk_1074098385_358101 src: /192.168.2.6:53766 dest: /192.168.2.16:1019
2016-07-19 18:19:33,153 INFO  datanode.DataNode (BlockReceiver.java:receiveBlock(934)) - Exception for BP-1264119021-192.168.2.1-1454492758635:blk_1074098385_358101
java.io.IOException: Premature EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:895)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:807)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
        at java.lang.Thread.run(Thread.java:745)
2016-07-19 18:19:33,154 INFO  datanode.DataNode (BlockReceiver.java:run(1369)) - PacketResponder: BP-1264119021-192.168.2.1-1454492758635:blk_1074098385_358101, type=HAS_DOWNSTREAM_IN_PIPELINE: Thread is interrupted.
2016-07-19 18:19:33,154 INFO  datanode.DataNode (BlockReceiver.java:run(1405)) - PacketResponder: BP-1264119021-192.168.2.1-1454492758635:blk_1074098385_358101, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-07-19 18:19:33,155 INFO  datanode.DataNode (DataXceiver.java:writeBlock(840)) - opWriteBlock BP-1264119021-192.168.2.1-1454492758635:blk_1074098385_358101 received exception java.io.IOException: Premature EOF from inputStream
2016-07-19 18:19:33,155 ERROR datanode.DataNode (DataXceiver.java:run(278)) - socsds018rm001.sods.local:1019:DataXceiver error processing WRITE_BLOCK operation  src: /192.168.2.6:53766 dst: /192.168.2.16:1019
java.io.IOException: Premature EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:895)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:807)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
        at java.lang.Thread.run(Thread.java:745)
2016-07-19 18:19:33,472 INFO  datanode.DataNode (DataXceiver.java:writeBlock(655)) - Receiving BP-1264119021-192.168.2.1-1454492758635:blk_1074098386_358102 src: /192.168.2.5:53758 dest: /192.168.2.16:1019
2016-07-19 18:19:33,489 INFO  datanode.DataNode (BlockReceiver.java:receiveBlock(934)) - Exception for BP-1264119021-192.168.2.1-1454492758635:blk_1074098386_358102
java.io.IOException: Premature EOF from inputStream
        at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
        at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:501)
        at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:895)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:807)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
        at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
        at java.lang.Thread.run(Thread.java:745)
2016-07-19 18:19:33,490 INFO  datanode.DataNode (BlockReceiver.java:run(1369)) - PacketResponder: BP-1264119021-192.168.2.1-1454492758635:blk_1074098386_358102, type=HAS_DOWNSTREAM_IN_PIPELINE: Thread is interrupted.
2016-07-19 18:19:33,490 INFO  datanode.DataNode (BlockReceiver.java:run(1405)) - PacketResponder: BP-1264119021-192.168.2.1-1454492758635:blk_1074098386_358102, type=HAS_DOWNSTREAM_IN_PIPELINE terminating

There are no obvious errors in the NameNode logs.

1 ACCEPTED SOLUTION


@Ettore Caprella

This is a harmless message and you can ignore it. It is addressed in Ambari 2.3.0. Please see: https://issues.apache.org/jira/browse/AMBARI-12420

The DataNode code was changed in Ambari 2.3.0 so that it stops logging the EOFException when a client connects to the data transfer port and closes the connection immediately without sending any data.



6 REPLIES



Thanks @SBandaru.

Just to share my experience: the problem was on the Flume side.

The Flume agents ran into OutOfMemoryError (unable to create new native thread), and the impact on HDFS was the error posted above. So I think the two errors are related, but I agree with @SBandaru that we can ignore the error message in the DataNode logs.
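
For anyone hitting the same thing, two settings worth checking on the Flume hosts are the process/thread limit (nproc) of the user running the agents and the agent JVM options in flume-env.sh. The values below are only illustrative, not the settings from our cluster:

# Check the max processes/threads allowed for the user running the Flume agents
ulimit -u

# flume-env.sh -- illustrative heap and thread-stack sizing, adjust to your workload
export JAVA_OPTS="-Xms512m -Xmx1024m -Xss512k -XX:+HeapDumpOnOutOfMemoryError"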

Ciao

Ettore


I am getting this error with Kerberos-enabled Hadoop while copying a file using copyFromLocal:

2017-05-22 17:15:25,294 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: infoobjects-Latitude-3550:1025:DataXceiver error processing unknown operation src: /127.0.0.1:35436 dst: /127.0.0.1:1025
java.io.EOFException: Premature EOF: no length prefix available
        at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2207)
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.DataTransferSaslUtil.readSaslMessageAndNegotiationCipherOptions(DataTransferSaslUtil.java:233)
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.doSaslHandshake(SaslDataTransferServer.java:369)
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.getSaslStreams(SaslDataTransferServer.java:297)
        at org.apache.hadoop.hdfs.protocol.datatransfer.sasl.SaslDataTransferServer.receive(SaslDataTransferServer.java:124)
        at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:185)
        at java.lang.Thread.run(Thread.java:745)

The error appears whenever I run copyFromLocal <filename>.

Any help will be appreciated.

Here is my hdfs-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at
    http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
        <!-- Default is 1 -->
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///home/priyanshu/hadoop_data/hdfs/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///home/priyanshu/hadoop_data/hdfs/datanode</value>
    </property>
    <!-- NameNode security config -->
    <property>
        <name>dfs.namenode.keytab.file</name>
        <value>/home/priyanshu/hadoop/zookeeper/conf/zkpr.keytab</value>
        <!-- path to the HDFS keytab -->
    </property>
    <property>
        <name>dfs.namenode.kerberos.principal</name>
        <value>zookeeper/localhost@EXAMPLE.COM</value>
    </property>
    <property>
        <name>dfs.datanode.keytab.file</name>
        <value>/home/priyanshu/hadoop/zookeeper/conf/zkpr.keytab</value>
        <!-- path to the HDFS keytab -->
    </property>
    <property>
        <name>dfs.datanode.kerberos.principal</name>
        <value>zookeeper/localhost@EXAMPLE.COM</value>
    </property>
    <!-- Secondary NameNode config -->
    <property>
        <name>dfs.secondary.namenode.keytab.file</name>
        <value>/home/priyanshu/hadoop/zookeeper/conf/zkpr.keytab</value>
    </property>
    <property>
        <name>dfs.secondary.namenode.kerberos.principal</name>
        <value>zookeeper/localhost@EXAMPLE.COM</value>
    </property>
    <!-- DataNode config -->
    <property>
        <name>dfs.datanode.address</name>
        <value>0.0.0.0:1025</value>
    </property>
    <property>
        <name>dfs.datanode.http.address</name>
        <value>0.0.0.0:1027</value>
    </property>
    <property>
        <name>dfs.data.transfer.protection</name>
        <value>authentication</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.http.policy</name>
        <value>HTTPS_ONLY</value>
    </property>
    <property>
        <name>dfs.web.authentication.kerberos.principal</name>
        <value>zookeeper/localhost@EXAMPLE.COM</value>
    </property>
    <property>
        <name>dfs.web.authentication.kerberos.keytab</name>
        <value>/home/priyanshu/hadoop/zookeeper/conf/zkpr.keytab</value>
        <!-- path to the HTTP keytab -->
    </property>
    <property>
        <name>dfs.namenode.kerberos.internal.spnego.principal</name>
        <value>${dfs.web.authentication.kerberos.principal}</value>
    </property>
    <property>
        <name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name>
        <value>${dfs.web.authentication.kerberos.principal}</value>
    </property>
</configuration>


I think your error is related to a Kerberos issue.

Is your ticket valid for the entire duration of the copyFromLocal operation?
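
For example, you can confirm the ticket with klist and, if it has expired, re-obtain it from the keytab referenced in your hdfs-site.xml (the command below assumes that keytab and principal are the right ones for your setup):

klist
kinit -kt /home/priyanshu/hadoop/zookeeper/conf/zkpr.keytab zookeeper/localhost@EXAMPLE.COM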


Here is the output showing the ticket validity:

It seems to be valid as per IST.

klist
Ticket cache: FILE:/tmp/krb5cc_1001
Default principal: zookeeper/localhost@EXAMPLE.COM
Valid starting       Expires              Service principal
2017-05-22T18:40:52  2017-05-23T04:40:52  krbtgt/EXAMPLE.COM@EXAMPLE.COM
renew until 2017-05-29T18:40:52


Do you use the zookeeper user to write data to HDFS?

Does the zookeeper user have all the necessary permissions for that?
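
For example, you could check the owner and permissions of the destination directory and, as the HDFS superuser, adjust them if needed (the path and group below are only placeholders for your actual target directory):

hdfs dfs -ls /user/zookeeper
hdfs dfs -chown -R zookeeper:hadoop /user/zookeeper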