
ERROR datanode.DataNode: error processing WRITE_BLOCK operation in the datanode logs


Hi all,

On the datanode machines (we have 8 datanode machines in the Ambari cluster) we can see the following errors.

We checked the DNS resolution of the hostnames and the reverse resolution of the IPs, and they are OK.
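For reference, a minimal sketch of the kind of forward/reverse lookup check meant here, using the hostname and IP that appear in the log lines below:

getent hosts DATANODE01.sys54.com    # forward lookup through the system resolver
nslookup 192.23.12.179               # reverse lookup of the datanode IP
hostname -f                          # FQDN the datanode reports for itself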

Any suggestions on what else we should check here?

2018-12-02 16:41:59,608 ERROR datanode.DataNode (DataXceiver.java:run(278)) - DATANODE01.sys54.com:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.23.12.179:39418 dst: /192.23.12.179:50010
2018-12-02 16:41:59,609 ERROR datanode.DataNode (DataXceiver.java:run(278)) - DATANODE01.sys54.com:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.23.12.179:39664 dst: /192.23.12.179:50010
2018-12-02 16:42:24,018 ERROR datanode.DataNode (DataXceiver.java:writeBlock(787)) - DataNode{data=FSDataset{dirpath='[/grid/sdb/hadoop/hdfs/data/current, /grid/sdc/hadoop/hdfs/data/current, /grid/sdd/hadoop/hdfs/data/current, /grid/sde/hadoop/hdfs/data/current, /grid/sdf/hadoop/hdfs/data/current]'}, localName='DATANODE01.sys54.com:50010', datanodeUuid='83024a74-8fa4-4cc4-ad09-82c5b065f8ad', xmitsInProgress=0}:Exception transfering block BP-1378391652-192.23.12.165-1531291408940:blk_1203178897_129440081 to mirror 192.23.12.181:50010: java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/192.23.12.179:35834 remote=/192.23.12.181:50010]
2018-12-02 16:42:24,018 ERROR datanode.DataNode (DataXceiver.java:run(278)) - DATANODE01.sys54.com:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.23.12.180:34342 dst: /192.23.12.179:50010
2018-12-02 16:42:30,637 ERROR datanode.DataNode (DataXceiver.java:run(278)) - DATANODE01.sys54.com:50010:DataXceiver error processing WRITE_BLOCK operation  src: /192.23.12.179:38120 dst: /192.23.12.179:50010
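The 65000 ms SocketTimeoutException in the third line occurs while this datanode forwards the block to its mirror 192.23.12.181:50010, which roughly matches the usual HDFS read timeout of 60 s plus a 5 s extension for the downstream node in the write pipeline. If it helps, the timeouts actually in effect can be read back with hdfs getconf (property names are the standard HDFS ones; values are in milliseconds):

hdfs getconf -confKey dfs.client.socket-timeout           # pipeline read timeout
hdfs getconf -confKey dfs.datanode.socket.write.timeout   # datanode write timeout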
Michael-Bronson

Master Collaborator

@Michael Bronson

This is a harmless message and you can ignore it; it will be addressed in Ambari version 2.3.0. Please see: https://issues.apache.org/jira/browse/AMBARI-12420

The DataNode code was changed in Ambari 2.3.0 so that it stops logging the EOFException when a client connects to the data transfer port and closes the connection immediately, before sending any data.
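If it is useful to confirm which exception is actually being logged (the fix above is specific to the EOFException, while the lines posted here show a SocketTimeoutException), a quick count can be taken on a datanode; the log path below is the typical HDP location and may differ on your cluster:

grep -c 'java.io.EOFException' /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log
grep -c 'SocketTimeoutException' /var/log/hadoop/hdfs/hadoop-hdfs-datanode-*.log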



We have Ambari version 2.6.1.0 (which HDP version does Ambari 2.3.0 correspond to?)

rpm -qa | grep ambari
ambari-agent-2.6.1.0-143.x86_64
ambari-metrics-monitor-2.6.1.0-143.x86_64
ambari-metrics-collector-2.5.0.3-7.x86_64
ambari-metrics-hadoop-sink-2.6.1.0-143.x86_64
ambari-server-2.6.1.0-143.x86_64
Michael-Bronson

Master Collaborator

Can you please try running the command below? hdp-select sets a given version as the current version by creating the appropriate symlinks to the folder with the corresponding version number.


hdp-select status | grep -i hdfs
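For reference, the symlinks that hdp-select manages can also be inspected directly, assuming the standard /usr/hdp layout:

hdp-select versions                            # HDP builds installed on this node
ls -l /usr/hdp/current/hadoop-hdfs-datanode    # symlink pointing at the active HDFS build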

 hdp-select status | grep -i hdfs
hadoop-hdfs-client - 2.6.4.0-91
hadoop-hdfs-datanode - 2.6.4.0-91
Michael-Bronson