<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>DataXceiver error processing read and write operations hadoop in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/DataXceiver-error-processing-read-and-write-operations/m-p/229680#M191533</link>
    <description>
&lt;P&gt;I am using Apache Hadoop 2.7.1 on CentOS 7&lt;/P&gt;

&lt;P&gt;in an HA cluster that consists of two NameNodes and 6 DataNodes,
and I noticed the following error, which always appears in my logs:&lt;/P&gt;

&lt;PRE&gt;&lt;CODE&gt;DataXceiver error processing WRITE_BLOCK operation  src: /172.16.1.153:38360 dst: /172.16.1.153:50010
java.io.IOException: Connection reset by peer
&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;so I updated the following property in hdfs-site.xml
in order to increase the number of available threads on all DataNodes:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.datanode.max.transfer.threads&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;16000&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;
&lt;/CODE&gt;&lt;/PRE&gt;
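&lt;P&gt;(as a side note, a minimal way to verify the effective value after restarting the DataNodes, assuming the hdfs client is on the PATH, is to read it back with hdfs getconf)&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;# should print 16000 once the new hdfs-site.xml has been picked up
hdfs getconf -confKey dfs.datanode.max.transfer.threads
&lt;/CODE&gt;&lt;/PRE&gt;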
&lt;P&gt;and I increased the number of open files too, by editing .bashrc:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;ulimit -n 16384
&lt;/CODE&gt;&lt;/PRE&gt;
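&lt;P&gt;(note that a ulimit line in .bashrc only affects interactive shells; here is a sketch of a persistent alternative via /etc/security/limits.conf, assuming the DataNode runs as the user hdfs, which is an assumption on my part)&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;# /etc/security/limits.conf ("hdfs" is an assumed service user)
hdfs  soft  nofile  16384
hdfs  hard  nofile  16384
&lt;/CODE&gt;&lt;/PRE&gt;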
&lt;P&gt;but I am still getting this error in my DataNode logs.&lt;/P&gt;&lt;P&gt;So, while sending write requests to the cluster, I issued the following command on the DataNodes to check the number of threads:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;cat /proc/processid/status
&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;and they never exceed 100 threads.&lt;/P&gt;
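&lt;P&gt;(for reference, a sketch that pulls just the thread count, assuming pgrep can find the DataNode JVM by its main class name)&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;# prints a line like "Threads: 85" for the first matching DataNode process
grep Threads /proc/$(pgrep -f datanode.DataNode | head -1)/status
&lt;/CODE&gt;&lt;/PRE&gt;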
&lt;P&gt;and in order to check the number of open files, I issued:&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;sysctl fs.file-nr
&lt;/CODE&gt;&lt;/PRE&gt;&lt;P&gt;and they never exceed 300 open files.&lt;/P&gt;
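&lt;P&gt;(similarly, a sketch for counting the file descriptors held open by the DataNode process itself, rather than the system-wide figure from fs.file-nr; same pgrep assumption as above)&lt;/P&gt;&lt;PRE&gt;&lt;CODE&gt;# counts open file descriptors of the first matching DataNode process
ls /proc/$(pgrep -f datanode.DataNode | head -1)/fd | wc -l
&lt;/CODE&gt;&lt;/PRE&gt;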
&lt;P&gt;So why am I always getting this error in my DataNode logs, and what is its effect on performance?&lt;/P&gt;</description>
    <pubDate>Tue, 29 Aug 2017 14:17:03 GMT</pubDate>
    <dc:creator>oula_alshiekh</dc:creator>
    <dc:date>2017-08-29T14:17:03Z</dc:date>
  </channel>
</rss>

