<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question: In Hadoop 2.7.2 (CentOS 7) Cluster, Datanode starts but doesn't connect to namenode in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/In-Hadoop-2-7-2-CentOS-7-Cluster-Datanode-starts-but-doesn-t/m-p/153067#M32732</link>
    <description>&lt;P&gt;
	I installed a three-node Hadoop cluster. The master and slave nodes start individually, but the DataNode isn't shown in the NameNode web UI. The DataNode's log file shows the following error:&lt;/P&gt;&lt;PRE&gt;2016-06-18 21:23:53,980 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.1.100:9000. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-18 21:23:55,029 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.1.100:9000. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-18 21:23:56,030 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.1.100:9000. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-18 21:23:57,031 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.1.100:9000. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
2016-06-18 21:23:58,032 INFO org.apache.hadoop.ipc.Client: Retrying connect to server: namenode/192.168.1.100:9000. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)&lt;/PRE&gt;&lt;P&gt;NameNode machine's information:&lt;/P&gt;&lt;P&gt;cat /etc/hosts&lt;/P&gt;&lt;PRE&gt;#127.0.0.1   localhost localhost.localdomain localhost4            localhost4.localdomain4
#::1         localhost localhost.localdomain localhost6        localhost6.localdomain6
192.168.1.100 namenode
192.168.1.101 datanode1
192.168.1.102 datanode2&lt;/PRE&gt;&lt;P&gt;cat /etc/sysconfig/network-scripts/ifcfg-eth0&lt;/P&gt;&lt;PRE&gt;DEVICE=eth0
IPV6INIT=yes
BOOTPROTO=dhcp
UUID=61fe61d3-fcda-4fed-ba81-bfa767e0270a
ONBOOT=yes
TYPE=Ethernet
DEFROUTE=yes
PEERDNS=yes
PEERROUTES=yes
IPV4_FAILURE_FATAL=no
IPV6_AUTOCONF=yes
IPV6_DEFROUTE=yes
IPV6_PEERDNS=yes
IPV6_PEERROUTES=yes
IPV6_FAILURE_FATAL=no
NAME="System eth0"
BOOTPROTO="static" 
ONBOOT="yes" 
IPADDR=192.168.1.100 
GATEWAY=192.168.1.1 
NETMASK=255.255.255.0 
DNS1=192.168.1.1 &lt;/PRE&gt;&lt;P&gt;cat /etc/hostname&lt;/P&gt;&lt;P&gt;namenode&lt;/P&gt;&lt;P&gt;cat core-site.xml&lt;/P&gt;&lt;PRE&gt;&amp;lt;configuration&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;hadoop.tmp.dir&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;/home/hadoop/tmp&amp;lt;/value&amp;gt;
    &amp;lt;description&amp;gt;Abase for other temporary directories.&amp;lt;/description&amp;gt;
&amp;lt;/property&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;fs.defaultFS&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;hdfs://namenode:9000&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;io.file.buffer.size&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;4096&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;&lt;/PRE&gt;&lt;P&gt;cat hdfs-site.xml&lt;/P&gt;&lt;PRE&gt;&amp;lt;configuration&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.nameservices&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;hadoop-cluster1&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.namenode.secondary.http-address&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;namenode:50090&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.namenode.name.dir&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;file:///home/hadoop/dfs/name&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.datanode.data.dir&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;file:///home/hadoop/dfs/data&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.replication&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;2&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.webhdfs.enabled&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;true&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;&lt;/PRE&gt;&lt;P&gt;cat mapred-site.xml&lt;/P&gt;&lt;PRE&gt;&amp;lt;configuration&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;mapreduce.framework.name&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;yarn&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;mapreduce.jobtracker.http.address&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;namenode:50030&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;mapreduce.jobhistory.address&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;namenode:10020&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;mapreduce.jobhistory.webapp.address&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;namenode:19888&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;&lt;/PRE&gt;&lt;P&gt;cat yarn-site.xml&lt;/P&gt;&lt;PRE&gt;&amp;lt;configuration&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;yarn.nodemanager.aux-services&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;mapreduce_shuffle&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;yarn.resourcemanager.address&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;namenode:8032&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;yarn.resourcemanager.scheduler.address&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;namenode:8030&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;yarn.resourcemanager.resource-tracker.address&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;namenode:8031&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;yarn.resourcemanager.admin.address&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;namenode:8033&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;
&amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;yarn.resourcemanager.webapp.address&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;namenode:8088&amp;lt;/value&amp;gt;
&amp;lt;/property&amp;gt;



&lt;/PRE&gt;&lt;P&gt;cat slaves&lt;/P&gt;&lt;PRE&gt;datanode1
datanode2&lt;/PRE&gt;</description>
    <pubDate>Thu, 23 Jun 2016 14:37:09 GMT</pubDate>
    <dc:creator>xunlei1221</dc:creator>
    <dc:date>2016-06-23T14:37:09Z</dc:date>
    <item>
      <title>In Hadoop 2.7.2 (CentOS 7) Cluster, Datanode starts but doesn't connect to namenode</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/In-Hadoop-2-7-2-CentOS-7-Cluster-Datanode-starts-but-doesn-t/m-p/153067#M32732</link>
      <pubDate>Thu, 23 Jun 2016 14:37:09 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/In-Hadoop-2-7-2-CentOS-7-Cluster-Datanode-starts-but-doesn-t/m-p/153067#M32732</guid>
      <dc:creator>xunlei1221</dc:creator>
      <dc:date>2016-06-23T14:37:09Z</dc:date>
    </item>
    <item>
      <title>Re: In Hadoop 2.7.2 (CentOS 7) Cluster, Datanode starts but doesn't connect to namenode</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/In-Hadoop-2-7-2-CentOS-7-Cluster-Datanode-starts-but-doesn-t/m-p/153068#M32733</link>
      <description>&lt;P&gt;First, please check whether the NameNode is up and running. If it is, run the command below from the DataNode host and see whether it can connect to the NameNode port.&lt;/P&gt;&lt;P&gt;From the DataNode:&lt;/P&gt;&lt;PRE&gt;telnet 192.168.1.100 9000&lt;/PRE&gt;&lt;P&gt;If telnet does not respond, you may have firewall rules blocking the connection. Try disabling the firewall and see whether that makes any difference.&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2016 15:29:49 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/In-Hadoop-2-7-2-CentOS-7-Cluster-Datanode-starts-but-doesn-t/m-p/153068#M32733</guid>
      <dc:creator>jyadav</dc:creator>
      <dc:date>2016-06-23T15:29:49Z</dc:date>
    </item>
    <item>
      <title>Re: In Hadoop 2.7.2 (CentOS 7) Cluster, Datanode starts but doesn't connect to namenode</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/In-Hadoop-2-7-2-CentOS-7-Cluster-Datanode-starts-but-doesn-t/m-p/153069#M32734</link>
      <description>&lt;P&gt;You are right, the firewall is running.&lt;/P&gt;</description>
      <pubDate>Thu, 23 Jun 2016 20:18:58 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/In-Hadoop-2-7-2-CentOS-7-Cluster-Datanode-starts-but-doesn-t/m-p/153069#M32734</guid>
      <dc:creator>xunlei1221</dc:creator>
      <dc:date>2016-06-23T20:18:58Z</dc:date>
    </item>
  </channel>
</rss>

