<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>History server fails to start on a new HA HDP 2.3.4.7.4 cluster - Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157005#M119418</link>
    <description>&lt;P&gt;HDP-2.3.4.7-4 &lt;/P&gt;&lt;P&gt;Ambari Version 2.2.1.1 &lt;/P&gt;&lt;P&gt;All services are up and running except for History server. Could not find any related errors in namenode or data node logs.&lt;/P&gt;&lt;P&gt;Following is the error reported by Ambari.&lt;/P&gt;&lt;P&gt; File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 191, in run_command
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT -T /usr/hdp/2.3.4.7-4/hadoop/mapreduce.tar.gz 'http://standbynamenode.sample.com:50070/webhdfs/v1/hdp/apps/2.3.4.7-4/mapreduce/mapreduce.tar.gz?op=CREATE&amp;amp;user.name=hdfs&amp;amp;overwrite=True&amp;amp;permission=444'' returned status_code=403. 
{
  "RemoteException": {
    "exception": "ConnectException", 
    "javaClassName": "java.net.ConnectException", 
    "message": "Call From datanode.sample.com/10.250.98.101 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused"
  }
}&lt;/P&gt;&lt;P&gt;Status code 403 indicates that the request is understood, but probably not authorized?&lt;/P&gt;&lt;P&gt;Any pointers will be helpful.&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;</description>
    <pubDate>Sat, 25 Jun 2016 00:42:30 GMT</pubDate>
    <dc:creator>mohanamurali_gu</dc:creator>
    <dc:date>2016-06-25T00:42:30Z</dc:date>
    <item>
      <title>History server fails to start on a new HA HDP 2.3.4.7.4  cluster</title>
      <link>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157005#M119418</link>
      <description>&lt;P&gt;HDP-2.3.4.7-4 &lt;/P&gt;&lt;P&gt;Ambari Version 2.2.1.1 &lt;/P&gt;&lt;P&gt;All services are up and running except for History server. Could not find any related errors in namenode or data node logs.&lt;/P&gt;&lt;P&gt;Following is the error reported by Ambari.&lt;/P&gt;&lt;P&gt; File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 191, in run_command
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X PUT -T /usr/hdp/2.3.4.7-4/hadoop/mapreduce.tar.gz 'http://standbynamenode.sample.com:50070/webhdfs/v1/hdp/apps/2.3.4.7-4/mapreduce/mapreduce.tar.gz?op=CREATE&amp;amp;user.name=hdfs&amp;amp;overwrite=True&amp;amp;permission=444'' returned status_code=403. 
{
  "RemoteException": {
    "exception": "ConnectException", 
    "javaClassName": "java.net.ConnectException", 
    "message": "Call From datanode.sample.com/10.250.98.101 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused"
  }
}&lt;/P&gt;&lt;P&gt;Status code 403 indicates that the request is understood, but probably not authorized?&lt;/P&gt;&lt;P&gt;Any pointers will be helpful.&lt;/P&gt;&lt;P&gt;Thanks,&lt;/P&gt;</description>
      <pubDate>Sat, 25 Jun 2016 00:42:30 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157005#M119418</guid>
      <dc:creator>mohanamurali_gu</dc:creator>
      <dc:date>2016-06-25T00:42:30Z</dc:date>
    </item>
    <item>
      <title>Re: History server fails to start on a new HA HDP 2.3.4.7.4  cluster</title>
      <link>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157006#M119419</link>
      <description>&lt;A rel="user" href="https://community.cloudera.com/users/3779/mohanamuraligurunathan.html" nodeid="3779"&gt;@Mohana Murali Gurunathan&lt;/A&gt;&lt;P&gt;I can see that while starting it is trying to write /usr/hdp/2.3.4.7-4/hadoop/mapreduce.tar.gz at /hdp/apps/2.3.4.7-4/mapreduce/mapreduce.tar.gz on hdfs. it's unable to write to HDFS because of connection refused error.&lt;/P&gt;&lt;P&gt;If you look at the logs carefully, you can see that instead of namenode hostname, datanode is trying to connect to localhost:8020 which is failing as expected.&lt;/P&gt;&lt;PRE&gt;exception": "ConnectException", "javaClassName": "java.net.ConnectException", "message": "Call Fromdatanode.sample.com/10.250.98.101 to localhost:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: &lt;A href="http://wiki.apache.org/hadoop/ConnectionRefused" target="_blank"&gt;http://wiki.apache.org/hadoop/ConnectionRefused&lt;/A&gt;&lt;/PRE&gt;&lt;P&gt;Can you please check /etc/hosts file on all the datanodes just to ensure that you have added correct entries for the namennode?&lt;/P&gt;</description>
      <pubDate>Sat, 25 Jun 2016 01:04:35 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157006#M119419</guid>
      <dc:creator>KuldeepK</dc:creator>
      <dc:date>2016-06-25T01:04:35Z</dc:date>
    </item>
    <item>
      <title>Re: History server fails to start on a new HA HDP 2.3.4.7.4  cluster</title>
      <link>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157007#M119420</link>
      <description>&lt;P&gt;@Kuldeep - Yes, the /etc/hosts file on all the nodes (including data nodes) has the right details for the namenode and the other nodes in the cluster. True, it is really not clear why the datanode is trying to connect to 8020 on localhost. It should have contacted the namenode. This is a freshly created cluster and no operations have started yet.&lt;/P&gt;</description>
      <pubDate>Sat, 25 Jun 2016 01:19:48 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157007#M119420</guid>
      <dc:creator>mohanamurali_gu</dc:creator>
      <dc:date>2016-06-25T01:19:48Z</dc:date>
    </item>
    <item>
      <title>Re: History server fails to start on a new HA HDP 2.3.4.7.4  cluster</title>
      <link>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157008#M119421</link>
      <description>&lt;P&gt;@Kuldeep - Tried some Hadoop operations like ls or put.&lt;/P&gt;&lt;P&gt;Every command is failing, as each request is connecting to localhost:8020 rather than to the namenode or the standby namenode. Checked the configs involving 8020; see the attached file.&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.cloudera.com/legacyfs/online/attachments/5233-8020.jpg"&gt;8020.jpg&lt;/A&gt;&lt;/P&gt;</description>
      <pubDate>Sat, 25 Jun 2016 01:30:30 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157008#M119421</guid>
      <dc:creator>mohanamurali_gu</dc:creator>
      <dc:date>2016-06-25T01:30:30Z</dc:date>
    </item>
    <item>
      <title>Re: History server fails to start on a new HA HDP 2.3.4.7.4  cluster</title>
      <link>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157009#M119422</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/3779/mohanamuraligurunathan.html" nodeid="3779"&gt;@Mohana Murali Gurunathan&lt;/A&gt; - Please remove localhost and add hostname of your namenode in the configuration for fs.defaultFS.&lt;/P&gt;&lt;P&gt;current value - localhost:8020&lt;/P&gt;&lt;P&gt;recommended value - &amp;lt;hostname-of-namenode&amp;gt;:8020&lt;/P&gt;</description>
      <pubDate>Sat, 25 Jun 2016 16:24:46 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157009#M119422</guid>
      <dc:creator>KuldeepK</dc:creator>
      <dc:date>2016-06-25T16:24:46Z</dc:date>
    </item>
    <item>
      <title>Re: History server fails to start on a new HA HDP 2.3.4.7.4  cluster</title>
      <link>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157010#M119423</link>
      <description>&lt;P&gt;Got it! &lt;/P&gt;&lt;P&gt;fs.defaultFS - This is in core-site.xml.&lt;/P&gt;&lt;P&gt;The value should be set to hdfs://namespaceid (where namespaceid is the namespace that has been defined for the cluster). It works!&lt;/P&gt;</description>
      <pubDate>Sat, 25 Jun 2016 16:25:59 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157010#M119423</guid>
      <dc:creator>mohanamurali_gu</dc:creator>
      <dc:date>2016-06-25T16:25:59Z</dc:date>
    </item>
    <item>
      <title>Re: History server fails to start on a new HA HDP 2.3.4.7.4  cluster</title>
      <link>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157011#M119424</link>
      <description>&lt;P&gt;Thanks, Kuldeep, for your inputs. Finally found the reason: the value should be the namespace that we have chosen for the cluster, because the cluster I was setting up is an HA cluster. If we put a specific hostname, we will be in trouble if that host is not available (if it is down). By keeping the namespace, things are better. Thanks for your inputs.&lt;/P&gt;</description>
      <pubDate>Sat, 25 Jun 2016 16:30:27 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157011#M119424</guid>
      <dc:creator>mohanamurali_gu</dc:creator>
      <dc:date>2016-06-25T16:30:27Z</dc:date>
    </item>
    <item>
      <title>Re: History server fails to start on a new HA HDP 2.3.4.7.4  cluster</title>
      <link>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157012#M119425</link>
      <description>&lt;P&gt;Please note that the namespaceid referred to here is not the one you find in the file /hadoop/hdfs/namenode/current/VERSION; it is the value of the following property - dfs.nameservices&lt;/P&gt;</description>
      <pubDate>Sat, 25 Jun 2016 16:35:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157012#M119425</guid>
      <dc:creator>mohanamurali_gu</dc:creator>
      <dc:date>2016-06-25T16:35:12Z</dc:date>
    </item>
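The resolution in the posts above (setting fs.defaultFS to the logical nameservice from dfs.nameservices rather than a single host:port) can be sketched in Python. This is a minimal illustration of the client-side decision, not Hadoop's actual implementation; the helper resolve_default_fs and the "mycluster" nameservice ID are hypothetical, and the hostnames are the placeholder names from the thread.

```python
# Minimal sketch (illustrative, not Hadoop's real client code): how the
# authority in fs.defaultFS determines where a client connects. With HA,
# it must be the logical nameservice from dfs.nameservices, not host:port.
from urllib.parse import urlparse

def resolve_default_fs(conf):
    """Return what an HDFS client would contact for fs.defaultFS."""
    uri = urlparse(conf["fs.defaultFS"])
    nameservices = conf.get("dfs.nameservices", "").split(",")
    if uri.hostname in nameservices:
        # Logical nameservice: the HA failover proxy picks the active
        # NameNode from the dfs.ha.namenodes.NSID entries (not shown here).
        return ("nameservice", uri.hostname)
    # Plain host:port: the client talks to that one host only. If it is
    # "localhost", every node ends up dialing itself, exactly the
    # "Call From ... to localhost:8020 ... Connection refused" seen above.
    return ("host", "{}:{}".format(uri.hostname, uri.port or 8020))

# Misconfigured cluster from the thread: every node dials localhost:8020
print(resolve_default_fs({"fs.defaultFS": "hdfs://localhost:8020"}))
# prints ('host', 'localhost:8020')

# HA-correct configuration: clients use the logical nameservice
print(resolve_default_fs({"fs.defaultFS": "hdfs://mycluster",
                          "dfs.nameservices": "mycluster"}))
# prints ('nameservice', 'mycluster')
```

Because the nameservice is a logical name resolved through the failover proxy, clients keep working when either NameNode goes down, which is the point made in the acceptance post above.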
    <item>
      <title>Re: History server fails to start on a new HA HDP 2.3.4.7.4  cluster</title>
      <link>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157013#M119426</link>
      <description>&lt;P&gt;Please, I have the same problem but I don't understand your reply. Could you please explain?&lt;/P&gt;</description>
      <pubDate>Wed, 21 Dec 2016 07:22:59 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/History-server-fails-to-start-on-a-new-HA-HDP-2-3-4-7-4/m-p/157013#M119426</guid>
      <dc:creator>wael_horchani1</dc:creator>
      <dc:date>2016-12-21T07:22:59Z</dc:date>
    </item>
  </channel>
</rss>

