<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Errors using webhdfs restful api with high availability namenode that failed over in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Errors-using-webhdfs-restful-api-with-high-availability/m-p/4385#M444</link>
    <description>You're right that you'll need to build your own failover on the client side for WebHDFS, as it presently lacks HA awareness and support.&lt;BR /&gt;&lt;BR /&gt;An easier alternative is to set up and use HttpFS as the REST gateway, which is HA-aware and offers the exact same WebHDFS API and functionality.</description>
    <pubDate>Sun, 29 Dec 2013 05:13:13 GMT</pubDate>
    <dc:creator>Harsh J</dc:creator>
    <dc:date>2013-12-29T05:13:13Z</dc:date>
    <item>
      <title>Errors using webhdfs restful api with high availability namenode that failed over</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Errors-using-webhdfs-restful-api-with-high-availability/m-p/2535#M443</link>
      <description>&lt;P&gt;When using curl to put data via the webhdfs restful api to a cluster with a high-availability name node that has failed over, there is an error message that the name node is on standby and the write fails.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Basically the curl put request goes directly to the standby name node, and it gets a referral to a data node that also includes itself as the name node rather than the active name node. It would be nice if the referral included the active name node so the subsequent write would work even though the original request went to the standby node.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We are using curl because it is simpler and lighter weight than running a full hdfs client or using flume. Obviously we can do our own failover on the client, although that requires knowledge of the name nodes. Is there a better way to do this using the webhdfs restful api?&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 08:49:35 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Errors-using-webhdfs-restful-api-with-high-availability/m-p/2535#M443</guid>
      <dc:creator>Gordo</dc:creator>
      <dc:date>2022-09-16T08:49:35Z</dc:date>
    </item>
    <item>
      <title>Re: Errors using webhdfs restful api with high availability namenode that failed over</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Errors-using-webhdfs-restful-api-with-high-availability/m-p/4385#M444</link>
      <description>You're right that you'll need to build your own failover on the client side for WebHDFS, as it presently lacks HA awareness and support.&lt;BR /&gt;&lt;BR /&gt;An easier alternative is to set up and use HttpFS as the REST gateway, which is HA-aware and offers the exact same WebHDFS API and functionality.</description>
      <pubDate>Sun, 29 Dec 2013 05:13:13 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Errors-using-webhdfs-restful-api-with-high-availability/m-p/4385#M444</guid>
      <dc:creator>Harsh J</dc:creator>
      <dc:date>2013-12-29T05:13:13Z</dc:date>
    </item>
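    <item>
      <title>Re: Errors using webhdfs restful api with high availability namenode that failed over</title>
      <description>The client-side failover the reply describes can be sketched roughly as below. This is a minimal sketch, not Cloudera-endorsed code: the NameNode hostnames and port are placeholders, and the standby detection relies on WebHDFS wrapping server-side errors in a RemoteException JSON object, with a request that lands on the standby NameNode reporting a StandbyException.

```python
import json

# Hypothetical NameNode endpoints -- replace with your cluster's hosts.
NAMENODES = ["http://nn1.example.com:50070", "http://nn2.example.com:50070"]


def is_standby_error(body):
    """Return True if a WebHDFS JSON error body reports a StandbyException.

    WebHDFS wraps server-side errors as {"RemoteException": {"exception":
    ..., "message": ...}}; the standby NameNode rejects reads and writes
    with exception "StandbyException".
    """
    try:
        data = json.loads(body)
    except ValueError:
        return False
    if not isinstance(data, dict):
        return False
    return data.get("RemoteException", {}).get("exception") == "StandbyException"


def webhdfs_urls(path, op):
    """Yield one candidate request URL per configured NameNode.

    A caller issues the request to each URL in turn and moves on to the
    next NameNode when is_standby_error() matches the response body --
    crude client-side failover over the WebHDFS REST API.
    """
    for nn in NAMENODES:
        yield "{0}/webhdfs/v1{1}?op={2}".format(nn, path, op)
```

With urllib.request (or an equivalent curl loop in a shell script), the caller would try each URL until a response arrives that is not a StandbyException; HttpFS, as noted above, avoids all of this by hiding the failover behind a single gateway endpoint.</description>
    </item>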
  </channel>
</rss>

