<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: ls: Operation category READ is not supported in state standby in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61775#M70872</link>
    <description>&lt;P&gt;As noted in the previous reply, I did not have any nodes with the Failover Controller role.&amp;nbsp; Importantly, I also had not enabled Automatic Failover despite running in an HA configuration.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I went ahead and added the Failover Controller role to both namenodes - the good one and the bad one.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;After that, I attempted to enable Automatic Failover using the link shown in the screenshot from this &lt;A href="http://community.cloudera.com/t5/Storage-Random-Access-HDFS/Cannot-start-an-HA-namenode-with-name-dirs-that-need-recovery/m-p/61501#M3294" target="_self"&gt;post&lt;/A&gt;.&amp;nbsp; To do that, however, I needed to first start Zookeeper.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;At that point, if I recall correctly, the other namenode was still not active, but I then restarted the entire cluster and the automatic failover kicked in, using the other namenode as the active one and leaving the bad namenode in a stopped state.&lt;/P&gt;</description>
    <pubDate>Mon, 13 Nov 2017 19:27:15 GMT</pubDate>
    <dc:creator>epowell</dc:creator>
    <dc:date>2017-11-13T19:27:15Z</dc:date>
    <item>
      <title>ls: Operation category READ is not supported in state standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61578#M70864</link>
      <description>&lt;P&gt;I currently have one namenode in a 'stopped' state due to a node failure.&amp;nbsp; I am unable to access any data or services on the cluster, as this was the main namenode.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;However, there is a second namenode that I am hoping can be used to recover.&amp;nbsp; I have been working on the issue in this &lt;A href="https://community.cloudera.com/t5/Storage-Random-Access-HDFS/Cannot-start-an-HA-namenode-with-name-dirs-that-need-recovery/td-p/61468" target="_self"&gt;thread&lt;/A&gt; and currently I have all hdfs instances started except for the bad namenode.&amp;nbsp; This seems to have improved the situation as far as node health status, but I still can't access any data.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Here is the relevant command and error:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;ubuntu@ip-10-0-0-154:~/backup/data1$ hdfs dfs -ls hdfs://10.0.0.154:8020/
ls: Operation category READ is not supported in state standby&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;In the previous thread, I also pointed out that there was the option to enable automatic failover in CM.&amp;nbsp; I am wondering if that is the best course of action right now.&amp;nbsp; Any help is greatly appreciated.&lt;/P&gt;</description>
      <pubDate>Tue, 07 Nov 2017 16:37:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61578#M70864</guid>
      <dc:creator>epowell</dc:creator>
      <dc:date>2017-11-07T16:37:14Z</dc:date>
    </item>
    <item>
      <title>Re: ls: Operation category READ is not supported in state standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61593#M70865</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/21662"&gt;@epowell&lt;/a&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The issue might be related to the below jira which is opened a long back still in open status&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;A href="https://issues.apache.org/jira/browse/HDFS-3447" target="_blank"&gt;https://issues.apache.org/jira/browse/HDFS-3447&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;as an alternate&amp;nbsp;way to connect to hdfs, go to hdfs-site.xml and get&amp;nbsp;&lt;SPAN&gt;dfs.nameservices and try to connect to hdfs using namespace as follows, it may help you&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;hdfs://&amp;lt;ClusterName&amp;gt;-ns/&amp;lt;hdfs_path&amp;gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;Note: I didn't get a chance to explore this... also not sure how it will respond in old cdh version&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 08 Nov 2017 04:03:33 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61593#M70865</guid>
      <dc:creator>saranvisa</dc:creator>
      <dc:date>2017-11-08T04:03:33Z</dc:date>
    </item>
    <item>
      <title>Re: ls: Operation category READ is not supported in state standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61594#M70866</link>
      <description>&lt;P&gt;Thank you for your response.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I followed your advice but I am getting the error below.&amp;nbsp; This is the same error as when I try a plain 'hdfs dfs -ls' command.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;root@ip-10-0-0-154:/home/ubuntu/backup/data1# grep -B 1 -A 2 nameservices /var/run/cloudera-scm-agent/process/9908-hdfs-NAMENODE/hdfs-site.xml 
  &amp;lt;property&amp;gt;
    &amp;lt;name&amp;gt;dfs.nameservices&amp;lt;/name&amp;gt;
    &amp;lt;value&amp;gt;nameservice1&amp;lt;/value&amp;gt;
  &amp;lt;/property&amp;gt;
ubuntu@ip-10-0-0-154:~/backup/data1$ hdfs dfs -ls hdfs://nameservice1/
17/11/08 04:29:50 WARN retry.RetryInvocationHandler: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB after 1 fail over attempts. Trying to fail over after sleeping for 796ms.&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Also, I should mention that when I go to CM, it shows that my one good namenode is in 'standby'.&amp;nbsp; &amp;nbsp;Would it help to try a command like this?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;PRE&gt;./hdfs haadmin -transitionToActive &amp;lt;nodename&amp;gt;&lt;/PRE&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;A second thing is that CM shows Automatic Failover is not enabled but there is a link to 'Enable' (see screenshot).&amp;nbsp; Maybe this is another option&amp;nbsp;to&amp;nbsp;help the standby node get promoted to active?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screenshot 2017-11-07 at 21.26.49.png" style="width: 600px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/3563iB428281DE2CD6C5F/image-size/large?v=v2&amp;amp;px=999" role="button" title="Screenshot 2017-11-07 at 21.26.49.png" alt="Screenshot 2017-11-07 at 21.26.49.png" /&gt;&lt;/span&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 08 Nov 2017 04:28:20 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61594#M70866</guid>
      <dc:creator>epowell</dc:creator>
      <dc:date>2017-11-08T04:28:20Z</dc:date>
    </item>
    <item>
      <title>Re: ls: Operation category READ is not supported in state standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61596#M70867</link>
      <description>Is the Failover Controller daemon running on the remaining NameNode? If&lt;BR /&gt;not, start it up so it may elect its local NameNode into the ACTIVE state.&lt;BR /&gt;</description>
      <pubDate>Wed, 08 Nov 2017 04:54:37 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61596#M70867</guid>
      <dc:creator>Harsh J</dc:creator>
      <dc:date>2017-11-08T04:54:37Z</dc:date>
    </item>
    <item>
      <title>Re: ls: Operation category READ is not supported in state standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61597#M70868</link>
      <description>&lt;P&gt;I do not know how to check if the "&lt;SPAN&gt;Failover Controller daemon is running on the remaining NameNode".&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Can you please tell me how to check?&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Wed, 08 Nov 2017 04:56:48 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61597#M70868</guid>
      <dc:creator>epowell</dc:creator>
      <dc:date>2017-11-08T04:56:48Z</dc:date>
    </item>
    <item>
      <title>Re: ls: Operation category READ is not supported in state standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61598#M70869</link>
      <description>If you're using Cloudera Manager, you can see the Failover Controller role instances and their states under the HDFS -&amp;gt; Instances tab.&lt;BR /&gt;&lt;BR /&gt;If you're managing CDH without Cloudera Manager, then you can check on the NameNode host(s) with the below command:&lt;BR /&gt;&lt;BR /&gt;$ sudo service hadoop-hdfs-zkfc status</description>
      <pubDate>Wed, 08 Nov 2017 05:48:10 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61598#M70869</guid>
      <dc:creator>Harsh J</dc:creator>
      <dc:date>2017-11-08T05:48:10Z</dc:date>
    </item>
    <item>
      <title>Re: ls: Operation category READ is not supported in state standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61600#M70870</link>
      <description>If you're instead using tarball or an unmanaged installation, the command to run the failover controller is:&lt;BR /&gt;&lt;BR /&gt;$ hadoop-daemon.sh start zkfc&lt;BR /&gt;&lt;BR /&gt;Or for a more interactive style:&lt;BR /&gt;&lt;BR /&gt;$ hdfs zkfc</description>
      <pubDate>Wed, 08 Nov 2017 05:55:44 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61600#M70870</guid>
      <dc:creator>Harsh J</dc:creator>
      <dc:date>2017-11-08T05:55:44Z</dc:date>
    </item>
    <item>
      <title>Re: ls: Operation category READ is not supported in state standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61639#M70871</link>
      <description>&lt;P&gt;It appears I do not have any nodes with the Failover Controller role.&amp;nbsp; The screenshot below shows the hdfs instances filtered by that role.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="Screen Shot 2017-11-08 at 9.49.35 AM.png" style="width: 600px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/3564i049AEEB8AA48A38E/image-size/large?v=v2&amp;amp;px=999" role="button" title="Screen Shot 2017-11-08 at 9.49.35 AM.png" alt="Screen Shot 2017-11-08 at 9.49.35 AM.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Wed, 08 Nov 2017 16:51:38 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61639#M70871</guid>
      <dc:creator>epowell</dc:creator>
      <dc:date>2017-11-08T16:51:38Z</dc:date>
    </item>
    <item>
      <title>Re: ls: Operation category READ is not supported in state standby</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61775#M70872</link>
      <description>&lt;P&gt;As noted in the previous reply, I did not have any nodes with the Failover Controller role.&amp;nbsp; Importantly, I also had not enabled Automatic Failover despite running in an HA configuration.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I went ahead and added the Failover Controller role to both namenodes - the good one and the bad one.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;After that, I attempted to enable Automatic Failover using the link shown in the screenshot from this &lt;A href="http://community.cloudera.com/t5/Storage-Random-Access-HDFS/Cannot-start-an-HA-namenode-with-name-dirs-that-need-recovery/m-p/61501#M3294" target="_self"&gt;post&lt;/A&gt;.&amp;nbsp; To do that, however, I needed to first start Zookeeper.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;At that point, if I recall correctly, the other namenode was still not active, but I then restarted the entire cluster and the automatic failover kicked in, using the other namenode as the active one and leaving the bad namenode in a stopped state.&lt;/P&gt;</description>
      <pubDate>Mon, 13 Nov 2017 19:27:15 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/ls-Operation-category-READ-is-not-supported-in-state-standby/m-p/61775#M70872</guid>
      <dc:creator>epowell</dc:creator>
      <dc:date>2017-11-13T19:27:15Z</dc:date>
    </item>
  </channel>
</rss>

