<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Storm error reading from Kafka after upgrade to HDP2.5 in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Storm-error-reading-form-kafka-after-upgrade-to-HDP2-5/m-p/146708#M109263</link>
    <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/14963/kristopherkane.html" nodeid="14963"&gt;@Kristopher Kane&lt;/A&gt; securityProtocol is for connecting to brokers. It is not used by Curator, the ZooKeeper client library. Curator checks whether a JAAS file is provided for the JVM and whether it contains a Client section. If so, it tries to connect to ZooKeeper over a secure channel. As I said in my previous comment, make those changes to connect to a non-secure cluster.&lt;/P&gt;</description>
    <pubDate>Tue, 03 Jan 2017 23:45:39 GMT</pubDate>
    <dc:creator>schintalapani</dc:creator>
    <dc:date>2017-01-03T23:45:39Z</dc:date>
    <item>
      <title>Storm error reading from Kafka after upgrade to HDP2.5</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Storm-error-reading-form-kafka-after-upgrade-to-HDP2-5/m-p/146702#M109257</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;We are getting the following error while running a Storm topology that reads from Kafka, after upgrading to HDP 2.5. This used to work fine on HDP 2.3. (Non-Kerberized Kafka, version 0.8.)&lt;/P&gt;&lt;PRE&gt;Unable to get offset lags for kafka. Reason: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /brokers/topics/User/partitions
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
        at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1590)
        at org.apache.curator.shaded.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:214)
        at org.apache.curator.shaded.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:203)
        at org.apache.curator.shaded.RetryLoop.callWithRetry(RetryLoop.java:108)
        at org.apache.curator.shaded.framework.imps.GetChildrenBuilderImpl.pathInForeground(GetChildrenBuilderImpl.java:200)
        at org.apache.curator.shaded.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:191)
        at org.apache.curator.shaded.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:38)
        at org.apache.storm.kafka.monitor.KafkaOffsetLagUtil.getLeadersAndTopicPartitions(KafkaOffsetLagUtil.java:317)
        at org.apache.storm.kafka.monitor.KafkaOffsetLagUtil.getOffsetLags(KafkaOffsetLagUtil.java:254)
        at org.apache.storm.kafka.monitor.KafkaOffsetLagUtil.main(KafkaOffsetLagUtil.java:127)&lt;/PRE&gt;&lt;P&gt;Any help would be appreciated.&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;</description>
      <pubDate>Sat, 10 Dec 2016 00:52:34 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Storm-error-reading-form-kafka-after-upgrade-to-HDP2-5/m-p/146702#M109257</guid>
      <dc:creator>abhishek_chama1</dc:creator>
      <dc:date>2016-12-10T00:52:34Z</dc:date>
    </item>
    <item>
      <title>Re: Storm error reading from Kafka after upgrade to HDP2.5</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Storm-error-reading-form-kafka-after-upgrade-to-HDP2-5/m-p/146703#M109258</link>
      <description>&lt;A rel="user" href="https://community.cloudera.com/users/14828/abhishekchamakura.html" nodeid="14828"&gt;@Abhishek Reddy Chamakura&lt;/A&gt;&lt;P&gt;This is a known issue that was fixed recently. Please change your storm-kafka dependency to the following and give it a try:&lt;/P&gt;&lt;PRE&gt;&amp;lt;dependency&amp;gt;
  &amp;lt;groupId&amp;gt;org.apache.storm&amp;lt;/groupId&amp;gt;
  &amp;lt;artifactId&amp;gt;storm-kafka&amp;lt;/artifactId&amp;gt;
  &amp;lt;version&amp;gt;1.0.1.2.5.3.0-37&amp;lt;/version&amp;gt;
&amp;lt;/dependency&amp;gt;&lt;/PRE&gt;</description>
      <pubDate>Sat, 10 Dec 2016 01:47:46 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Storm-error-reading-form-kafka-after-upgrade-to-HDP2-5/m-p/146703#M109258</guid>
      <dc:creator>schintalapani</dc:creator>
      <dc:date>2016-12-10T01:47:46Z</dc:date>
    </item>
    <item>
      <title>Re: Storm error reading from Kafka after upgrade to HDP2.5</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Storm-error-reading-form-kafka-after-upgrade-to-HDP2-5/m-p/146704#M109259</link>
      <description>&lt;P&gt;Thanks for the reply. I tried this and am still getting the same error.&lt;/P&gt;</description>
      <pubDate>Sat, 10 Dec 2016 03:05:05 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Storm-error-reading-form-kafka-after-upgrade-to-HDP2-5/m-p/146704#M109259</guid>
      <dc:creator>abhishek_chama1</dc:creator>
      <dc:date>2016-12-10T03:05:05Z</dc:date>
    </item>
    <item>
      <title>Re: Storm error reading from Kafka after upgrade to HDP2.5</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Storm-error-reading-form-kafka-after-upgrade-to-HDP2-5/m-p/146705#M109260</link>
      <description>&lt;P&gt;Hi &lt;A rel="user" href="https://community.cloudera.com/users/172/schintalapani.html" nodeid="172"&gt;@Sriharsha Chintalapani&lt;/A&gt; - where exactly is the problem in storm-kafka? Can you link to an Apache JIRA? I cannot get a Kerberized cluster to read from a non-Kerberized Kafka, even with 1.0.1.2.5.3.0-37. It does look like Storm is requesting a SASL connection to an unsecured ZooKeeper. Note that running the 1.0.1.2.5.3.0-37 topology in local mode does connect to the unsecured ZooKeeper and read the data from the topic. KafkaConfig.securityProtocol is "PLAINTEXT", and I set it explicitly as well.&lt;/P&gt;&lt;P&gt;Here is the underlying exception that bubbles up as the NoNodeException:&lt;/P&gt;&lt;PRE&gt;java.lang.RuntimeException: java.lang.RuntimeException: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /brokers/topics/UserJsonString/partitions
        at org.apache.storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:101) ~[stormjar.jar:?]
        at org.apache.storm.kafka.trident.ZkBrokerReader.&amp;lt;init&amp;gt;(ZkBrokerReader.java:44) ~[stormjar.jar:?]
        at org.apache.storm.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:64) ~[stormjar.jar:?]
        at org.apache.storm.kafka.KafkaSpout.open(KafkaSpout.java:78) ~[stormjar.jar:?]
        at org.apache.storm.daemon.executor$fn__6505$fn__6520.invoke(executor.clj:607) ~[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
        at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:482) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
        at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_73]
Caused by: java.lang.RuntimeException: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /brokers/topics/UserJsonString/partitions
        at org.apache.storm.kafka.DynamicBrokersReader.getNumPartitions(DynamicBrokersReader.java:115) ~[stormjar.jar:?]
        at org.apache.storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:85) ~[stormjar.jar:?]
        ... 7 more
Caused by: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /brokers/topics/UserJsonString/partitions
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:123) ~[zookeeper-3.4.6.2.5.3.0-37.jar:3.4.6-37--1]
        at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) ~[zookeeper-3.4.6.2.5.3.0-37.jar:3.4.6-37--1]
        at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1590) ~[zookeeper-3.4.6.2.5.3.0-37.jar:3.4.6-37--1]
        at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:214) ~[stormjar.jar:?]
        at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:203) ~[stormjar.jar:?]
        at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:108) ~[stormjar.jar:?]
        at org.apache.curator.framework.imps.GetChildrenBuilderImpl.pathInForeground(GetChildrenBuilderImpl.java:200) ~[stormjar.jar:?]
        at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:191) ~[stormjar.jar:?]
        at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:38) ~[stormjar.jar:?]
        at org.apache.storm.kafka.DynamicBrokersReader.getNumPartitions(DynamicBrokersReader.java:112) ~[stormjar.jar:?]
        at org.apache.storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:85) ~[stormjar.jar:?]
        ... 7 more
2016-12-27 18:18:14.879 o.a.s.util [ERROR] Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
        at org.apache.storm.util$exit_process_BANG_.doInvoke(util.clj:341) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
        at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.7.0.jar:?]
        at org.apache.storm.daemon.worker$fn__7178$fn__7179.invoke(worker.clj:765) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
        at org.apache.storm.daemon.executor$mk_executor_data$fn__6390$fn__6391.invoke(executor.clj:275) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
        at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:494) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
        at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
        at java.lang.Thread.run(Thread.java:745) [?:1.8.0_73]

&lt;/PRE&gt;</description>
      <pubDate>Wed, 28 Dec 2016 22:22:43 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Storm-error-reading-form-kafka-after-upgrade-to-HDP2-5/m-p/146705#M109260</guid>
      <dc:creator>kristopher_kane</dc:creator>
      <dc:date>2016-12-28T22:22:43Z</dc:date>
    </item>
    <item>
      <title>Re: Storm error reading from Kafka after upgrade to HDP2.5</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Storm-error-reading-form-kafka-after-upgrade-to-HDP2-5/m-p/146706#M109261</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/14963/kristopherkane.html" nodeid="14963"&gt;@Kristopher Kane&lt;/A&gt;&lt;/P&gt;&lt;P&gt;The issue is not related to that, although I do suggest you use the artifact above.&lt;/P&gt;&lt;P&gt;Your issue is more likely a Storm configuration issue. You are running a Storm worker in secure mode, which means Ambari passes &lt;STRONG&gt;-Djava.security.auth.login.config=/etc/storm/conf/storm_jaas.conf&lt;/STRONG&gt; as part of worker.childopts. Usually this storm_jaas.conf contains a JAAS section named "Client"; the ZooKeeper client uses that section to connect to a secure ZooKeeper, and your unsecured ZooKeeper won't be able to authenticate a secure client, hence the issue.&lt;/P&gt;&lt;P&gt;Remove the &lt;STRONG&gt;-Djava.security.auth.login.config=/etc/storm/conf/storm_jaas.conf&lt;/STRONG&gt; parameter from worker.childopts via Ambari-&amp;gt;Storm-&amp;gt;Config, restart the cluster, and try again.&lt;/P&gt;</description>
      <pubDate>Mon, 02 Jan 2017 09:00:52 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Storm-error-reading-form-kafka-after-upgrade-to-HDP2-5/m-p/146706#M109261</guid>
      <dc:creator>schintalapani</dc:creator>
      <dc:date>2017-01-02T09:00:52Z</dc:date>
    </item>
    <item>
      <title>Re: Storm error reading from Kafka after upgrade to HDP2.5</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Storm-error-reading-form-kafka-after-upgrade-to-HDP2-5/m-p/146707#M109262</link>
      <description>&lt;P&gt;Does that mean the topology-level configuration 'KafkaConfig.securityProtocol = "PLAINTEXT";' is not respected?&lt;/P&gt;</description>
      <pubDate>Tue, 03 Jan 2017 23:38:42 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Storm-error-reading-form-kafka-after-upgrade-to-HDP2-5/m-p/146707#M109262</guid>
      <dc:creator>kristopher_kane</dc:creator>
      <dc:date>2017-01-03T23:38:42Z</dc:date>
    </item>
    <item>
      <title>Re: Storm error reading from Kafka after upgrade to HDP2.5</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Storm-error-reading-form-kafka-after-upgrade-to-HDP2-5/m-p/146708#M109263</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/14963/kristopherkane.html" nodeid="14963"&gt;@Kristopher Kane&lt;/A&gt; securityProtocol is for connecting to brokers. It is not used by Curator, the ZooKeeper client library. Curator checks whether a JAAS file is provided for the JVM and whether it contains a Client section. If so, it tries to connect to ZooKeeper over a secure channel. As I said in my previous comment, make those changes to connect to a non-secure cluster.&lt;/P&gt;</description>
      <pubDate>Tue, 03 Jan 2017 23:45:39 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Storm-error-reading-form-kafka-after-upgrade-to-HDP2-5/m-p/146708#M109263</guid>
      <dc:creator>schintalapani</dc:creator>
      <dc:date>2017-01-03T23:45:39Z</dc:date>
    </item>
    <item>
      <title>Re: Storm error reading from Kafka after upgrade to HDP2.5</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Storm-error-reading-form-kafka-after-upgrade-to-HDP2-5/m-p/146709#M109264</link>
      <description>&lt;P&gt;Yes, I was wrong about securityProtocol and fixated on it for too long. We altered the cluster-level worker.childopts, and our ZK connections are now running as plaintext. However, we have since noticed that the new offset lag monitor is also trying to make a SASL connection to ZK. I didn't realize this at first, as that component is new to me; I simply overlooked it.&lt;/P&gt;</description>
      <pubDate>Fri, 06 Jan 2017 12:55:00 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Storm-error-reading-form-kafka-after-upgrade-to-HDP2-5/m-p/146709#M109264</guid>
      <dc:creator>kristopher_kane</dc:creator>
      <dc:date>2017-01-06T12:55:00Z</dc:date>
    </item>
  </channel>
</rss>

