Storm error reading from Kafka after upgrade to HDP 2.5
Created ‎12-09-2016 04:52 PM
Hi,
We are getting the following error while running a Storm topology that reads from Kafka after upgrading to HDP 2.5. This used to work fine with HDP 2.3. (Non-Kerberized Kafka, version 0.8.)
Unable to get offset lags for kafka. Reason: org.apache.zookeeper.KeeperException$NoNodeException: KeeperErrorCode = NoNode for /brokers/topics/User/partitions
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
    at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1590)
    at org.apache.curator.shaded.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:214)
    at org.apache.curator.shaded.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:203)
    at org.apache.curator.shaded.RetryLoop.callWithRetry(RetryLoop.java:108)
    at org.apache.curator.shaded.framework.imps.GetChildrenBuilderImpl.pathInForeground(GetChildrenBuilderImpl.java:200)
    at org.apache.curator.shaded.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:191)
    at org.apache.curator.shaded.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:38)
    at org.apache.storm.kafka.monitor.KafkaOffsetLagUtil.getLeadersAndTopicPartitions(KafkaOffsetLagUtil.java:317)
    at org.apache.storm.kafka.monitor.KafkaOffsetLagUtil.getOffsetLags(KafkaOffsetLagUtil.java:254)
    at org.apache.storm.kafka.monitor.KafkaOffsetLagUtil.main(KafkaOffsetLagUtil.java:127)
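For context, the spout is wired up in the usual storm-kafka way, roughly like the sketch below; the ZooKeeper host, zkRoot, and spout id are placeholders, and only the "User" topic comes from the stack trace above.

    import org.apache.storm.kafka.KafkaSpout;
    import org.apache.storm.kafka.SpoutConfig;
    import org.apache.storm.kafka.ZkHosts;
    import org.apache.storm.topology.TopologyBuilder;

    public class UserTopology {
        public static void main(String[] args) {
            // Broker and partition metadata are discovered from ZooKeeper under
            // /brokers, the same path the offset-lag check above fails to read.
            ZkHosts zkHosts = new ZkHosts("zk1.example.com:2181");
            SpoutConfig spoutConfig = new SpoutConfig(zkHosts, "User", "/kafka_spout", "user-spout");

            TopologyBuilder builder = new TopologyBuilder();
            builder.setSpout("kafka-spout", new KafkaSpout(spoutConfig), 1);
            // ... bolts consuming from "kafka-spout" go here ...
        }
    }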
Any help would be appreciated.
Thanks
Created ‎12-09-2016 05:47 PM
This is a known issue that was fixed recently. Please change your storm-kafka dependency to the following and give it a try.
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka</artifactId>
    <version>1.0.1.2.5.3.0-37</version>
</dependency>
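If Maven cannot resolve that HDP-specific build, the Hortonworks releases repository may also need to be declared in the POM; something along these lines (verify the repository URL against your HDP release):

    <repositories>
        <repository>
            <id>hortonworks-releases</id>
            <!-- Check that this URL matches the repository documented for your HDP version -->
            <url>http://repo.hortonworks.com/content/repositories/releases/</url>
        </repository>
    </repositories>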
Created ‎12-09-2016 07:05 PM
Thanks for the reply. I tried this and am still getting the same error.
Created ‎12-28-2016 02:22 PM
Hi @Sriharsha Chintalapani - Where exactly is the problem in storm-kafka? Can you link to an Apache JIRA? I cannot get a Kerberized cluster to read from a non-Kerberized Kafka, even with 1.0.1.2.5.3.0-37. It does look like Storm is requesting a SASL connection to an unsecured ZooKeeper. Of note, running the 1.0.1.2.5.3.0-37 topology in local mode does connect to the unsecured ZooKeeper and reads the data from the topic. This happens even though KafkaConfig.securityProtocol = "PLAINTEXT"; I also set this explicitly.
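For reference, the relevant part of the spout configuration looks roughly like this (the ZooKeeper host, zkRoot, and spout id are placeholders; only the topic name is real):

    ZkHosts zkHosts = new ZkHosts("zk1.example.com:2181");
    SpoutConfig spoutConfig = new SpoutConfig(zkHosts, "UserJsonString", "/kafka_spout", "user-json-spout");
    spoutConfig.securityProtocol = "PLAINTEXT"; // set explicitly; the brokers are not Kerberized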
Here is the underlying exception behind the NoNodeException:
java.lang.RuntimeException: java.lang.RuntimeException: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /brokers/topics/UserJsonString/partitions
    at org.apache.storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:101) ~[stormjar.jar:?]
    at org.apache.storm.kafka.trident.ZkBrokerReader.<init>(ZkBrokerReader.java:44) ~[stormjar.jar:?]
    at org.apache.storm.kafka.KafkaUtils.makeBrokerReader(KafkaUtils.java:64) ~[stormjar.jar:?]
    at org.apache.storm.kafka.KafkaSpout.open(KafkaSpout.java:78) ~[stormjar.jar:?]
    at org.apache.storm.daemon.executor$fn__6505$fn__6520.invoke(executor.clj:607) ~[storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
    at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:482) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
    at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_73]
Caused by: java.lang.RuntimeException: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /brokers/topics/UserJsonString/partitions
    at org.apache.storm.kafka.DynamicBrokersReader.getNumPartitions(DynamicBrokersReader.java:115) ~[stormjar.jar:?]
    at org.apache.storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:85) ~[stormjar.jar:?]
    ... 7 more
Caused by: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /brokers/topics/UserJsonString/partitions
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:123) ~[zookeeper-3.4.6.2.5.3.0-37.jar:3.4.6-37--1]
    at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) ~[zookeeper-3.4.6.2.5.3.0-37.jar:3.4.6-37--1]
    at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1590) ~[zookeeper-3.4.6.2.5.3.0-37.jar:3.4.6-37--1]
    at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:214) ~[stormjar.jar:?]
    at org.apache.curator.framework.imps.GetChildrenBuilderImpl$3.call(GetChildrenBuilderImpl.java:203) ~[stormjar.jar:?]
    at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:108) ~[stormjar.jar:?]
    at org.apache.curator.framework.imps.GetChildrenBuilderImpl.pathInForeground(GetChildrenBuilderImpl.java:200) ~[stormjar.jar:?]
    at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:191) ~[stormjar.jar:?]
    at org.apache.curator.framework.imps.GetChildrenBuilderImpl.forPath(GetChildrenBuilderImpl.java:38) ~[stormjar.jar:?]
    at org.apache.storm.kafka.DynamicBrokersReader.getNumPartitions(DynamicBrokersReader.java:112) ~[stormjar.jar:?]
    at org.apache.storm.kafka.DynamicBrokersReader.getBrokerInfo(DynamicBrokersReader.java:85) ~[stormjar.jar:?]
    ... 7 more
2016-12-27 18:18:14.879 o.a.s.util [ERROR] Halting process: ("Worker died")
java.lang.RuntimeException: ("Worker died")
    at org.apache.storm.util$exit_process_BANG_.doInvoke(util.clj:341) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
    at clojure.lang.RestFn.invoke(RestFn.java:423) [clojure-1.7.0.jar:?]
    at org.apache.storm.daemon.worker$fn__7178$fn__7179.invoke(worker.clj:765) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
    at org.apache.storm.daemon.executor$mk_executor_data$fn__6390$fn__6391.invoke(executor.clj:275) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
    at org.apache.storm.util$async_loop$fn__554.invoke(util.clj:494) [storm-core-1.0.1.2.5.3.0-37.jar:1.0.1.2.5.3.0-37]
    at clojure.lang.AFn.run(AFn.java:22) [clojure-1.7.0.jar:?]
    at java.lang.Thread.run(Thread.java:745) [?:1.8.0_73]
Created ‎01-02-2017 01:00 AM
The issue is not related to that, although I suggest you still use the artifact version above.
Your issue is more likely a Storm configuration issue. You are running a Storm worker in secure mode, which means Ambari passes -Djava.security.auth.login.config=/etc/storm/conf/storm_jaas.conf as part of worker.childopts. Usually this storm_jaas.conf contains a JAAS section named "Client". This section is used by the ZooKeeper client to connect to a secure ZooKeeper, and your unsecured ZooKeeper won't be able to authenticate a secure client, hence the issue.
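For reference, the "Client" section that Ambari generates in storm_jaas.conf typically looks something like the following; the keytab path and principal here are placeholders and will differ on your cluster:

    Client {
        com.sun.security.auth.module.Krb5LoginModule required
        useKeyTab=true
        keyTab="/etc/security/keytabs/storm.headless.keytab"
        storeKey=true
        useTicketCache=false
        serviceName="zookeeper"
        principal="storm@EXAMPLE.COM";
    };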
Remove this param, -Djava.security.auth.login.config=/etc/storm/conf/storm_jaas.conf, from your worker.childopts via Ambari -> Storm -> Config. Restart the cluster and try again.
Created ‎01-03-2017 03:38 PM
Does that mean that the topology-level configuration 'KafkaConfig.securityProtocol = "PLAINTEXT";' is not respected?
Created ‎01-03-2017 03:45 PM
@Kristopher Kane securityProtocol is for connecting to the brokers. It's not used by Curator, the ZooKeeper client library. Curator checks whether a JAAS file is provided for the JVM and whether it has a "Client" section in it; if so, it tries to connect to ZooKeeper over a secure channel. As I said in my previous comment, make those changes to connect to a non-secure cluster.
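As a minimal, untested sketch of an alternative, assuming you want to keep the cluster-wide JAAS file in place for other topologies: ZooKeeper's client-side SASL attempt can be switched off per topology by appending the standard zookeeper.sasl.client property to that topology's worker JVM options, for example:

    import org.apache.storm.Config;
    import org.apache.storm.StormSubmitter;
    import org.apache.storm.topology.TopologyBuilder;

    public class SubmitWithoutZkSasl {
        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            // ... wire the KafkaSpout and bolts here ...

            Config conf = new Config();
            // Tell the ZooKeeper client in this topology's workers to skip SASL,
            // even though the cluster-wide storm_jaas.conf is still passed in.
            conf.put(Config.TOPOLOGY_WORKER_CHILDOPTS, "-Dzookeeper.sasl.client=false");

            StormSubmitter.submitTopology("user-json-topology", conf, builder.createTopology());
        }
    }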
Created ‎01-06-2017 04:55 AM
Yes, I was wrong about securityProtocol and fixated on it for too long. We altered the cluster-level worker.childopts and our ZK connections are now running as plaintext. However, we have since noticed that the new offset lag monitor is also trying to make a SASL connection to ZK. I didn't realize this at first, as that component is new to me; I simply overlooked it.
