Member since
09-23-2015
81
Posts
108
Kudos Received
41
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 5770 | 08-17-2017 05:00 PM |
| | 2302 | 06-03-2017 09:07 PM |
| | 2820 | 03-29-2017 06:02 PM |
| | 5340 | 03-07-2017 06:16 PM |
| | 1987 | 02-26-2017 06:30 PM |
01-03-2017
03:45 PM
1 Kudo
@Kristopher Kane securityProtocol is for connecting to brokers. It's not used by Curator, the ZooKeeper client library. Curator checks whether a JAAS file is provided for the JVM and whether it has a Client section in it; if so, it tries to connect to ZooKeeper over a secure channel. As I said in my previous comment, make those changes to connect to a non-secure cluster.
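For reference, a minimal JAAS file with the Client section that Curator looks for might look like the sketch below. The keytab path, principal, and serviceName are placeholder assumptions, not values from this thread:

```
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/security/keytabs/storm.service.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="zookeeper"
    principal="storm@EXAMPLE.COM";
};
```

If a file like this is passed via -Djava.security.auth.login.config, Curator will attempt a secure ZooKeeper connection regardless of the broker securityProtocol setting.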
01-02-2017
01:00 AM
@Kristopher Kane The issue is not related to this, although I suggest you use the above artifactId. Your issue is more likely a Storm configuration issue. You are running a Storm worker in secure mode, which means Ambari passes -Djava.security.auth.login.config=/etc/storm/conf/storm_jaas.conf as part of worker.childopts. Usually this storm_jaas.conf contains a JAAS section for "Client"; that section is used by the ZooKeeper client to connect to a secure ZooKeeper, and your unsecured ZooKeeper won't be able to authenticate a secure client, hence the issue. Remove the -Djava.security.auth.login.config=/etc/storm/conf/storm_jaas.conf parameter from worker.childopts via Ambari -> Storm -> Config. Restart the cluster and try again.
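As a sketch of the change in Ambari (the -Xmx value is a placeholder; only the JAAS flag matters here):

```
# Before (secure mode): Ambari appends the JAAS config to the worker JVM options
worker.childopts: "-Xmx768m -Djava.security.auth.login.config=/etc/storm/conf/storm_jaas.conf"

# After (for an unsecured ZooKeeper): drop the JAAS flag
worker.childopts: "-Xmx768m"
```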
12-09-2016
05:47 PM
2 Kudos
@Abhishek Reddy Chamakura This is a known issue that was fixed recently. Please change your storm-kafka dependency to the following and give it a try.
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka</artifactId>
    <version>1.0.1.2.5.3.0-37</version>
</dependency>
12-07-2016
10:02 AM
@Dhiraj Sardana This is a known issue that was fixed recently. Please change your storm-kafka dependency to the following and give it a try.
<dependency>
    <groupId>org.apache.storm</groupId>
    <artifactId>storm-kafka</artifactId>
    <version>1.0.1.2.5.3.0-37</version>
</dependency>
12-02-2016
06:40 PM
2 Kudos
@amankumbare Can you make sure you are passing the ZooKeeper root (chroot) as well to your kafka-topics.sh? It should look like this:
./bin/kafka-topics.sh --zookeeper node1.example.com:2181,node2.example.com:2181,node3.example.com:2181/kafka --topic name_of_topic --partitions 2 --replication-factor 2 --create
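To verify the topic landed under the chroot, a describe against the same connect string should show it (hostnames and topic name here are the example ones from above):

```shell
./bin/kafka-topics.sh --zookeeper \
  node1.example.com:2181,node2.example.com:2181/kafka \
  --topic name_of_topic --describe
```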
10-13-2016
04:01 AM
4 Kudos
@Davide Vergari This might be a side effect of several things. We shaded all of the Storm dependencies so that topologies can bring their own versions of common dependencies and Storm's libraries won't conflict with the user's topology dependencies. An easy fix would be to add the following dependency to your topology:
<dependency>
    <groupId>org.apache.curator</groupId>
    <artifactId>curator-framework</artifactId>
    <version>2.10.0</version>
    <exclusions>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
        <exclusion>
            <groupId>org.jboss.netty</groupId>
            <artifactId>netty</artifactId>
        </exclusion>
    </exclusions>
</dependency>
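To confirm which Curator version actually ends up on the topology classpath after adding the dependency, Maven's dependency tree can help (a standard Maven command, not specific to this thread):

```shell
mvn dependency:tree -Dincludes=org.apache.curator
```

This lists every curator artifact pulled in transitively, making version conflicts with the shaded Storm libraries easier to spot.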
09-06-2016
06:14 PM
1 Kudo
Kafka MirrorMaker is designed for the sole purpose of replicating Kafka topic data from one data center to another.

Pros:
1. Simple to set up.
2. Uses Kafka's producer and consumer APIs, which makes it easier to enable wire encryption (SSL) and Kerberos (NiFi can offer the same, as they both use the same API).
3. Designed to replicate all topics from the source to the target data center. Users can also pick specific topics if they so desire.

Cons:
1. Hard to monitor. As MirrorMaker is just a JVM process, provisioning and monitoring it can be hard. One needs to watch the metrics coming from MirrorMaker to see if there is any lag or no data being produced into the target cluster.
2. MirrorMaker won't preserve the origin Kafka topic offsets in the target cluster (NiFi or any other solution will run into the same limitation), since writing a new message into the target data center creates a new offset.
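For illustration, a minimal MirrorMaker invocation looks roughly like the sketch below; the property file names and the whitelist pattern are assumptions, not values from this post:

```shell
# consumer.properties points at the source cluster (zookeeper.connect / group.id),
# producer.properties points at the target cluster (its broker list).
./bin/kafka-mirror-maker.sh \
  --consumer.config consumer.properties \
  --producer.config producer.properties \
  --whitelist ".*"
```

The whitelist regex selects which topics to mirror; ".*" replicates everything, which matches the "all topics" behavior described above.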
08-26-2016
09:32 PM
HDP Kafka has a patch for the old client API (consumer, producer) to work with a Kerberized Kafka cluster. So you need to make sure the dependencies you set come from the Hortonworks Maven repo:
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.8.2.0</version>
    <exclusions>
        <exclusion>
            <groupId>org.apache.zookeeper</groupId>
            <artifactId>zookeeper</artifactId>
        </exclusion>
        <exclusion>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
        </exclusion>
    </exclusions>
</dependency>
The above is from Apache. You can use http://repo.hortonworks.com/content/groups/public/org/apache/kafka/kafka_2.10/0.9.0.2.4.2.8-3/ instead. Add repo.hortonworks.com as one of your Maven repositories and use version 0.9.0.2.4.2.8-3 instead of 0.8.2.0 in the dependency above.
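For completeness, registering the Hortonworks repo in a pom.xml would look roughly like this (the <id> value is an arbitrary label; the URL is the public group root of the repo mentioned above):

```xml
<repositories>
    <repository>
        <id>hortonworks</id>
        <url>http://repo.hortonworks.com/content/groups/public/</url>
    </repository>
</repositories>
```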
08-26-2016
06:51 PM
@Geetha Anne can you post your topology pom.xml and the kafka dependency. Are you sure you are using HDP kafka dependencies in your topology?
08-23-2016
02:21 PM
@Shun Takebayashi We will be backporting it in the next maintenance release for 2.3 and 2.4.