Member since: 11-17-2015 | Posts: 7 | Kudos Received: 6 | Solutions: 0
09-20-2016 10:27 PM
1 Kudo
The script below helps compare the configs: https://github.com/Symantec/config-compare
07-27-2016 02:35 PM
@Amber Kulkarni : Yes, you are correct; the stream name acts as the stream id.
03-19-2016 02:07 AM
3 Kudos
Pragmatic Kafka Security 0.9: Setup and Java Producer
Jump-start a Kafka security implementation: SSL for authentication, ACLs for authorization. The steps below provision 4 VMs via Vagrant, with Kafka and ZooKeeper installed on all of them. SSL authentication is enabled between the consumers and brokers, and ACLs are enabled as well.
Prerequisites
Vagrant: https://www.vagrantup.com/downloads.html
Maven
Basic Java / Kafka understanding
git
Note
The new console consumer adds a group.id by default. Main commands: vagrant suspend, vagrant resume --no-provision, vagrant destroy. To clean everything up, just run vagrant destroy -f.
Installation and Running (install commands)
Step 1
git clone https://github.com/Symantec/kafka-security-0.9.git
cd kafka-security-demo
Run Start.sh and go for coffee, or just read along with the documentation (10-15 min).
Internally this runs:
sh /vagrant/data/step1-all.sh => updates software; installs Java, Kafka, and ZooKeeper
sh /vagrant/data/step2server.sh => becomes the root CA; generates the public and private key
sh /vagrant/data/step3client.sh => generates the CA signing requests and puts them in the shared folder /vagrant/data
Step 2:
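For orientation, the certificate steps in those scripts follow the standard Kafka SSL flow. A minimal sketch, assuming default keytool/openssl usage as in the Kafka SSL docs (file names such as ca-key and kafka.server.keystore.jks are illustrative, not taken from the scripts):
# Become the root CA: create the CA private key and certificate (step2server.sh)
openssl req -new -x509 -keyout ca-key -out ca-cert -days 365
# Create a per-host keystore and a certificate signing request (step3client.sh)
keytool -keystore kafka.server.keystore.jks -alias localhost -validity 365 -genkey
keytool -keystore kafka.server.keystore.jks -alias localhost -certreq -file cert-request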
Open a new terminal at the same path ($PWD) and run the commands below:
vagrant ssh c7001 => log in to the box
sudo su => become root
sh /vagrant/data/step4Server.sh => signs the cert requests from c700* and puts the signed certs in /vagrant/data/, and also copies the root CA there
Edit the file and update the hostname for each client (default is c7002). In the new terminal at the same path ($PWD):
vagrant ssh c7002 (repeat for c7003 and c7004, one box at a time) => log in to the box
sudo su => become root
sh /vagrant/data/step5client.sh => installs both the root CA and the signed certificate
Step 3:
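Under the hood, the sign-and-install steps typically look like the standard Kafka SSL recipe; a sketch assuming default keytool/openssl usage (file names are illustrative, not taken from the scripts):
# Sign the client's cert request with the root CA (step4Server.sh)
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-request -out cert-signed -days 365 -CAcreateserial
# Install the root CA and the signed certificate (step5client.sh)
keytool -keystore kafka.server.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore kafka.server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore kafka.server.keystore.jks -alias localhost -import -file cert-signed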
Start ZooKeeper on the server:
sh zookeeper-3.4.8/bin/zkServer.sh start
Grant the server host cluster-level ACLs:
sh kafka_2.11-0.9.0.1/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --operation All --allow-principal User:* --allow-host 192.168.70.101 --add --cluster
This grants the local server machine all operations on the cluster via ACL.
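For the broker to speak SSL and enforce these ACLs, server.properties needs settings along the following lines. This is a sketch based on the Kafka 0.9 security docs, not the demo's actual file; keystore paths and passwords are placeholders:
listeners=SSL://c7001.symantec.dev.com:9093
security.inter.broker.protocol=SSL
ssl.client.auth=required
ssl.keystore.location=/path/to/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
ssl.truststore.location=/path/to/kafka.server.truststore.jks
ssl.truststore.password=changeit
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer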
Start Kafka on the server, in the background:
nohup sh kafka_2.11-0.9.0.1/bin/kafka-server-start.sh kafka_2.11-0.9.0.1/config/server.properties &
Create a topic:
sh kafka_2.11-0.9.0.1/bin/kafka-topics.sh --create --zookeeper 192.168.70.101:2181 --replication-factor 1 --partitions 1 --topic test
Allow writes to the topic from the server host:
sh kafka_2.11-0.9.0.1/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --operation Write --allow-principal User:* --allow-host 192.168.70.101 --add --topic test
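To double-check what got added, kafka-acls.sh can list the ACLs for the topic (a standard option, not part of the original write-up):
sh kafka_2.11-0.9.0.1/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic test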
Enter data. Two options:
* Manual producer:
sh kafka_2.11-0.9.0.1/bin/kafka-console-producer.sh --broker-list 192.168.70.101:9093 --topic test --producer.config securityDemo/producer.properties
* Java producer. Outside the Vagrant box, run:
mvn clean package
cp src/main/resources/Producer.Properties data/
cp target/kafka-security-demo-1.0.0-jar-with-dependencies.jar data/
* Log in to the server (vagrant ssh c7001) and run:
java -cp /vagrant/data/kafka-security-demo-1.0.0-jar-with-dependencies.jar com.symantec.cpe.KafkaProducer /vagrant/data/Producer.Properties
* Allow c7002 to read data:
sh kafka_2.11-0.9.0.1/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --operation Read --allow-principal User:* --allow-host 192.168.70.102 --add --topic test --group group102
Consumer
On the client c7002, add the consumer group:
vim securityDemo/producer.properties
group.id=group102
Run the new consumer:
sh kafka_2.11-0.9.0.1/bin/kafka-console-consumer.sh --bootstrap-server c7001.symantec.dev.com:9093 --topic test --from-beginning --new-consumer --consumer.config securityDemo/producer.properties
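For reference, the client-side properties file passed via --producer.config / --consumer.config looks roughly like this. A sketch assuming the standard Kafka 0.9 SSL client settings; truststore/keystore paths and passwords are placeholders:
security.protocol=SSL
ssl.truststore.location=/path/to/kafka.client.truststore.jks
ssl.truststore.password=changeit
ssl.keystore.location=/path/to/kafka.client.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
group.id=group102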
List important functions with example commands:
sh kafka_2.11-0.9.0.1/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list
sh kafka_2.11-0.9.0.1/bin/kafka-topics.sh --list --zookeeper localhost:2181
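Two more standard commands that are useful at this point (stock kafka-acls.sh / kafka-topics.sh options, not from the original demo):
# Remove the read ACL granted to c7002 above
sh kafka_2.11-0.9.0.1/bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --remove --operation Read --allow-principal User:* --allow-host 192.168.70.102 --topic test --group group102
# Show partition and replica details for the test topic
sh kafka_2.11-0.9.0.1/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test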
Contributions
Narendra Bidari
Mahipal
References, Additional Information
https://msdn.microsoft.com/en-us/library/windows/desktop/aa382479(v=vs.85).aspx
http://stackoverflow.com/questions/35653128/kafka-ssl-security-setup-causing-issue
http://kafka.apache.org/documentation.html#security_ssl
https://www.sslshopper.com/article-most-common-java-keytool-keystore-commands.html
https://chrisjean.com/change-timezone-in-centos/
http://docs.confluent.io/2.0.0/kafka/ssl.html
https://developer.ibm.com/messaging/2016/03/03/message-hub-kafka-java-api/
http://kafka.apache.org/documentation.html#security_authz
https://cwiki.apache.org/confluence/display/KAFKA/Security
01-18-2016 06:50 PM
@Jeremy Dyer : Thanks for the answer. I now understand that clientId is not the same as groupId. I could not follow the second part of the answer, though. My understanding: if we consume data from Kafka/ZooKeeper, an offset is maintained in ZooKeeper under folders such as /transactional or /consumers, keyed by group id. In TridentKafkaConfig there is no option to specify a groupId at all. Is groupId the same as streamId? If so, where is its offset saved in ZooKeeper? I don't see any offset on the source Kafka/ZooKeeper (in the ZooKeeper folder /transactional).
01-14-2016 07:22 PM
1 Kudo
GroupId/ClientId: I am reading from Kafka via the Trident Kafka spout (opaque transactional spout). On restart, if I change the clientId (passed into TridentKafkaConfig), I don't see my spout re-reading data from the initial data point. Is clientId the same as groupId? But if I change the stream name, the spout starts getting data from the beginning. https://github.com/apache/storm/blob/master/external/storm-kafka/src/jvm/org/apache/storm/kafka/trident/TridentKafkaConfig.java @Ram Sriharsha
Labels: Apache Kafka, Apache Storm
12-08-2015 02:01 AM
1 Kudo
Problem: read from sharded RabbitMQ and write to Kafka with exactly-once semantics. I am planning to use Trident for this. I have used a RabbitMQ spout for Storm that implements BaseRichSpout (https://github.com/ppat/storm-rabbitmq). I can see that it is a non-transactional spout, and hence would not support Trident's exactly-once semantics. Am I correct? If I need to build an opaque transactional spout for RabbitMQ, can you give me some hints on how to start, or some examples to look into? I also want to write unit tests to check whether my Trident topology supports exactly-once semantics. Are there any sample unit tests/examples? The reason I want to write the tests is that, as I understand it, exactly-once semantics depend on the spout as well, not just on Trident or Storm. Finally, I have to support sharded RabbitMQ, so how do I connect to more than one queue? I am thinking of using Stream.merge(List<Streams>) after creating one stream for each queue; is this a good idea? Let me know your thoughts. https://nathanmarz.github.io/storm/doc/storm/tride... @Ram Sriharsha
Labels: Apache Kafka, Apache Storm