Member since: 06-14-2016
Posts: 69
Kudos Received: 28
Solutions: 7
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 5722 | 07-30-2018 06:45 PM |
 | 3734 | 06-22-2018 10:28 PM |
 | 843 | 06-20-2018 04:29 AM |
 | 777 | 06-20-2018 04:24 AM |
 | 1898 | 06-15-2018 08:24 PM |
09-25-2018
10:49 PM
Hi @Gitanjali Bare, Could you please share the complete error you are getting? Thanks!
08-17-2018
06:28 AM
Hi @Surendra Shringi, may I know whether you are connecting to a Kerberized Kafka or an unsecured one? For a template, kindly refer to: https://community.hortonworks.com/articles/57262/integrating-apache-nifi-and-apache-kafka.html It's an old article, but it covers the basics in detail. Also, is Kafka in the same HDF cluster, or are you using a separate Kafka cluster? Thanks!
08-16-2018
06:26 PM
@Surendra Shringi Hi, could you please try passing the FQDN of the Kafka broker in the 'Kafka Brokers' property of the processor? Also, may I know what version of Kafka you are trying to publish to? Thanks!
08-04-2018
08:02 PM
Hi @Vadim Dzyuban, thank you for the detailed explanation. It was my mistake that I missed the part where you mentioned the SASL mechanism as PLAIN and not GSSAPI. In that case you do not need to pass "a1.sources.s1.kafka.consumer.security.protocol=PLAINTEXT". Regarding this error: "Unable to start PollableSourceRunner:...{name:s1, state:IDLE} counterGroup" — could you please provide the complete error stack for more details on this error? Thank you!
08-03-2018
08:35 PM
@Vadim Dzyuban Hi, yes, the serviceName is required in the JAAS file when you are connecting to secure (Kerberos-enabled) Kafka; for reference: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_security/content/secure-kafka-config-options.html If you do not have Kerberos on Kafka, then you need not use: a1.sources.s1.kafka.consumer.security.protocol=SASL_PLAINTEXT Instead you can just use: a1.sources.s1.kafka.consumer.security.protocol=PLAINTEXT But I am a little confused by your statement that "Flume and Kafka are running on different RHEL servers and Kafka secured with the security.protocol=SASL_PLAINTEXT". Can you verify the listeners property in your Kafka broker and let me know which security protocol it is listening on? Thank you!
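To illustrate the two setups (a sketch only — the agent/source names a1/s1 come from your config, and the broker hostname below is a placeholder), the Flume side and the matching broker listener would look roughly like:

```properties
# Kerberos-enabled Kafka: use the SASL listener protocol
a1.sources.s1.kafka.consumer.security.protocol = SASL_PLAINTEXT

# Unsecured Kafka: use a plain listener instead
# a1.sources.s1.kafka.consumer.security.protocol = PLAINTEXT

# On the broker side, server.properties should advertise a matching listener,
# for example:
# listeners=SASL_PLAINTEXT://broker-fqdn:6667
```

The key point is that the client's security.protocol must match whatever the broker's listeners property advertises.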
07-30-2018
06:54 PM
@dhieru singh No problem at all, I am glad it helped!
07-30-2018
06:47 PM
Hi @nisrine elloumi, yes, as I suspected in the first point, it was an authorization issue. Thank you for sharing.
07-30-2018
06:45 PM
@dhieru singh Hi, below are the options that you can use with ./kafka-consumer-groups.sh --reset-offsets. For a specific topic you can use the --topic option instead of --all-topics. Please let me know if that answers your question. Thank you!
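For reference, here is a hedged sketch of the reset scenarios available in recent Kafka releases (verify against your version's --help output; the broker, group, and topic values are placeholders):

```shell
./kafka-consumer-groups.sh --bootstrap-server broker-fqdn:6667 \
  --group my-group --reset-offsets --all-topics --to-earliest --dry-run

# Other scenario flags include:
#   --to-latest            reset to the log-end offset
#   --to-offset <n>        reset to an absolute offset
#   --shift-by <n>         shift the current offset by n (may be negative)
#   --to-datetime <ts>     reset to the offsets at a given timestamp
#   --by-duration <d>      reset to current time minus a duration
#   --from-file <csv>      load per-partition offsets from a file
# Replace --dry-run with --execute to actually apply the reset.
```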
07-30-2018
06:34 PM
@Vadim Dzyuban Hi, the property kafka.consumer.auto.offset.reset comes into the picture only when there is no initial offset in Kafka, or when the current offset no longer exists on the server. See the Flume docs: https://flume.apache.org/FlumeUserGuide.html One workaround I can think of is changing the kafka.consumer.group.id and restarting the agent. Kindly let me know if that helps. Thank you!
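As a sketch of that workaround (agent/source names follow the a1/s1 convention; the new group id is hypothetical):

```properties
# A fresh group id has no committed offsets, so auto.offset.reset
# takes effect again on restart:
a1.sources.s1.kafka.consumer.group.id = flume-group-v2
a1.sources.s1.kafka.consumer.auto.offset.reset = earliest
```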
07-27-2018
07:53 PM
@nisrine elloumi
1. May I know whether you have Ranger enabled, and whether the user 'atlas' has the proper permissions? If Ranger is enabled, please verify that a policy is enabled for both topics: ATLAS_HOOK and ATLAS_ENTITIES. If Ranger is not enabled, the atlas user should be granted ACLs for read/write operations. If you are using Ranger, kindly refer to the section 'Configure Atlas Security' of the following doc for a detailed explanation: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_data-governance/content/ch_hdp_data_governance_install_atlas_ambari.html
2. Kindly run the following command and provide the output: /usr/hdp/current/kafka-broker/bin/kafka-topics.sh --describe --topic ATLAS_ENTITIES --zookeeper <zk_host>:2181
Also, please verify that the broker name in the Atlas properties is correct (as configured in the listeners property in Kafka). Thank you!
07-26-2018
08:29 PM
@nisrine elloumi Hi, could you please verify the Kafka broker and port that you have configured in Atlas? And since you have tagged kerberos, I assume this is a Kerberized environment; in that case, could you please also check the security.protocol in the Atlas properties used to connect to secure Kafka? Thank you!
07-11-2018
06:52 PM
Thank you @sohan kanamarlapudi. Did you set partition.assignment.strategy to null or an empty string in the properties file read by your Spark application? Possible values for this property are range or roundrobin, and the default is [org.apache.kafka.clients.consumer.RangeAssignor]. Reference: https://kafka.apache.org/0100/documentation.html#newconsumerconfigs Is it possible for you to share the code snippet where you configured the Kafka consumer? (Kindly omit any sensitive information.) Thanks!
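For illustration, a consumer properties file would normally either omit this property or set it explicitly (class names per the Kafka 0.10 docs linked above):

```properties
# Default assignor for the new consumer API; leaving the property
# unset is equivalent. An empty value is not valid.
partition.assignment.strategy=org.apache.kafka.clients.consumer.RangeAssignor
# Alternative:
# partition.assignment.strategy=org.apache.kafka.clients.consumer.RoundRobinAssignor
```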
07-11-2018
06:41 PM
1 Kudo
@Eric Richardson Hi, could you please share the port set in the listeners property in Kafka's server.properties file? Also, I would recommend using the FQDN of the broker instead of localhost in the producer/consumer command. Thank you!
07-10-2018
08:37 PM
@sohan kanamarlapudi Hi, may I know what versions of Spark and Kafka you are using? Thanks!
06-26-2018
06:25 PM
@Mahesh Glad it worked! Thanks!
06-25-2018
10:51 PM
@Mahesh Hi, Did the suggestion help? Thanks!
06-22-2018
10:28 PM
2 Kudos
@Mahesh Hi, could you please give the Kafka broker hostname:port instead of zookeeper_host:2181 in the command? The following error: WARN NetworkClient: Bootstrap broker ip-10-28-3-35.ec2.internal:2181 disconnected
means that the client is unable to reach a Kafka broker, because ZooKeeper connection information was given instead. Please let me know how it goes. Thanks!
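A hedged sketch of the corrected command (the broker host, port, and topic are placeholders; 6667 is the usual HDP broker port):

```shell
# Pass a Kafka broker to --bootstrap-server, not the ZooKeeper host:2181
./kafka-console-consumer.sh \
  --bootstrap-server broker-fqdn:6667 \
  --topic my-topic --from-beginning
```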
06-21-2018
07:33 PM
@Pankaj Goel Hi, Please refer section 'Bolts' of https://storm.apache.org/releases/1.2.2/Concepts.html and let me know if you have any question. Thanks!
06-20-2018
04:29 AM
@Jasper Hi, it looks like you are hitting https://issues.apache.org/jira/browse/KAFKA-6130, which is fixed in Kafka 1.1.0. As you are using HDF 3.1.1, it ships with Kafka 1.0.0. Thank you!
06-20-2018
04:24 AM
@Jasper Hi, as far as I know, the verifiable consumer is designed for system testing, and it emits consumer events as JSON objects. Also, --group-id is a mandatory option, and if you check the code, it uses the subscribe method, so I don't think we can target a specific partition. Thank you!
06-19-2018
09:20 PM
@L Nin I am glad it worked! Thanks!
06-15-2018
08:24 PM
1 Kudo
@L Ning Hi, have you given the other user read permission on the Kafka service keytab? In the JAAS file you are using the Kafka service keytab: keyTab="/etc/security/keytabs/kafka.service.keytab" The recommended approach would be to use a user keytab and principal in your JAAS file. Thank you!
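For illustration only (the keytab path and principal below are placeholders, not values from your system), a client JAAS section using a user keytab might look like:

```
KafkaClient {
   com.sun.security.auth.module.Krb5LoginModule required
   useKeyTab=true
   storeKey=true
   keyTab="/etc/security/keytabs/myuser.keytab"
   principal="myuser@EXAMPLE.COM"
   serviceName="kafka";
};
```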
06-15-2018
08:17 PM
@Erkan ŞİRİN Can you please try running the following command: /usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh --bootstrap-server hadooptest01.datalonga.com:6667 --topic erkan_deneme --new-consumer --from-beginning
06-13-2018
09:26 PM
@Erkan ŞİRİN May I know what Kafka version you are on? The old consumer API used ZooKeeper to store offsets, but recent versions have an option to enable dual commit, committing offsets to both Kafka and ZooKeeper. Also, could you please share your server.properties for a quick review? Thanks!
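If your version supports it, the old-consumer migration settings would look roughly like this in the consumer properties (a hedged sketch — verify against your Kafka version's documentation):

```properties
# Commit offsets to Kafka while still dual-committing to ZooKeeper
offsets.storage=kafka
dual.commit.enabled=true
```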
06-08-2018
07:55 AM
@sangeetha sivakumar As far as I know, the byte capacity should be 80% of the total heap space available to the process. The following link might help if you need more details on this: http://flume.apache.org/FlumeUserGuide.html#memory-channel Thanks!
06-07-2018
11:30 PM
@sangeetha sivakumar Hi, maybe you can try matching files against a regex pattern to read only specific files: filter.pattern For the memory issue, have you tried increasing the byte capacity of the channel? Currently I can see it is: FtpAgent.channels.MemChannel.byteCapacity=6912212 Thanks!
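As a sketch (the source name below is a placeholder, and filter.pattern comes from the FTP source plugin you are using, so check its documentation for the exact key):

```properties
# Example: only pick up .csv files
FtpAgent.sources.ftpSrc.filter.pattern = .*\.csv
# Raise the channel's byte capacity (in bytes):
FtpAgent.channels.MemChannel.byteCapacity = 100000000
```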
06-07-2018
10:45 PM
@Kennon Rodrigues Hi,
1. I would recommend running with FQDNs instead of the IP in --bootstrap-server.
2. Are you facing the same issue with any test topic that you create? Could you please also describe the '__consumer_offsets' topic?
3. You can also turn on client-side debugging by changing the log level to DEBUG in the tools-log4j.properties file: log4j.rootLogger=DEBUG, stderr
Thanks!
11-13-2017
07:45 PM
@Swaapnika Guntaka As the listener port is set to 6667, it should be used as the bootstrap server port in your producer code, as well as when you run the console producer/consumer. Kindly run the producer with port 6667 and let us know. The error in the server log is related to metrics; could you please provide the complete log and also server.properties? Thanks!
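To illustrate (the broker FQDN and topic are placeholders), both console tools should use the listener port 6667:

```shell
./kafka-console-producer.sh --broker-list broker-fqdn:6667 --topic test-topic
./kafka-console-consumer.sh --bootstrap-server broker-fqdn:6667 \
  --topic test-topic --from-beginning
```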
11-09-2017
06:32 PM
@Swaapnika Guntaka Could you please confirm what port is set in the listeners property of Kafka? Also, make sure the producer is actually producing messages; you can wait for a while after producing to see whether it times out or not. Thanks!