Member since: 06-27-2019
Posts: 147
Kudos Received: 9
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2460 | 01-31-2022 08:42 AM |
|  | 628 | 11-24-2021 12:11 PM |
|  | 1057 | 11-24-2021 12:05 PM |
|  | 1989 | 10-08-2019 10:00 AM |
|  | 2516 | 10-07-2019 12:08 PM |
09-25-2019
06:59 AM
@Peruvian81 Hi,

If you connect to the ZooKeeper CLI using:

`ZK_HOME/zookeeper-client/bin/zkCli.sh -server <zkHost>:<zkPort>`

then you can run:

`get /brokers/ids/<brokerID>`

and check in the "endpoints" field where the Kafka brokers are listening. Then try using that security-protocol `ip:port` to connect to the brokers. Also, make sure the topic has all of its replicas in sync by running the `describe` command. Let us know how it goes.
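The znode payload returned by `get /brokers/ids/<brokerID>` is JSON, so a quick way to pull out the listener list is a `grep` over the captured payload. The payload below is an illustrative sample (broker id, hostname, and port are made up), not output from a real cluster:

```shell
# Illustrative payload from `get /brokers/ids/1001` (hypothetical host/port values)
BROKER_JSON='{"listener_security_protocol_map":{"PLAINTEXT":"PLAINTEXT"},"endpoints":["PLAINTEXT://kafka1:6667"],"host":"kafka1","port":6667}'

# Extract the endpoints array: this is where the broker is actually listening
echo "$BROKER_JSON" | grep -o '"endpoints":\[[^]]*\]'
```

The endpoint printed here is what the client should use as `ip:port`, with the matching security protocol.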
09-13-2019
12:54 PM
1 Kudo
@mike_bronson7 Can you check whether the meta.properties file, usually located under /kafka-logs, has the right value? It should contain `broker.id=1`. We can't add ids manually to ZooKeeper; these are ephemeral nodes, created each time a broker registers itself in ZooKeeper. Can you provide the startup lines from the server.log? Thanks.
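To make the check concrete, this sketch writes a sample meta.properties under /tmp (the real one lives under your log.dirs, e.g. /kafka-logs/meta.properties) and greps the stored broker id; the file contents are an assumption based on the usual format:

```shell
# Sample meta.properties as Kafka writes it (illustrative copy under /tmp)
cat > /tmp/meta.properties <<'EOF'
version=0
broker.id=1
EOF

# The value printed here must match broker.id in server.properties
grep '^broker.id=' /tmp/meta.properties
```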
09-09-2019
12:14 PM
Hi @iamabug Before running the consumer, could you please add this to your terminal:

`export KAFKA_OPTS="-Dsun.security.krb5.debug=true"`

After that, share the DEBUG output for further review. Thanks.
09-09-2019
12:06 PM
Hello @nhemamalini Are you able to send/consume data to/from the topics by using the Kafka command line?

Producer: `bin/kafka-console-producer.sh --broker-list node1:6667 --topic <topicName>`

Consumer: `bin/kafka-console-consumer.sh --bootstrap-server node1:6667 --topic <topicName> --from-beginning`

Thanks.
08-23-2019
12:50 PM
Hi @BaluSaiD,

If the topics you're producing/consuming data to/from have at least 2 of 3 in-sync replicas and `min.insync.replicas=2`, then it should be OK. If some topics have just 1 replica and this broker dies, you will not be able to produce/consume data from those topics.

Properties to keep in mind:

1. Server side: `min.insync.replicas`: When a producer sets acks to "all" (or "-1"), `min.insync.replicas` specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, the producer will raise an exception (either `NotEnoughReplicas` or `NotEnoughReplicasAfterAppend`). Used together, `min.insync.replicas` and `acks` allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set `min.insync.replicas` to 2, and produce with `acks` of "all". This ensures the producer raises an exception if a majority of replicas do not receive a write.

2. Producer side: `acks`: The number of acknowledgments the producer requires the leader to have received before considering a request complete. This controls the durability of records that are sent. The following settings are allowed:

- `acks=0`: The producer will not wait for any acknowledgment from the server at all. The record is immediately added to the socket buffer and considered sent. No guarantee can be made that the server has received the record in this case, and the `retries` configuration will not take effect (as the client generally won't know of any failures). The offset given back for each record will always be set to -1.
- `acks=1`: The leader will write the record to its local log but will respond without awaiting full acknowledgment from all followers. In this case, should the leader fail immediately after acknowledging the record but before the followers have replicated it, the record will be lost.
- `acks=all`: The leader will wait for the full set of in-sync replicas to acknowledge the record. This guarantees the record will not be lost as long as at least one in-sync replica remains alive. This is the strongest available guarantee and is equivalent to `acks=-1`.

Regards, Manuel.
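Put together, the durable setup described above boils down to two small config fragments. The property names are standard Kafka; the values are just the example from this answer:

```properties
# Topic/broker side (topic-level config, or the server.properties default)
min.insync.replicas=2

# Producer side (producer.properties or producer client config)
acks=all
```

With this combination, a write only succeeds once the leader and at least one follower have it, so losing a single broker does not lose acknowledged records.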
08-20-2019
02:17 PM
Hello, Regarding your questions:

1. Does the full cluster restart distribute the topic partitions evenly across 5 nodes, or will all the topics already created remain the same unless we run the partition reassignment tool `kafka-reassign-partitions.sh`?

All the topics already created will remain the same; running the reassignment tool is the available approach.

2. Doesn't Kafka have an internal mechanism to automatically reassign the partitions without running `kafka-reassign-partitions.sh`?

Unfortunately, this task has to be done manually using the reassign-partitions tool. You can check how this tool works by looking at the video below:

https://www.youtube.com/watch?v=TjKtCKsUjbQ

Hope this helps. Regards, Manuel.
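For reference, `kafka-reassign-partitions.sh --execute` takes a JSON plan like the sketch below; the topic name, partition numbers, and broker ids here are hypothetical and need to match your cluster:

```json
{
  "version": 1,
  "partitions": [
    {"topic": "my-topic", "partition": 0, "replicas": [1, 2, 3]},
    {"topic": "my-topic", "partition": 1, "replicas": [2, 3, 4]},
    {"topic": "my-topic", "partition": 2, "replicas": [3, 4, 5]}
  ]
}
```

You would save this as, say, reassign.json, run `kafka-reassign-partitions.sh --zookeeper <zkHost>:<zkPort> --reassignment-json-file reassign.json --execute`, and later check progress with `--verify` and the same file.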
08-20-2019
01:49 PM
Hi Mike, The controller is responsible for administrative operations, including assigning partitions to brokers and monitoring for broker failures. If no leaders are being assigned, one thing you can check is the controller status in the ZooKeeper CLI. To perform ZooKeeper CLI operations, start the ZooKeeper client with `bin/zkCli.sh`, then execute:

`get /controller`

and see if the output shows an active controller. If the output shows "null", you can run `rmr /controller`; this will trigger a new controller assignment. Finally, make sure you don't have authorization issues by checking the server.log files during broker restart. Regards, Manuel.
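For reference, a healthy `get /controller` returns a small JSON payload; the sketch below pulls the active controller's broker id out of a captured sample (the id and timestamp are made up):

```shell
# Illustrative `get /controller` payload (hypothetical values)
CONTROLLER_JSON='{"version":1,"brokerid":1001,"timestamp":"1566330000000"}'

# An active controller shows a numeric brokerid; a missing or null payload means no controller
echo "$CONTROLLER_JSON" | grep -o '"brokerid":[0-9]*'
```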
08-20-2019
01:22 PM
Hi Saurav, Can you please enable client DEBUG and check whether the security.protocol is being passed properly? It may also help you get more detailed information about the timeout issue. This can be done by editing the following file:

/etc/kafka/conf/tools-log4j.properties

Change the following line from:

`log4j.rootLogger=WARN, stderr`

to:

`log4j.rootLogger=DEBUG, stderr`

After that, run the consumer again and let us know how it goes. Thanks, Manuel.
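The edit above can be scripted. This sketch works on a throwaway copy under /tmp so it can run anywhere; on a real node you would point the `sed` at /etc/kafka/conf/tools-log4j.properties directly:

```shell
# Start from the real file if present, otherwise a stand-in with the default line
cp /etc/kafka/conf/tools-log4j.properties /tmp/tools-log4j.properties 2>/dev/null || \
  printf 'log4j.rootLogger=WARN, stderr\n' > /tmp/tools-log4j.properties

# Flip WARN to DEBUG on the rootLogger line
sed -i 's/^log4j.rootLogger=WARN/log4j.rootLogger=DEBUG/' /tmp/tools-log4j.properties
grep '^log4j.rootLogger' /tmp/tools-log4j.properties
```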
06-07-2018
03:39 PM
In order to list consumers, you can use:

`/usr/hdp/current/kafka-broker/bin/kafka-consumer-groups.sh --zookeeper <zkHost>:<zkPort> --list`

Note: This will only show information about consumers that use ZooKeeper (not those using the Java consumer API).

`/usr/hdp/current/kafka-broker/bin/kafka-consumer-groups.sh --bootstrap-server <brokerHost>:<brokerPort> --list`

Note: This will only show information about consumers that use the Java consumer API (non-ZooKeeper-based consumers).

If you're using Kerberos:

`/usr/hdp/current/kafka-broker/bin/kafka-consumer-groups.sh --bootstrap-server <brokerHost>:<brokerPort> --list --new-consumer --command-config /tmp/test.properties`

where /tmp/test.properties contains:

`security.protocol=PLAINTEXTSASL`

If this answers your query/issue, please mark this HCC thread as answered by clicking the "Accept" link on the correct answer. That way it will help other HCC users quickly find the answers.
06-05-2018
08:34 PM
1 Kudo
This is because the kvno has changed for that principal in the Kerberos database after you created a new keytab for the same principal. You can confirm this by running:

`kadmin.local: get_principal <principal_name>`

The kvno there will be different from the one in spnego.service.keytab (check with `klist -kte <keytab>`). What I suggest in this scenario is to `cp spnego.service.keytab user.name.keytab`, then set permissions on that keytab accordingly. Hope this helps.
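To make the comparison concrete, here is a sketch over captured sample lines; the principal, date, and kvno values are made up. In practice the first value comes from `kadmin.local: get_principal` and the second from the first column of `klist -kte` output:

```shell
# Hypothetical values: kvno from the KDC vs. kvno stored in the keytab
KDC_KVNO=5
KEYTAB_LINE='   3 02/19/2018 13:38:56 HTTP/host.example.com@EXAMPLE.COM (aes256-cts-hmac-sha1-96)'
KEYTAB_KVNO=$(echo "$KEYTAB_LINE" | awk '{print $1}')

# A mismatch means the keytab is stale relative to the Kerberos database
if [ "$KDC_KVNO" -ne "$KEYTAB_KVNO" ]; then
  echo "kvno mismatch: regenerate or copy a current keytab"
fi
```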