Member since
10-04-2017
113
Posts
11
Kudos Received
9
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 18193 | 07-03-2019 08:34 AM |
| | 2087 | 10-31-2018 02:16 AM |
| | 12810 | 05-11-2018 01:31 AM |
| | 8434 | 02-21-2018 03:25 AM |
| | 2925 | 02-21-2018 01:18 AM |
08-25-2021
01:07 AM
Hi Adar: if both the WAL segments and the CFiles are copied during a tablet copy, then the follower tablet will also flush WAL data to disk when it grows to 8 MB. In my opinion, there is then no difference between the leader tablet and the follower tablet during reading and writing. Is that right?
06-29-2021
04:40 AM
I have upgraded from 7.1.1 to 7.1.6 and "Upgrade Ranger database and apply patches" is grayed out.
07-13-2020
01:47 AM
A very late reply to this topic, just to document a similar error I had using a Kafka client from a different Kerberos realm:

[2020-07-13 09:47:08,678] ERROR [Consumer clientId=consumer-1, groupId=console-consumer-57017] Connection to node -1 failed authentication due to: An error: (java.security.PrivilegedActionException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Fail to create credential. (63) - No service creds)]) occurred when evaluating SASL token received from the Kafka Broker. Kafka Client will go to AUTHENTICATION_FAILED state. (org.apache.kafka.clients.NetworkClient)

Debugging showed:

error code is 7
error Message is Server not found in Kerberos database
crealm is REALM1.DOMAIN.COM
cname is rzuidhof@REALM1.DOMAIN.COM
sname is krbtgt/REALM2.DOMAIN.COM@REALM1.DOMAIN.COM

The situation is an HDP cluster being accessed by a client on a host joined to a different (IPA) domain, with no trust. This works without a trust; I think a trust is only needed to use accounts from a different domain, but we used keytabs and interactive kinit from REALM1 on the REALM2 hosts to access services in REALM1.

All that was needed to get this to work was one additional line in /etc/krb5.conf on the REALM2 servers, under [domain_realm]:

realm1.domain.com = REALM1.DOMAIN.COM

We already had, under [libdefaults]:

dns_lookup_realm = true
dns_lookup_kdc = true

We also arranged DNS forwarding, but no reverse lookups.
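Putting the pieces together, the relevant parts of /etc/krb5.conf on the REALM2 hosts would look roughly like this. This is a sketch using the example realm and domain names from the post; the leading-dot entry for subdomains is a common addition and an assumption, not something from the original:

```ini
[libdefaults]
    # Let the client derive realm and KDC information from DNS
    dns_lookup_realm = true
    dns_lookup_kdc = true

[domain_realm]
    # Map hostnames under realm1.domain.com to the REALM1 realm, so the
    # client asks the right KDC for a service ticket instead of its own.
    realm1.domain.com = REALM1.DOMAIN.COM
    # Assumed: the leading-dot form covers subdomain hosts as well.
    .realm1.domain.com = REALM1.DOMAIN.COM
```

Without the [domain_realm] mapping, the client guesses the service's realm from its own defaults, which produces the "Server not found in Kerberos database" error for krbtgt/REALM2.DOMAIN.COM@REALM1.DOMAIN.COM seen above.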
05-29-2020
07:14 AM
I followed similar steps but had the same issue. I had to remove the entire KTS/KMS installation and start from scratch, which fixed the issue, but this time I added only one server first and then added the other.
12-18-2019
03:57 PM
Hi @pdev ,
Wonderful to hear that! Thanks for marking this thread as resolved!
Cheers,
Li
07-03-2019
08:34 AM
2 Kudos
@satz We were able to resolve this. We had the Kerberos auth principals in the default Kafka group while all the brokers were in a different config group. Adding the auth principals to the Kafka config group solved the issue.
05-29-2019
06:02 AM
I'm using Cloudera Manager. Once I added the node, I needed to provide the URL for the Cloudera Manager packages; once this step finished, an automatic step kicked off to distribute the parcels.
12-05-2018
10:53 PM
1 Kudo
Hi,

NOT_LEADER_FOR_PARTITION is raised in Kafka when a client tries to access a topic partition on a broker that is not the leader for that partition. For example, a producer can only send new messages to the replica that is the leader of the partition; only after the leader receives the messages are they replicated to the other brokers that hold replicas of that partition. This can happen when the client/producer has stale information about which broker is the leader for a partition; in that case the NotLeaderForPartitionException is thrown by the broker the client is connected to. Another possible reason is a recent leader re-election (such as when restarting brokers one by one, as in your case: for a short time the leader partitions on the unavailable broker were offline, and new leaders were elected for them). Ideally it is harmless, because the client retries and sends the request to the new leader. Reference: http://kafka.apache.org/documentation/#replication

On what basis does Kafka leader election happen? Kafka maintains a set of in-sync replicas (ISR), stored in ZooKeeper; only members of this set are eligible for election as leader. Quoting the Kafka documentation: "Each partition has one server which acts as the 'leader' and zero or more servers which act as 'followers'. The leader handles all read and write requests for the partition while the followers passively replicate the leader. If the leader fails, one of the followers will automatically become the new leader. Each server acts as a leader for some of its partitions and a follower for others so load is well balanced within the cluster." Reference: https://kafka.apache.org/documentation#design_replicatedlog

What are the reasons that trigger a partition leader election?

- When the active leader crashes or fails, a new leader is selected from one of the in-sync replicas.
- If you have enabled auto.leader.rebalance.enable=true and the preferred leader is not the current leader, Kafka will try to make it the leader again once it rejoins the in-sync replica set.

Hope it helps. Let us know if you have any questions.

Thanks,
Jerry
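The election rules above (a new leader must come from the in-sync replica set, and with auto.leader.rebalance.enable the preferred leader is restored once it is back in sync) can be sketched as a simplified model. This is illustrative Python, not Kafka's actual controller code; the function name and shape are made up for the example:

```python
def elect_leader(replicas, isr, failed_broker, prefer_preferred=True):
    """Pick a new partition leader after `failed_broker` fails.

    replicas: the assigned replica list; replicas[0] is the "preferred" leader.
    isr:      brokers currently in sync with the old leader.
    Only ISR members are eligible, mirroring Kafka's clean leader election.
    """
    candidates = [b for b in isr if b != failed_broker]
    if not candidates:
        # No in-sync replica left: with unclean election disabled, the
        # partition stays offline until an ISR member comes back.
        return None
    preferred = replicas[0]
    if prefer_preferred and preferred in candidates:
        # Models auto.leader.rebalance.enable=true: leadership moves back
        # to the preferred replica whenever it is in the ISR.
        return preferred
    # Otherwise take the first live in-sync replica in assignment order.
    return next(b for b in replicas if b in candidates)

# Partition assigned to brokers [1, 2, 3]; the preferred leader, broker 1, dies.
print(elect_leader([1, 2, 3], isr=[1, 2, 3], failed_broker=1))  # -> 2
```

This also shows why the exception is transient: once the election completes, the client refreshes its metadata and retries against the new leader.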
10-31-2018
02:16 AM
This is because the Navigator upgrade generally takes a long time, depending on the number of objects and relations you have. Increasing the Navigator heap size can help; the calculation for the required heap is available on the Cloudera site.