Member since: 05-09-2017
Posts: 107
Kudos Received: 7
Solutions: 6

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3157 | 03-19-2020 01:30 PM |
| | 16249 | 11-27-2019 08:22 AM |
| | 8782 | 07-05-2019 08:21 AM |
| | 15382 | 09-25-2018 12:09 PM |
| | 5821 | 08-10-2018 07:46 AM |
03-24-2020
01:38 PM
@Shelton I am still getting the same error. How can I verify that the SERVER_JVMFLAGS have taken effect? I don't see it in the running config (ps -ef | grep -i zookeeper), and I don't see it in zoo.cfg either.
[zk: xxx.unx.sas.com(CONNECTED) 0] addauth digest super:password
[zk: xxx.unx.sas.com(CONNECTED) 1] ls /kafka
kafka-acl kafka-acl-changes kafka-acl-extended kafka kafka-acl-extended-changes
[zk: xxx.unx.sas.com(CONNECTED) 1] ls /kafka-acl
[Group, Cluster, Topic, TransactionalId, DelegationToken]
[zk: xxx.unx.sas.com(CONNECTED) 2] deleteall /kafka-acl/Topic
Authentication is not valid : /kafka-acl/Topic
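For reference, a minimal sketch of how I could check whether the property actually reached the running ZooKeeper JVM, assuming the flag set in SERVER_JVMFLAGS is the superDigest system property and that the server runs the standard QuorumPeerMain main class (both assumptions about the install):

```bash
# The flag should show up on the process command line if SERVER_JVMFLAGS was
# exported (e.g. in zookeeper-env.sh) before the server was started:
ps -ef | grep -i '[z]ookeeper' | grep -o 'zookeeper.DigestAuthenticationProvider.superDigest=[^ ]*'

# Or dump the system properties of the running JVM directly
# (assumes a single ZooKeeper process on the host and jinfo on the PATH):
jinfo -sysprops "$(pgrep -f QuorumPeerMain)" | grep -i superDigest
```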
03-24-2020
09:46 AM
Can you explain at a high level, if possible, what these steps are doing and why we are doing them? Technically there is an ACL under my name, so when I get a token as myself I should be able to delete the ACLs.
03-24-2020
06:56 AM
How can I delete an ACL in ZooKeeper? I have seen a blog that outlines the steps on Hortonworks, but I am not using Hortonworks, and zookeeper.set.acl is false.
[desind@zookeeper1~]$ zookeeper-shell localhost:2181 rmr /kafka-acl/Topic
Connecting to localhost:2181
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
Authentication is not valid : /kafka-acl/Topic
[desind@zookeeper-1~]$ zookeeper-shell localhost:2181 getAcl /kafka-acl/Topic
Connecting to localhost:2181
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
'world,'anyone : r
'sasl,'desind : cdrwa
I need to delete the 'sasl,'desind : cdrwa entry.
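A rough sketch of the super-user digest route that is usually suggested for SASL-owned znodes; the classpath, password, and file locations below are placeholders for my setup, not a verified recipe:

```bash
# 1. Generate a digest for the built-in "super" user (the class ships with ZooKeeper;
#    the classpath is a placeholder for your install):
java -cp "$ZOOKEEPER_HOME/lib/*" \
  org.apache.zookeeper.server.auth.DigestAuthenticationProvider super:MySecret
# prints something like:  super:MySecret->super:<base64-digest>

# 2. Put the digest into SERVER_JVMFLAGS (e.g. in zookeeper-env.sh) and restart ZooKeeper:
export SERVER_JVMFLAGS="-Dzookeeper.DigestAuthenticationProvider.superDigest=super:<base64-digest>"

# 3. Authenticate as super in the shell, then the SASL-owned znode can be removed
#    (deleteall on newer shells, rmr on older ones):
zookeeper-shell localhost:2181 <<'ZK'
addauth digest super:MySecret
deleteall /kafka-acl/Topic
ZK
```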
Labels:
- Apache Kafka
- Apache Zookeeper
03-19-2020
01:30 PM
I was able to resolve this issue after a lot of work. Records in Kafka had null values, and the S3 sink connector cannot write null values to an S3 bucket, so it failed with this error. We were able to dig deeper once we changed flush.size to 1; we then saw a different error, and that made us check for null values. We developed a patch that fixed the issue: the S3 connector now ignores null values. I don't know why the Confluent SMT did not work.
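For anyone hitting the same thing, a sketch of the debugging step mentioned above: temporarily dropping flush.size to 1 through the Connect REST API so each record is flushed (and fails) on its own. The connector name, Connect host, topic, and bucket are placeholders, and most of the required S3 sink settings are omitted here:

```bash
# Update only the relevant parts of the connector config for debugging
# (PUT replaces the whole config, so the real call needs the full set of settings):
curl -s -X PUT http://connect-host:8083/connectors/s3-sink/config \
  -H "Content-Type: application/json" \
  -d '{
        "connector.class": "io.confluent.connect.s3.S3SinkConnector",
        "topics": "my-topic",
        "s3.bucket.name": "my-bucket",
        "flush.size": "1"
      }'
```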
02-23-2020
10:56 AM
We recently upgraded Kafka to 2.3.0, and after that one of our connectors is failing with the error below. org.apache.kafka.connect.errors.ConnectException: org.apache.kafka.connect.errors.SchemaProjectorException: Switch between schema-based and schema-less data is not supported. Any ideas, anyone? I have already looked at and implemented https://issues.redhat.com/browse/DBZ-235, but no luck. Nothing has changed in the schema. We don't think it's a tombstone issue: I checked where the connector stopped, looked at the events, and it's not a tombstone.
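For reference, one way to eyeball the records near the point where a connector stopped (broker, topic, partition, and offset below are placeholders, not the actual values from our cluster):

```bash
# Read a handful of records starting at the offset where the connector stalled,
# printing keys so tombstones (null values) are easy to spot:
kafka-console-consumer --bootstrap-server broker1:9092 \
  --topic my-topic --partition 0 --offset 123456 \
  --property print.key=true --max-messages 5
```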
Labels:
- Apache Kafka
11-27-2019
10:25 AM
Now that the move has stopped/completed, verify that there are no under-replicated or offline partitions, and that there are 3 replicas all in sync (depending on your replication factor). You can delete those partitions manually and just restart the broker. If replicas are out of sync, they should come back in sync if unclean.leader.election is true.
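Something like the following can be used for that check; the broker address and topic name are placeholders, and you may need the kafka-topics variant (--zookeeper vs. --bootstrap-server) that matches your version:

```bash
# Partitions whose ISR is smaller than the replica set:
kafka-topics --bootstrap-server broker1:9092 --describe --under-replicated-partitions

# Partitions that currently have no live leader (offline):
kafka-topics --bootstrap-server broker1:9092 --describe --unavailable-partitions

# Full replica/ISR view for the topic that was being moved:
kafka-topics --bootstrap-server broker1:9092 --describe --topic my-topic
```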
11-27-2019
08:22 AM
That's weird. So in log_dirs specify any, any, any and see if that works. Example:
{
  "version": 1,
  "partitions": [
    { "topic": "foo", "partition": 1, "replicas": [1, 2, 3], "log_dirs": ["any", "any", "any"] }
  ]
}
This is what the documentation says: "Broker will cancel existing movement of the replica if 'any' is specified as destination log directory."
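As a sketch, assuming the plan above is saved to reassign-any.json and you are on the ZooKeeper-based tooling (file name and connection string are placeholders):

```bash
# Apply the modified plan with "any" as the destination log directories:
kafka-reassign-partitions --zookeeper zk1:2181 \
  --reassignment-json-file reassign-any.json --execute
```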
11-26-2019
07:06 PM
Yes. What we are doing here is stopping the reassignment that is not progressing. To do that, you run it with the original JSON file, and it will stop and revert back.
11-26-2019
06:38 PM
To stop the move, run the reassignment again with the --execute option using the original JSON:
{
  "version": 1,
  "partitions": [
    { "topic": "XXXXX", "partition": 1, "replicas": [3, 0, 8], "log_dirs": ["/data1/kafka", "/data3/kafka", "/data1/kafka"] }
  ]
}
Then, instead of moving it on the same broker, you can try to move the partition to a different broker.
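For example, assuming the original plan is saved in original-reassignment.json (file name and ZooKeeper connection string are placeholders):

```bash
# Re-run with the original assignment to stop the stuck move:
kafka-reassign-partitions --zookeeper zk1:2181 \
  --reassignment-json-file original-reassignment.json --execute

# Check that the reassignment is no longer in progress:
kafka-reassign-partitions --zookeeper zk1:2181 \
  --reassignment-json-file original-reassignment.json --verify
```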