Member since: 06-13-2016
Posts: 21
Kudos Received: 16
Solutions: 6
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 4472 | 09-13-2017 05:19 AM |
| 2379 | 05-11-2017 10:59 AM |
| 1305 | 01-11-2017 09:06 AM |
| 2505 | 09-23-2016 06:30 PM |
| 5288 | 09-08-2016 05:23 AM |
09-13-2017
05:19 AM
1 Kudo
@mliem This looks like an authorization issue. We need to add ACLs for user alice.
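For reference, a minimal sketch of granting ACLs with kafka-acls.sh; the ZooKeeper address and topic name below are placeholders, not values from this thread:

```
# Grant alice read/write access on a topic (run on a Kafka node as the kafka admin user).
# zk-host:2181 and my-topic are placeholders; use your own ZooKeeper quorum and topic.
bin/kafka-acls.sh \
  --authorizer-properties zookeeper.connect=zk-host:2181 \
  --add \
  --allow-principal User:alice \
  --operation Read --operation Write \
  --topic my-topic

# Verify the ACLs that were created.
bin/kafka-acls.sh \
  --authorizer-properties zookeeper.connect=zk-host:2181 \
  --list --topic my-topic
```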
05-11-2017
10:59 AM
1 Kudo
@Sebastian No, we have to migrate one node at a time.
01-11-2017
09:06 AM
1 Kudo
@Kristopher Kane Apache Kafka's old Scala-based clients do not support security (SASL/SSL) features; only the new Java-based clients do. In HDP, we patched the old Scala clients to support security, so the HDP 2.5.3.0-37 storm-kafka artifact depends on HDP Kafka. We suggest you use HDP Kafka for the security features. Yes, if you are unable to use HDP Kafka, then you have to use the Apache artifacts for both storm-kafka and Kafka.
10-10-2016
05:41 AM
We don't have any script/tool for this, but during server startup Kafka dumps all the configs to the log file.
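As a quick sketch, you can pull that dumped configuration block out of the startup log; the log path below is an assumption, so adjust it to wherever your broker writes its server log:

```
# The broker logs a "KafkaConfig values:" block at startup listing every config.
# /var/log/kafka/server.log is an assumed path; use your broker's actual log file.
grep -A 200 "KafkaConfig values" /var/log/kafka/server.log | less
```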
10-06-2016
01:18 PM
1 Kudo
Looks like your host name is not getting resolved. Please check your host name, and also enable producer logs for more info.
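A minimal sketch of checking resolution from the producer machine; broker-host is a placeholder for the host name shown in your error:

```
# Verify the broker host name resolves and is reachable from the producer machine.
# broker-host is a placeholder; use the name from your producer error/metadata.
hostname -f
nslookup broker-host
ping -c 3 broker-host
```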
10-04-2016
11:21 PM
In Kafka, topic/partition data is stored in data directories. These directory locations are configured using the "log.dirs" config property. We can configure one or more directory locations, and Kafka balances the partition directories across the given locations. Normally we start with one directory location; as the data size grows, we may need to add more disks.
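For illustration, a sketch of what this looks like in server.properties; the mount points are placeholders:

```
# server.properties -- one or more comma-separated data directories.
# /data1 and /data2 are placeholder mount points.
log.dirs=/data1/kafka-logs,/data2/kafka-logs
```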
We can append a new directory location to the existing "log.dirs" config property. After a server restart, Kafka uses the new directory location for new partitions, but it does not automatically move existing partition directories to the new location, i.e. it does not auto-balance partitions across directory locations. Sometimes we want to move some partition data to a different location. We have a few approaches for this.

Approach 1: Delete the existing data directory contents and configure the new data directory locations. In this approach, Kafka replicates the partition data from the other members of the cluster. The complete partition data will be replicated from the beginning, and all partitions are evenly allocated across the directory locations. Replication time depends on the data size: if we have huge data, the replica may take more time to join the ISR. This also puts a lot of load on the network/cluster, which may cause some problems for the Kafka cluster; we may see some ISR changes and client errors. This approach should be fine for small clusters (GBs of data).

Note: In Kafka, the broker id is stored in the log.dir/meta.properties file. If we have not configured broker.id, then by default Kafka generates a new broker id. To avoid this, retain the existing meta.properties file in the log.dirs directory, as sketched below.
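For reference, a sketch of what meta.properties typically contains; the broker id value is a placeholder:

```
# <log.dir>/meta.properties -- preserves the broker's identity across restarts.
# 1001 is a placeholder broker id.
version=0
broker.id=1001
```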
Approach 2: Move the partition directories to the new data directory (without copying the checkpoint files). This is similar to the approach above, but here Kafka only replicates the moved partitions.

Approach 3: Move the partition directories and split the checkpoint files. Each data directory contains three checkpoint files, namely replication-offset-checkpoint, recovery-point-offset-checkpoint and cleaner-offset-checkpoint. These files contain the last committed offset, log end checkpoint and cleaner checkpoint details for the partitions available in that directory. Each file contains a version number, the number of entries, and one row per entry. We need to copy/create these files in the new directory and adjust the entries in both directories (old and new). This may be tedious if we have a large number of partitions, but it is the best approach if we have huge data: with this approach the replicas will join the ISR quickly, and the load on the cluster/network will be less.
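To illustrate, a hedged sketch of moving one partition and what the checkpoint file layout looks like; the paths, topic, partition and offsets are made up:

```
# Move one partition directory to the new data dir (with the broker stopped).
# Paths, topic and partition number are placeholders.
mv /data1/kafka-logs/my-topic-1 /data2/kafka-logs/

# Each data dir's checkpoint files follow this layout:
#   <version>
#   <number of entries>
#   <topic> <partition> <offset>     (one row per partition in that directory)
# After the move, delete the "my-topic 1 <offset>" row from the old directory's
# replication-offset-checkpoint and recovery-point-offset-checkpoint, add it to
# the new directory's files, and fix the entry-count line in both.
```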
09-23-2016
06:30 PM
3 Kudos
The LinkedIn article is old. The Kafka documentation recommends the G1 collector: http://kafka.apache.org/documentation.html#java
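A hedged sketch of enabling G1 for the broker via the standard KAFKA_JVM_PERFORMANCE_OPTS / KAFKA_HEAP_OPTS hooks; the heap size and pause/occupancy targets below are illustrative, not values from this thread:

```
# Example G1 settings along the lines of the Kafka docs' Java section.
# Heap size and GC targets are illustrative; tune them for your cluster.
export KAFKA_HEAP_OPTS="-Xms6g -Xmx6g"
export KAFKA_JVM_PERFORMANCE_OPTS="-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35"
bin/kafka-server-start.sh config/server.properties
```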
09-08-2016
09:06 AM
Hi, the kafka-acls.sh script is used to create ACLs for Kafka users; it is not used for ZooKeeper ACLs. By design, only the broker user can modify the ZooKeeper nodes, and others can only read the znodes. This improves security around ZooKeeper. You can also use the new consumer API, which does not depend on ZooKeeper; it is available in HDP 2.5. PS: you can upvote if you are satisfied with my answer.
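For example, a sketch of consuming with the new (ZooKeeper-free) consumer via the console tool; the broker host and topic are placeholders:

```
# The new consumer talks to the brokers directly (no --zookeeper flag needed).
# broker-host:6667 and my-topic are placeholders.
bin/kafka-console-consumer.sh \
  --new-consumer \
  --bootstrap-server broker-host:6667 \
  --topic my-topic \
  --from-beginning
```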
09-08-2016
05:23 AM
2 Kudos
Hi, you have configured a PLAINTEXTSASL port on the broker side, so we need to pass the "--security-protocol PLAINTEXTSASL" option to the kafka-console-producer.sh script. You also need to pass the required JAAS file and run the kinit command. Refer to the doc below: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.3.2/bk_secure-kafka-ambari/content/ch_secure-kafka-produce-events.html
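A sketch of what that looks like end to end; the principal, keytab, JAAS path, broker host, port and topic are placeholders:

```
# Obtain a Kerberos ticket (principal and keytab are placeholders).
kinit -kt /etc/security/keytabs/kafka.service.keytab kafka/broker-host@EXAMPLE.COM

# Point the client at a JAAS config file (path is a placeholder).
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/conf/kafka_client_jaas.conf"

# Produce over the SASL port; broker-host:6667 and my-topic are placeholders.
bin/kafka-console-producer.sh \
  --broker-list broker-host:6667 \
  --topic my-topic \
  --security-protocol PLAINTEXTSASL
```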
09-07-2016
02:35 PM
2 Kudos
Looks like the issue is related to ZooKeeper permissions. You can try creating a new consumer group. We can use bin/zookeeper-shell.sh to verify the ACLs on the znodes.
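For instance, a sketch of checking the ACL on a consumer-group znode; the ZooKeeper host and group name are placeholders:

```
# Connect to ZooKeeper (zk-host:2181 is a placeholder), then run these
# commands at the interactive prompt; my-group is a placeholder group name.
bin/zookeeper-shell.sh zk-host:2181
ls /consumers
getAcl /consumers/my-group
```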