Member since 02-01-2019
650 Posts
143 Kudos Received
117 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 3508 | 04-01-2019 09:53 AM |
| | 1814 | 04-01-2019 09:34 AM |
| | 8925 | 01-28-2019 03:50 PM |
| | 1972 | 11-08-2018 09:26 AM |
| | 4484 | 11-08-2018 08:55 AM |
05-16-2018
09:39 AM
1 Kudo
@Gulshan Agivetova You can remove the directory from the dfs.datanode.data.dir property through Ambari and then do a rolling restart of the DataNodes. Make sure you don't have any missing or under-replicated blocks before restarting the next DataNode.
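As a sketch (the mount paths below are hypothetical), the property holds a comma-separated list of directories; removing one means editing that list in Ambari (HDFS > Configs > dfs.datanode.data.dir) and checking block health first:

```shell
# Hypothetical current value of dfs.datanode.data.dir:
#   /data01/hadoop/hdfs/data,/data02/hadoop/hdfs/data,/data03/hadoop/hdfs/data
# To retire /data03, set the property to:
#   /data01/hadoop/hdfs/data,/data02/hadoop/hdfs/data
# Before the rolling restart, confirm there are no missing or
# under-replicated blocks:
hdfs fsck / | grep -E 'Missing blocks|Under-replicated blocks'
```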
05-13-2018
06:29 PM
@Michael Bronson You may want to check the broker logs (server.log) at the time the Kafka process stops before changing any existing configs.
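A quick way to scan for the failure (a sketch; the log path is an assumption, check your broker's log4j configuration for the actual location):

```shell
# Pull the most recent ERROR/FATAL/shutdown lines from the broker log
grep -iE 'ERROR|FATAL|shutdown' /var/log/kafka/server.log | tail -20
```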
05-10-2018
04:28 PM
@J Koppole Since you are using Kerberos, replace localhost with the FQDN and re-run the command.
05-06-2018
06:43 PM
1 Kudo
@Mudit Kumar You can refer to this document: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_security/content/configuring_amb_hdp_for_kerberos.html
05-03-2018
10:32 AM
Can you check whether the process below is running on your NodeManager machines? /tmp/java -c /tmp/h.conf
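One way to check on each NodeManager host (a sketch):

```shell
# The [/] in the pattern keeps grep from matching its own process entry
ps -ef | grep '[/]tmp/java'
# Also check whether the suspicious files exist on disk
ls -l /tmp/java /tmp/h.conf 2>/dev/null
```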
05-02-2018
05:59 PM
@Rajesh Reddy Yes, the procedure would be the same. As an alternative, you can also look at: https://community.hortonworks.com/content/supportkb/151087/how-to-move-kafka-partition-log-directory-within-a.html
05-02-2018
05:50 PM
1 Kudo
@Rajesh Reddy We don't have any way to reset the consumer offsets in Kafka 0.10.x. You may want to try the instructions provided here: https://gist.github.com/marwei/cd40657c481f94ebe273ecc16601674b
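For reference, Kafka 0.11.0 and later added a built-in reset option to the consumer-groups tool (this does not apply to 0.10.x; the broker address, group, and topic names below are placeholders):

```shell
# Rewind a consumer group to the earliest offsets on one topic.
# Drop --execute to preview the new offsets without applying them.
kafka-consumer-groups.sh --bootstrap-server broker1:9092 \
  --group my-group --topic my-topic \
  --reset-offsets --to-earliest --execute
```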
05-02-2018
05:29 PM
1 Kudo
@Rajesh Reddy
Balancing data in Kafka is not the same as in HDFS. You'd need to use the partition reassignment tool to move partitions to the new brokers. Below is the detailed explanation from the Kafka documentation:
Adding servers to a Kafka cluster is easy, just assign them a unique broker id and start up Kafka on your new servers. However these new servers will not automatically be assigned any data partitions, so unless partitions are moved to them they won't be doing any work until new topics are created. So usually when you add machines to your cluster you will want to migrate some existing data to these machines.
The process of migrating data is manually initiated but fully automated. Under the covers what happens is that Kafka will add the new server as a follower of the partition it is migrating and allow it to fully replicate the existing data in that partition. When the new server has fully replicated the contents of this partition and joined the in-sync replica set, one of the existing replicas will delete its partition's data.
The partition reassignment tool can be used to move partitions across brokers. An ideal partition distribution would ensure even data load and partition sizes across all brokers. The partition reassignment tool does not have the capability to automatically study the data distribution in a Kafka cluster and move partitions around to attain an even load distribution. As such, the admin has to figure out which topics or partitions should be moved around.
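The workflow above can be sketched with the standard reassignment tool (topic name, ZooKeeper address, and broker ids here are hypothetical; broker 3 is the newly added one):

```shell
# List the topics whose partitions should be considered for movement
cat > topics-to-move.json <<'EOF'
{"topics": [{"topic": "my-topic"}], "version": 1}
EOF

# 1. Generate a candidate assignment spread across all brokers,
#    including the new broker id 3
kafka-reassign-partitions.sh --zookeeper zk1:2181 \
  --topics-to-move-json-file topics-to-move.json \
  --broker-list "0,1,2,3" --generate

# 2. Save the proposed assignment it prints as reassignment.json,
#    then execute it
kafka-reassign-partitions.sh --zookeeper zk1:2181 \
  --reassignment-json-file reassignment.json --execute

# 3. Check progress until every partition shows "completed successfully"
kafka-reassign-partitions.sh --zookeeper zk1:2181 \
  --reassignment-json-file reassignment.json --verify
```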
04-27-2018
08:21 AM
@raj pati I believe the communication issue was resolved after removing the external jars from the HBase lib directory.
04-25-2018
12:53 PM
1 Kudo
@Kashif Amir Since the TTL has been changed to 20 seconds, your data will expire after 20 seconds, but the space in HDFS will not be reclaimed yet. Once you run a major compaction on that table, you should be able to reclaim the space on HDFS.
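A major compaction can be triggered from the HBase shell (a sketch; 'my_table' is a placeholder for your table name):

```shell
# Rewrites the table's store files, dropping TTL-expired cells and
# reclaiming the corresponding HDFS space
echo "major_compact 'my_table'" | hbase shell
```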