In Kafka, topic partition data is stored in data directories on the broker's local disks. These locations are configured using the "log.dirs" config property, which accepts one or more comma-separated directory paths. Kafka balances new partition directories across the configured locations. Normally we start with one directory location; as the data size grows, we may need to add more disks. We can append a new directory location to the existing "log.dirs" value. After a broker restart, Kafka uses the new directory location for new partitions only. Kafka does not automatically move existing partition directories to the new location, i.e., it does not auto-balance existing partitions across directory locations.
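For example, a broker that started with a single data directory can be extended in server.properties like this (the paths here are hypothetical; use your own mount points):

```properties
# server.properties
# Before: a single data directory
#log.dirs=/data1/kafka-logs

# After adding a new disk: append the new location, comma-separated
log.dirs=/data1/kafka-logs,/data2/kafka-logs
```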

Sometimes we want to move some partition data to a different location. There are three approaches for this:

Approach 1: Delete the existing data directory contents and configure the new data directory locations

In this approach, Kafka re-replicates the partition data from the other members of the cluster. The complete partition data is replicated from the beginning, and all partitions are evenly allocated across the directory locations. Replication time depends on the data size; with huge data, replicas may take a long time to rejoin the ISR. This also puts a lot of load on the network and cluster, which may cause problems such as ISR changes and client errors. This approach should be fine for small clusters (GBs of data).
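A minimal sketch of approach 1, assuming the broker is already stopped and the data directories are /data1/kafka-logs (existing) and /data2/kafka-logs (new); both paths are examples:

```bash
# Preserve meta.properties so the broker keeps its id (see note below)
cp /data1/kafka-logs/meta.properties /tmp/meta.properties

# Delete the old contents and prepare the new directory
rm -rf /data1/kafka-logs/*
mkdir -p /data2/kafka-logs
cp /tmp/meta.properties /data1/kafka-logs/

# Add /data2/kafka-logs to log.dirs in server.properties, then restart
# the broker; it re-replicates all partitions from the other replicas.
```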

Note: In Kafka, the broker id is stored in a meta.properties file inside each log directory. If broker.id is not configured, Kafka generates a new broker id by default. To avoid this, retain the existing meta.properties file in the log.dirs directories.
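The meta.properties file is small; for a broker with id 1001 it looks roughly like this (the id value is just an example):

```properties
# /data1/kafka-logs/meta.properties
version=0
broker.id=1001
```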

Approach 2: Move partition directories to the new data directory (without copying checkpoint files)

It is similar to the approach above, but here Kafka re-replicates only the moved partitions.
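A sketch of approach 2, with the broker stopped; the topic name and paths are hypothetical:

```bash
# Move one partition's directory to the new data directory without
# touching the checkpoint files. After restart, Kafka re-replicates
# only this moved partition, not the whole directory.
mkdir -p /data2/kafka-logs
mv /data1/kafka-logs/topicA-0 /data2/kafka-logs/
```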

Approach 3: Move partition directories and split the checkpoint files

Each data directory contains three checkpoint files: replication-offset-checkpoint, recovery-point-offset-checkpoint, and cleaner-offset-checkpoint. These hold, respectively, the last committed offset, the log recovery point, and the log cleaner's position for the partitions that live in that directory. Each file contains a version number, the number of entries, and one row per entry.
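For illustration, a replication-offset-checkpoint covering three partitions might look like this (the topic names are made up). The first line is the format version, the second line is the number of entries, and each remaining row is "topic partition offset":

```
0
3
topicA 0 48291
topicA 1 47552
topicB 0 120934
```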

We need to copy or create these files in the new directory and adjust the entries in both directories (old and new): remove the moved partitions' rows from the old files, add them to the new ones, and fix the entry counts. This can be tedious with a large number of partitions, but it is the best approach when we have huge data. With this approach replicas rejoin the ISR quickly, and the load on the cluster/network is minimal. A sketch of the split follows.
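A minimal sketch of this split, assuming the broker is stopped, partition topicA-0 is moving from /data1 to /data2, and /data2 is a freshly added directory with no checkpoint entries of its own yet (all names are examples):

```bash
#!/usr/bin/env bash
set -euo pipefail

OLD_DIR=/data1/kafka-logs
NEW_DIR=/data2/kafka-logs
TOPIC=topicA
PART=0

# Move the partition directory itself
mv "${OLD_DIR}/${TOPIC}-${PART}" "${NEW_DIR}/"

for f in replication-offset-checkpoint recovery-point-offset-checkpoint; do
  src="${OLD_DIR}/${f}"
  # Entries start on line 3; each row is "<topic> <partition> <offset>"
  tail -n +3 "${src}" | grep    "^${TOPIC} ${PART} " > /tmp/moved || true
  tail -n +3 "${src}" | grep -v "^${TOPIC} ${PART} " > /tmp/kept  || true

  # Rewrite both files: version (0), entry count, then the entries
  { echo 0; wc -l < /tmp/moved; cat /tmp/moved; } > "${NEW_DIR}/${f}"
  { echo 0; wc -l < /tmp/kept;  cat /tmp/kept;  } > "${src}"
done

# cleaner-offset-checkpoint only lists compacted partitions; split it
# the same way if the moved partition appears in it.
```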

Comments

Are there any tools that can help with this in the new release? It's really not that convenient. And I think I have to stop the Kafka broker before the operation.