04-25-2019 06:38 AM
There was a need to move the Kudu installation from my current cluster of AWS machines to a new cluster of AWS machines.
So I did the following steps.
Can anybody help point out why this is happening?
04-29-2019 08:32 AM
Kudu doesn't support swapping a drive to a new host.
Kudu tablet servers store the consensus configuration of their tablets, and the master also stores the consensus configuration for the tablets. By moving all the servers, you changed all the hostnames, and now the cluster is in total disarray.

It's possible to rewrite the consensus configuration of the tablets on the tablet servers, but I'm not sure there's currently a way to rewrite the data in the master. So, by scripting `kudu local_replica cmeta rewrite_raft_config` you could fix the tablet servers. You will need to rewrite the config of each tablet so the hostnames are mapped from the old servers to the new servers. If you do that correctly and the tablet replicas are able to elect a leader, the leader will send updated information to the master, which should cause it to update its record.

I don't think many people have ever tried anything like this, so there may be other things that need to be fixed, or it simply might not be possible to recover the cluster.
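To make the scripting idea above concrete, here is a minimal sketch of a helper that generates the `rewrite_raft_config` commands for each tablet, mapping old hostnames to new ones. All hostnames, directories, and the input format are assumptions for illustration: it expects lines of `<tablet_id> <uuid:host:port> ...`, which you would assemble yourself from your old cluster's records, and it only prints the commands so you can review them before running anything.

```shell
#!/usr/bin/env bash
# Sketch only. Placeholders: adjust FS_WAL_DIR/FS_DATA_DIRS to match the
# tablet server's --fs_wal_dir/--fs_data_dirs flags, and fill in HOST_MAP
# with your real old-host -> new-host mapping.

FS_WAL_DIR=/data/kudu/wal
FS_DATA_DIRS=/data/kudu/data

declare -A HOST_MAP=(
  [old-ts1.example.com]=new-ts1.example.com
  [old-ts2.example.com]=new-ts2.example.com
  [old-ts3.example.com]=new-ts3.example.com
)

# Rewrite the hostname inside a single "uuid:host:port" peer triple.
map_peer() {
  local uuid=${1%%:*} hostport=${1#*:}
  local host=${hostport%:*} port=${hostport##*:}
  printf '%s:%s:%s' "$uuid" "${HOST_MAP[$host]:-$host}" "$port"
}

# Read "<tablet_id> <peer> <peer> ..." lines on stdin and print one
# rewrite_raft_config command per tablet. Run the output by hand, on the
# tablet server, with the tserver process stopped.
emit_rewrite_cmds() {
  local tablet_id peers_line peer new_peers
  while read -r tablet_id peers_line; do
    new_peers=""
    for peer in $peers_line; do
      new_peers+=" $(map_peer "$peer")"
    done
    echo "kudu local_replica cmeta rewrite_raft_config" \
         "--fs_wal_dir=$FS_WAL_DIR --fs_data_dirs=$FS_DATA_DIRS" \
         "$tablet_id$new_peers"
  done
}
```

Keeping it as a command generator rather than executing directly gives you a chance to sanity-check every tablet's new Raft config before touching on-disk metadata.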
What you should have done is one of the following: set up the new cluster and transfer the data via Spark or an Impala CTAS statement; or build the new cluster as an expansion of the existing one, decommission all the tablet servers of the old cluster, and then move the master nodes one-by-one to the new cluster.
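As an illustration of the Impala CTAS option, here is a hedged sketch of the SQL you might run on the new cluster: define an external table pointing at the old cluster's Kudu masters via `kudu.master_addresses`, then CTAS into a new Kudu table. All names, addresses, and the schema (`id` key, 16 hash partitions, the `impala::db.table` underlying-table naming) are placeholder assumptions for your own tables; the helper just prints the SQL so it can be reviewed before being fed to `impala-shell`.

```shell
#!/usr/bin/env bash
# Placeholders: old cluster's Kudu master address(es) and table names.
OLD_MASTERS=old-master1.example.com:7051

# Print the two statements; pass the result to impala-shell on the new
# cluster, e.g.: impala-shell -i new-impalad.example.com -q "$(build_copy_sql)"
build_copy_sql() {
  cat <<SQL
CREATE EXTERNAL TABLE old_cluster_copy
STORED AS KUDU
TBLPROPERTIES (
  'kudu.table_name' = 'impala::old_db.my_table',
  'kudu.master_addresses' = '$OLD_MASTERS'
);
CREATE TABLE new_db.my_table
PRIMARY KEY (id)
PARTITION BY HASH (id) PARTITIONS 16
STORED AS KUDU
AS SELECT * FROM old_cluster_copy;
SQL
}
```

The external-table trick is what lets a single Impala instance read from the old cluster's Kudu while writing into the new one, so the copy needs no intermediate files.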
04-29-2019 08:58 AM
I will try out the options suggested by you.
Can you suggest the best practices or some options to do the following?
04-29-2019 09:42 AM