I have a requirement where we have a table with about 100 million records, and we want to enable salting on this table.
In order to enable salting I would have to recreate the table, which already holds those 100 million records. Can anyone suggest the best way to migrate these 100 million records somewhere else, recreate the table with salting, and put the data back into the salted table?
I am also reading in the official documentation that "There are some cautions and difference in behavior you should be aware about when using a salted table." Can anyone explain what differences in behavior we might see?
An UPSERT SELECT would be the easiest way to copy the data, but you would have to significantly increase the Phoenix query timeout configuration (e.g. `phoenix.query.timeoutMs`) for it to complete over 100 million rows. It may be easier to export the table contents as CSV/TSV and then use the CsvBulkLoadTool to load the data into the new table.
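As a rough sketch of the UPSERT SELECT approach (table name, columns, and the bucket count of 8 are all placeholders, not your actual schema):

```sql
-- Recreate the table with the same schema, this time declaring
-- SALT_BUCKETS to enable salting on the new table.
CREATE TABLE MY_TABLE_SALTED (
    ID   BIGINT NOT NULL PRIMARY KEY,
    COL1 VARCHAR
) SALT_BUCKETS = 8;

-- Copy the rows from the old table. For 100M rows this statement
-- will run for a long time, which is why the Phoenix query timeout
-- must be raised well above its default before attempting it.
UPSERT INTO MY_TABLE_SALTED SELECT * FROM MY_TABLE;
```

Once the copy is verified, you can drop the old table and rename (or repoint your application at) the salted one.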
Remember that salting is essentially adding a leading "bit" of entropy to every row key. This means that every lookup via primary key now actually performs $numSaltBuckets lookups, whereas it would be a single lookup on a table without salt buckets. This difference in execution is entirely transparent to you as an end-user, but, depending on your queries/usage, you may see some queries taking longer than without salting.
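To illustrate the fan-out (again using a hypothetical salted table with 8 buckets): because the salt prefix is a hash of the key, logically contiguous keys end up scattered across buckets, so a scan over a primary-key range has to visit every bucket.

```sql
-- Row keys are stored as HASH(key) % SALT_BUCKETS + original key,
-- so consecutive IDs live in different buckets. This range scan
-- therefore fans out into one scan per salt bucket (8 here),
-- which Phoenix runs in parallel and merges for you.
SELECT * FROM MY_TABLE_SALTED WHERE ID BETWEEN 1 AND 1000;
```

The query text is identical to what you would write against an unsalted table; only the execution underneath changes.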