Phoenix data migration from HDP 2.4 to HDP 2.5


New Contributor

I have to migrate Phoenix data from one HDP cluster to another. We are retiring old hardware and moving to newer infrastructure. The old cluster is on HDP 2.4.3 (Phoenix 4.4), and I am planning to set up HDP 2.5 on the new cluster. I can halt reads and writes to the old cluster. A couple of the Phoenix tables have dynamic columns. What is the safest way to migrate the data from the old cluster to the new one?
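
(For context: dynamic columns in Phoenix are declared per statement rather than in the table DDL, so their definitions exist only in the queries that use them. A minimal sketch, with hypothetical table and column names:)

    # connect to the cluster and run the statements below in sqlline
    sqlline.py old-zk1:2181:/hbase-unsecure

    -- a dynamic column (LAST_GC_TIME) is declared inline in the statement,
    -- not in the table's DDL:
    UPSERT INTO EVENT_LOG (EVENT_ID, CREATED, LAST_GC_TIME TIME)
        VALUES (1, CURRENT_TIME(), CURRENT_TIME());

    -- and it must be re-declared in every query that reads it:
    SELECT EVENT_ID, LAST_GC_TIME FROM EVENT_LOG (LAST_GC_TIME TIME);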

3 REPLIES

Re: Phoenix data migration from HDP 2.4 to HDP 2.5

Super Collaborator

Usually you just need to migrate your HBase tables (using the CopyTable utility, for example). On the first run of sqlline against the new cluster, all required upgrade steps happen automatically. Manual steps with psql are only required if you hit the conditions described in the Phoenix 4.5.0 section of the release notes at

https://phoenix.apache.org/release_notes.html
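
As a sketch of that flow (the host names, port, and /hbase-unsecure znode are assumptions; substitute your own ZooKeeper quorum and table names):

    # on the old cluster: copy a table directly into the new cluster's HBase;
    # the destination table must already exist with the same column families
    hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
        --peer.adr=new-zk1,new-zk2,new-zk3:2181:/hbase-unsecure \
        MY_TABLE

    # then connect sqlline to the new cluster once; Phoenix upgrades its
    # system tables (SYSTEM.CATALOG etc.) automatically on first contact
    sqlline.py new-zk1:2181:/hbase-unsecure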


Re: Phoenix data migration from HDP 2.4 to HDP 2.5

Depending on how large your tables are, you may have better luck using ExportSnapshot instead of CopyTable.
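
A rough sketch of the snapshot route (the snapshot name, NameNode address, and mapper count are assumptions):

    # on the old cluster, in the HBase shell: take a snapshot
    snapshot 'ALL_ENTRIES', 'ALL_ENTRIES_snap'

    # from the command line: ship the snapshot files to the new cluster's
    # HBase root directory (/apps/hbase/data is the HDP default)
    hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot \
        -snapshot ALL_ENTRIES_snap \
        -copy-to hdfs://new-nn:8020/apps/hbase/data \
        -mappers 16

    # on the new cluster, in the HBase shell: materialize the table
    clone_snapshot 'ALL_ENTRIES_snap', 'ALL_ENTRIES'

Since ExportSnapshot copies HFiles directly rather than replaying every cell through the RPC write path, it avoids the per-mutation overhead that makes CopyTable slow on large tables.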


Re: Phoenix data migration from HDP 2.4 to HDP 2.5

New Contributor

I have successfully migrated all the tables with CopyTable except the tables that have dynamic columns. I am getting the following exception when I execute CopyTable:

Error: org.apache.hadoop.hbase.client.RetriesExhaustedWithDetailsException: Failed 5499 actions: org.apache.hadoop.hbase.regionserver.NoSuchColumnFamilyException: Column family L#0 does not exist in region ALL_ENTRIES,,1486727925943.43e0df1fed19043fea44f22388471c65. in table 'ALL_ENTRIES', {TABLE_ATTRIBUTES => {REGION_REPLICATION => '2', coprocessor$1 => '|org.apache.phoenix.coprocessor.ScanRegionObserver|805306366|', coprocessor$2 => '|org.apache.phoenix.coprocessor.UngroupedAggregateRegionObserver|805306366|', coprocessor$3 => '|org.apache.phoenix.coprocessor.GroupedAggregateRegionObserver|805306366|', coprocessor$4 => '|org.apache.phoenix.coprocessor.ServerCachingEndpointImpl|805306366|', coprocessor$5 => '|org.apache.phoenix.hbase.index.Indexer|805306366|org.apache.hadoop.hbase.index.codec.class=org.apache.phoenix.index.PhoenixIndexCodec,index.builder=org.apache.phoenix.index.PhoenixIndexBuilder'}, {NAME => '0', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'FAST_DIFF', TTL => 'FOREVER', COMPRESSION => 'SNAPPY', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.doBatchOp(RSRpcServices.java:724)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.doNonAtomicRegionMutation(RSRpcServices.java:679)
        at org.apache.hadoop.hbase.regionserver.RSRpcServices.multi(RSRpcServices.java:2056)
        at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:32303)
        at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:2127)
        at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:107)
        at org.apache.hadoop.hbase.ipc.RpcExecutor.consumerLoop(RpcExecutor.java:133)
        at org.apache.hadoop.hbase.ipc.RpcExecutor$1.run(RpcExecutor.java:108)
        at java.lang.Thread.run(Thread.java:745)
: 5499 times, 
        at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.makeException(AsyncProcess.java:228)
        at org.apache.hadoop.hbase.client.AsyncProcess$BatchErrors.access$1700(AsyncProcess.java:208)
        at org.apache.hadoop.hbase.client.AsyncProcess.waitForAllPreviousOpsAndReset(AsyncProcess.java:1700)
        at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:208)
        at org.apache.hadoop.hbase.client.BufferedMutatorImpl.doMutate(BufferedMutatorImpl.java:141)
        at org.apache.hadoop.hbase.client.BufferedMutatorImpl.mutate(BufferedMutatorImpl.java:98)
        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:138)
        at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.write(TableOutputFormat.java:94)
        at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:658)
        at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
        at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
        at org.apache.hadoop.hbase.mapreduce.Import$Importer.processKV(Import.java:209)
        at org.apache.hadoop.hbase.mapreduce.Import$Importer.writeResult(Import.java:164)
        at org.apache.hadoop.hbase.mapreduce.Import$Importer.map(Import.java:149)
        at org.apache.hadoop.hbase.mapreduce.Import$Importer.map(Import.java:132)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1709)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
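
From the exception, the L#0 column family seems to exist on the source table but not on the destination region. One way to check and work around this (the alter attributes below are placeholders; mirror whatever describe reports on the source cluster):

    # in the HBase shell on both clusters, compare the column families
    describe 'ALL_ENTRIES'

    # if L#0 is present only on the source, add it on the destination
    # before re-running CopyTable
    alter 'ALL_ENTRIES', {NAME => 'L#0', VERSIONS => 1, COMPRESSION => 'SNAPPY'}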