Member since
10-03-2020
235
Posts
15
Kudos Received
18
Solutions
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 482 | 11-11-2024 09:31 AM |
 | 1338 | 08-28-2023 02:13 AM |
 | 1874 | 12-15-2021 05:26 PM |
 | 1714 | 10-22-2021 10:09 AM |
 | 4851 | 10-20-2021 08:44 AM |
09-13-2021
12:27 PM
Thanks @willx for sharing a nice idea. I will set up a real cluster and follow your steps.
09-13-2021
08:49 AM
Actually, this should work fine regardless. We'll monitor what NameNodeHealth would monitor ourselves and just suppress that health test. Thanks Will!
09-13-2021
05:42 AM
@shean Have you resolved your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. If not, can you provide further details?
09-12-2021
10:59 PM
1 Kudo
Introduction
Phoenix is a popular solution for providing low-latency OLTP and operational analytics on top of HBase. Hortonworks Data Platform (HDP) and Cloudera Data Platform (CDP) are the most popular platforms for running Phoenix with HBase.
Nowadays, many customers choose to migrate to Cloudera Data Platform to better manage their Hadoop clusters and adopt the latest big data solutions.
This article discusses how to migrate Phoenix data and index tables to the newer CDP Private Cloud Base.
Environment
Source cluster: HDP 2.6.5, HDP 3.1.5
Target cluster: CDP PvC 7.1.5, CDP PvC 7.1.6, CDP PvC 7.1.7
Migration steps
The SYSTEM tables are created automatically the first time phoenix-sqlline starts, and they contain the metadata of the Phoenix tables. To make the Phoenix data/index tables visible in the target cluster, we need to migrate the SYSTEM tables from the source cluster as well.
Stop the Phoenix service on the CDP cluster. You can stop the service in Cloudera Manager > Services > Phoenix Service > Stop.
Drop the SYSTEM.% tables on the CDP cluster (from HBase). In the HBase shell, disable and drop all the SYSTEM tables:
hbase:006:0> disable_all "SYSTEM.*"
hbase:006:0> drop_all "SYSTEM.*"
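As a quick sanity check before copying anything over (a minimal sketch, not part of the original steps), list the remaining SYSTEM tables in the same HBase shell; the result should be empty:
hbase:007:0> list "SYSTEM.*"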
Copy the SYSTEM, data, and index tables to the CDP cluster. There are many methods to copy data between HBase clusters; I recommend using snapshots to keep the schema the same. On the source HBase:
Take snapshots of all SYSTEM tables and data tables:
hbase(main):020:0> snapshot "SYSTEM.CATALOG","CATALOG_snap"
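The command above shows SYSTEM.CATALOG as an example; in practice every SYSTEM table plus your data and index tables need a snapshot. A minimal sketch, where the snapshot names (FUNCTION_snap, SEQUENCE_snap, STATS_snap, TEST_snap) are only illustrative and the table names match the listing shown later in this article:
hbase(main):021:0> snapshot "SYSTEM.FUNCTION","FUNCTION_snap"
hbase(main):022:0> snapshot "SYSTEM.SEQUENCE","SEQUENCE_snap"
hbase(main):023:0> snapshot "SYSTEM.STATS","STATS_snap"
hbase(main):024:0> snapshot "TEST","TEST_snap"
You can confirm that they were created with list_snapshots.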
Run ExportSnapshot to copy each snapshot to the target cluster:
sudo -u hdfs hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot CATALOG_snap -copy-to hdfs://Target_Active_NameNode:8020/hbase -mappers 16 -bandwidth 200
Your HBase directory path may be different; check the HBase configuration in Cloudera Manager for the path.
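If you have several snapshots to move, a small loop saves repetition. A minimal bash sketch, assuming the illustrative snapshot names above and the same target NameNode and options as the command in this step:
for snap in CATALOG_snap FUNCTION_snap SEQUENCE_snap STATS_snap TEST_snap; do
  sudo -u hdfs hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot "$snap" -copy-to hdfs://Target_Active_NameNode:8020/hbase -mappers 16 -bandwidth 200
done
Each ExportSnapshot run is its own MapReduce job, so running them one after another keeps the -bandwidth cap meaningful.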
On the target cluster, the copied files may end up owned by the user who triggered the MapReduce job, so we need to change the owner back to the default hbase:hbase:
sudo -u hdfs hdfs dfs -chown -R hbase:hbase /hbase
In the HBase shell on the target cluster, use clone_snapshot to create the new tables:
clone_snapshot "CATALOG_snap","SYSTEM.CATALOG"
Repeat this for each snapshot you exported. When you complete the above steps, you should have all the SYSTEM tables, data tables, and index tables in your target HBase. For example, the following was copied from an HDP 2.6.5 cluster and created in CDP:
hbase:013:0> list
TABLE
SYSTEM.CATALOG
SYSTEM.FUNCTION
SYSTEM.SEQUENCE
SYSTEM.STATS
TEST
Start the Phoenix service, enter phoenix-sqlline, and check that you can query the tables.
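A minimal verification sketch, assuming a hypothetical ZooKeeper host zk1.example.com and the TEST table from the listing above; your quorum, port, and table names will differ:
phoenix-sqlline zk1.example.com:2181
0: jdbc:phoenix:zk1.example.com:2181> !tables
0: jdbc:phoenix:zk1.example.com:2181> SELECT * FROM TEST LIMIT 10;
!tables should list the migrated tables, and the SELECT confirms the data is readable through Phoenix.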
(Optional) If namespace mapping was already enabled on HDP, we should also set phoenix.schema.isNamespaceMappingEnabled to true on the CDP cluster in both the client and service hbase-site.xml, and then restart the Phoenix service.
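A minimal sketch of the properties involved, set through the HBase service and client advanced configuration snippets (safety valves) for hbase-site.xml in Cloudera Manager; note that phoenix.schema.mapSystemTablesToNamespace is mentioned here only as a related setting that should match the source cluster, it is not a step from the original procedure:
phoenix.schema.isNamespaceMappingEnabled=true
phoenix.schema.mapSystemTablesToNamespace=true
The values must be identical on client and server, otherwise phoenix-sqlline fails with an inconsistent namespace mapping error.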
Known Bug of Migration Process
Starting from Phoenix 5.1.0 / CDP 7.1.6, there is a bug during the SYSTEM tables auto-upgrade. The fix will be included in a future CDP release. Customers should raise a case with Cloudera Support and apply a hotfix for this bug on top of CDP 7.1.6 / 7.1.7.
Refer to PHOENIX-6534
Disclaimer
This article does not cover every version of HDP and CDP, nor has every situation been tested; it focuses on the popular or latest versions. If you followed the steps but failed or ran into a new issue, please feel free to ask in the Community or raise a case with Cloudera Support.
09-12-2021
10:58 PM
@npr20202, Have you resolved your issue? If so, can you please provide the solution here? It will help others who are seeking answers to similar queries. If you are still experiencing the issue, can you provide the information @willx has requested?
09-10-2021
10:48 PM
Hi @Ben621, Please check this community post; it should answer your question. https://community.cloudera.com/t5/Support-Questions/How-are-the-primary-keys-in-Phoenix-are-converted-as-row/td-p/147232 Regards, Will. If the answer helps, please accept it as the solution and click thumbs up.
09-10-2021
07:07 AM
Hi @ighack, If you mean the current RegionServer heap is 50 megabytes or 80 megabytes, that is usually not enough; a good range is 16 GB to 31 GB for most cases. If you really don't have enough resources on the RegionServer nodes, at least keep the RegionServer heap at the 4 GB default, and if you still see many long GC pauses you will have to increase it.
Refer to the link below to install Phoenix and validate the installation:
https://docs.cloudera.com/documentation/enterprise/latest/topics/phoenix_installation.html#concept_ofv_k4n_c3b
If you installed it following those steps, then on any of the CDH nodes find the JDBC jar:
find / -name "phoenix-*client.jar"
and follow this guide:
https://docs.cloudera.com/runtime/7.2.10/phoenix-access-data/topics/phoenix-orchestrating-sql.html
Your JDBC URL syntax should look like:
jdbc:phoenix:zookeeper_quorum:zookeeper_port:zookeeper_hbase_path
Regards, Will
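As a concrete illustration of that syntax (the hostnames and znode here are hypothetical; substitute your own ZooKeeper quorum, port, and HBase znode):
jdbc:phoenix:zk1.example.com,zk2.example.com,zk3.example.com:2181:/hbase
The znode is usually /hbase, but on some clusters it differs (for example /hbase-secure), so check zookeeper.znode.parent in your hbase-site.xml.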
08-19-2021
04:55 AM
@roshanbim, Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
08-18-2021
07:53 PM
Hi @rizkymalm, The answer is yes, but you need to follow the correct steps. Please refer to the document below for detailed instructions on backing up HDFS metadata:
https://docs.cloudera.com/runtime/7.2.10/data-protection/topics/hdfs-back-up-hdfs-metadata.html
If the answer helps, please accept it as the solution and click the thumbs up button. Regards, Will
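As a rough sketch of one common approach (illustrative only; the linked documentation is the authoritative procedure): put the NameNode in safe mode, save the namespace, and pull a copy of the latest fsimage to a local backup directory, for example:
sudo -u hdfs hdfs dfsadmin -safemode enter
sudo -u hdfs hdfs dfsadmin -saveNamespace
sudo -u hdfs hdfs dfsadmin -fetchImage /tmp/namenode-backup
sudo -u hdfs hdfs dfsadmin -safemode leave
The /tmp/namenode-backup path is just an example; choose a location outside the NameNode's own data directories.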
08-18-2021
09:57 AM
Thanks @willx for your support!