Member since: 07-19-2020
Posts: 102
Kudos Received: 14
Solutions: 8
My Accepted Solutions
Title | Views | Posted
---|---|---
| 195 | 10-22-2024 05:23 AM
| 315 | 10-11-2024 04:28 AM
| 679 | 09-24-2024 10:15 PM
| 469 | 09-17-2024 01:07 PM
| 604 | 09-09-2024 03:02 AM
10-14-2024
02:53 PM
1 Kudo
@manyquestions Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
10-04-2024
04:47 AM
@MaraWang
Have you been able to resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future.
10-01-2024
12:24 PM
@Amandi Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
09-29-2024
10:52 PM
1 Kudo
@Amandi, Did the response help resolve your query? If it did, kindly mark the relevant reply as the solution, as it will aid others in locating the answer more easily in the future.
09-18-2024
10:53 PM
1 Kudo
Hi, thanks for the link. However, my question is that the read replica is performing compaction. Any reason why that is, and where should we look to fix this issue? Since replication is done by the read-only cluster, it is deleting files and causing a FileNotFoundException.
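For context, this is roughly how I have been checking compaction activity on the read-only cluster; the log path below is the default layout and may differ in your deployment:

  # Check whether a compaction is currently running for the table (HBase shell).
  echo "compaction_state '<TABLE_NAME>'" | hbase shell

  # Look for recent compaction activity in the RegionServer logs on the read-only cluster.
  grep -i compact /var/log/hbase/*regionserver*.log* | tail -n 20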
09-18-2024
11:48 AM
Hi @therealsrikanth Manually copying snapshots is the only way forward, as you have security and compatibility issues between the clusters. Kindly follow the steps below.

Create snapshot operation

1. Take the snapshot in CDH. For example, after logging in to the HBase shell:

  $ hbase shell
  hbase> snapshot '<TABLE_NAME>', '<SNAPSHOT_NAME>'

2. Major compact the table:

  hbase> major_compact '<TABLE_NAME>'

3. Copy the files to the local environment from the locations below:

  hdfs dfs -get /hbase/.hbase-snapshot/ /tmp/dir
  hdfs dfs -get /hbase/archive/ /tmp/dir2

Restore operation

1. Transfer the files to the CDP environment and use -copyFromLocal to copy the contents into HDFS:

  cd /tmp
  hdfs dfs -copyFromLocal dir /hbase/.hbase-snapshot
  hdfs dfs -copyFromLocal dir2 /hbase/archive/data/default

  Note: "default" is the namespace in which newly created tables are placed if you don't specify a custom namespace.

2. Make sure the directories are created in HDFS. The paths should look like this after copying:

  /hbase/archive/data/<Namespace>/<TABLE_NAME>/<hfile1>
  /hbase/archive/data/<Namespace>/<TABLE_NAME>/<hfile2>
  ...

3. Check the permissions on the /hbase/archive directory; it should be owned by the hbase user.

4. Log in to the HBase shell and check the snapshots:

  $ hbase shell
  hbase:001:0> list_snapshots

5. When the snapshot is visible, you can use the clone_snapshot command to create a new table from the snapshot:

  hbase> clone_snapshot '<SNAPSHOT_NAME>', '<TABLE_NAME_NEW>'

Was your question answered? Please take some time to click on "Accept as Solution" -- If you find a reply useful, say thanks by clicking on the thumbs up button below this post.
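For reference, here is a rough sketch of the export side as a single script run on a CDH edge node; the table name, snapshot name, and /tmp staging directories are placeholders, so adjust them to your environment:

  # Placeholders -- replace with your actual table and snapshot names.
  TABLE='<TABLE_NAME>'
  SNAPSHOT='<SNAPSHOT_NAME>'

  # Take the snapshot and major-compact the table via the HBase shell.
  echo "snapshot '${TABLE}', '${SNAPSHOT}'" | hbase shell
  echo "major_compact '${TABLE}'" | hbase shell

  # Stage the snapshot metadata and archived HFiles locally for transfer to CDP.
  mkdir -p /tmp/dir /tmp/dir2
  hdfs dfs -get /hbase/.hbase-snapshot/ /tmp/dir
  hdfs dfs -get /hbase/archive/ /tmp/dir2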
09-11-2024
06:15 AM
@JSSSS Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
09-05-2024
03:04 PM
1 Kudo
Hi @HadoopCommunity Was your question answered? Please take some time to click on "Accept as Solution" -- If you find a reply useful, say thanks by clicking on the thumbs up button below this post.
08-23-2024
07:02 AM
1 Kudo
Hi @Marks_08 It could be an issue with the beeline shell not being able to pick up all the configuration files required for authentication. Could you please try exporting the configuration manually and then launching the beeline shell? For example:

  export HADOOP_CONF_DIR=/etc/hadoop/conf:/etc/hive/conf:/etc/hbase/conf
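As a rough illustration (the Kerberos principal and HiveServer2 host below are placeholders, not values from your cluster), the full session would look something like this:

  # Make the Hadoop/Hive/HBase client configs visible to beeline for authentication.
  export HADOOP_CONF_DIR=/etc/hadoop/conf:/etc/hive/conf:/etc/hbase/conf

  # Obtain a Kerberos ticket, then connect; replace host, port, and principal with your own.
  kinit your_user@EXAMPLE.COM
  beeline -u "jdbc:hive2://hiveserver2-host:10000/default;principal=hive/_HOST@EXAMPLE.COM"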