Member since: 07-19-2020
Posts: 102
Kudos Received: 14
Solutions: 8
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 199 | 10-22-2024 05:23 AM
 | 319 | 10-11-2024 04:28 AM
 | 685 | 09-24-2024 10:15 PM
 | 475 | 09-17-2024 01:07 PM
 | 611 | 09-09-2024 03:02 AM
10-22-2024
05:23 AM
Hi @snm1523

Cloudera Manager restricts raising dfs.datanode.balance.bandwidthPerSec beyond 1 GB, but a higher value can be passed from the CLI. Example:

nohup hdfs balancer -Ddfs.datanode.balance.bandwidthPerSec=4294967296 > /tmp/balance_log.out &
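As a side note, a minimal sketch of a runtime alternative: the same limit can be raised on all live DataNodes with dfsadmin (the value is in bytes per second and does not persist across DataNode restarts):

$ hdfs dfsadmin -setBalancerBandwidth 4294967296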
10-11-2024
04:28 AM
1 Kudo
Hi @manyquestions

Use the statement below to insert, and let us know if you still face the issue:

UPSERT INTO A_TEST_1728428788967 (ID, NAME) VALUES (123, 'A test for a_test_1728428788967');

Integer values should not be quoted.

Was your question answered? Please take some time to click on "Accept as Solution" -- If you find a reply useful, say thanks by clicking on the thumbs up button below this post.
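For illustration, a minimal sqlline session showing the difference (the client name and ZooKeeper quorum are assumptions for a typical Phoenix installation):

$ phoenix-sqlline <zookeeper-quorum>
-- Typically fails with a type-mismatch error: the INTEGER column gets a quoted string
0: jdbc:phoenix:> UPSERT INTO A_TEST_1728428788967 (ID, NAME) VALUES ('123', 'A test for a_test_1728428788967');
-- Works: unquoted integer literal
0: jdbc:phoenix:> UPSERT INTO A_TEST_1728428788967 (ID, NAME) VALUES (123, 'A test for a_test_1728428788967');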
09-24-2024
10:15 PM
2 Kudos
Hi @Amandi

You have to add export HBASE_MANAGES_ZK=false if you want to manage your own ZooKeeper and run HBase in distributed mode. If you do not run HBase in distributed mode, HBase will start its own ZooKeeper.

The error "ERROR: KeeperErrorCode = NoNode for /hbase/master" indicates that ZooKeeper does not have the /hbase/master znode.
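A minimal sketch of both steps (the hbase-env.sh location and the quorum address are assumptions for a typical installation):

# conf/hbase-env.sh -- tell HBase not to manage its own ZooKeeper
export HBASE_MANAGES_ZK=false

# Once the HMaster is up, verify the znode from the ZooKeeper CLI
$ zkCli.sh -server localhost:2181
[zk: localhost:2181(CONNECTED) 0] ls /hbase
[zk: localhost:2181(CONNECTED) 1] get /hbase/master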
09-18-2024
12:27 PM
Hi @Amandi

We need to check the job logs to find out why it is failing:
http://dc1-apache-hbase.mobitel.lk:8088/proxy/application_1725184331906_0018/

We can also check the ResourceManager logs for any issue with permissions or with launching containers.
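If the proxy UI is not reachable, a sketch of the equivalent from a gateway node (this assumes YARN log aggregation is enabled; the application ID is taken from the URL above):

$ yarn logs -applicationId application_1725184331906_0018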
09-18-2024
12:14 PM
Hi @csguna

See the link below for the read-replica properties that can be configured:
https://docs.cloudera.com/cdp-private-cloud-base/7.1.9/hbase-high-availability/topics/hbase-read-replica-properties.html

Compaction is carried out on the primary region, by design. Flushes, compactions, and bulk loads can change the list of store files, which means the secondaries need to discover those file changes; until all secondaries are in sync, the old store files should not be deleted.
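For quick reference, a hedged shell sketch that enables read replicas on a new table and then reads with timeline consistency (table, column family, and row key are hypothetical):

$ hbase shell
hbase> create 't_replica_demo', 'cf', {REGION_REPLICATION => 2}
hbase> get 't_replica_demo', 'row1', {CONSISTENCY => 'TIMELINE'}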
09-18-2024
11:48 AM
Hi @therealsrikanth

Manually copying snapshots is the only way forward, since you have security and compatibility issues between the clusters. Kindly follow the steps below.

Create snapshot (on the CDH cluster):
1. Take the snapshot from the HBase shell:
   $ hbase shell
   hbase> snapshot '<TABLE_NAME>', '<SNAPSHOT_NAME>'
2. Major-compact the table:
   hbase> major_compact '<TABLE_NAME>'
3. Copy the files to the local environment from the locations below:
   hdfs dfs -get /hbase/.hbase-snapshot/ /tmp/dir
   hdfs dfs -get /hbase/archive/ /tmp/dir2

Restore (on the CDP cluster):
1. Transfer the files to the CDP environment.
2. Use -copyFromLocal to copy the contents to HDFS:
   cd /tmp
   hdfs dfs -copyFromLocal dir /hbase/.hbase-snapshot
   hdfs dfs -copyFromLocal dir2 /hbase/archive/data/default
   Note: "default" is the namespace on which newly created tables are placed if you don't specify a custom namespace.
3. Make sure the directories are created in HDFS. The paths should look like this after copying:
   /hbase/archive/data/<Namespace>/<TABLE_NAME>/<hfile1>
   /hbase/archive/data/<Namespace>/<TABLE_NAME>/<hfile2>
   ...
4. Check permissions on the /hbase/archive directory; it should be owned by the hbase user.
5. Log in to the HBase shell and check the snapshots:
   $ hbase shell
   hbase:001:0> list_snapshots
6. When the snapshot is visible, use the clone_snapshot command to create a new table from it:
   hbase> clone_snapshot '<SNAPSHOT_NAME>', '<TABLE_NAME_NEW>'

Was your question answered? Please take some time to click on "Accept as Solution" -- If you find a reply useful, say thanks by clicking on the thumbs up button below this post.
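As a follow-up to the restore steps above, a minimal verification sketch (the chown is only needed if the copied files landed with the wrong owner; the user and group names are assumptions for a typical installation):

$ hdfs dfs -ls -R /hbase/archive/data/default
$ sudo -u hdfs hdfs dfs -chown -R hbase:hbase /hbase/archive /hbase/.hbase-snapshot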
09-17-2024
01:07 PM
1 Kudo
Hi @MaraWang

The error indicates you are using Ranger to manage permissions on HBase. Try opening the Ranger Web UI and adding permissions for the user under the HBase policy. Also, verify that the user 'txw387' exists in your cluster.

Ref: https://docs.cloudera.com/runtime/7.2.18/security-ranger-authorization/topics/security-ranger-resource-policy-configure-hbase.html
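A quick sketch for the existence check (this assumes the user is resolvable on the cluster nodes and that Hadoop group mapping is what the policies evaluate):

$ id txw387
$ hdfs groups txw387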
09-11-2024
04:25 AM
1 Kudo
Hi @Amandi

The failure seems to happen while launching containers, in the pre-launch stage. Since the ImportTsv job uses MapReduce and YARN as the underlying services for running jobs, could you please confirm that a simple MR pi job runs without any issues from a YARN gateway node:

# hadoop jar /opt/cloudera/parcels/CDH/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar pi 10 100
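On a healthy gateway, the tail of the output should look roughly like this sketch (job ID, timings, and counters will differ):

Job job_<id> completed successfully
...
Estimated value of Pi is 3.14800000000000000000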
09-09-2024
03:02 AM
2 Kudos
Hi @Amandi

Check the blog below to learn how to use bulk loading in HBase, with examples:
https://blog.cloudera.com/how-to-use-hbase-bulk-loading-and-why/

Was your question answered? Please take some time to click on "Accept as Solution" -- If you find a reply useful, say thanks by clicking on the thumbs up button below this post.
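For quick reference, a minimal two-step bulk-load sketch with ImportTsv (table name, column mapping, and paths are hypothetical; the blog above covers the details):

# 1) Generate HFiles from a TSV file instead of writing through the RegionServers
$ hbase org.apache.hadoop.hbase.mapreduce.ImportTsv \
    -Dimporttsv.columns=HBASE_ROW_KEY,cf:c1 \
    -Dimporttsv.bulk.output=/tmp/hfiles \
    my_table /user/me/input.tsv

# 2) Move the generated HFiles into the table's regions
$ hbase completebulkload /tmp/hfiles my_table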
09-05-2024
03:04 PM
1 Kudo
Hi @HadoopCommunity

Was your question answered? Please take some time to click on "Accept as Solution" -- If you find a reply useful, say thanks by clicking on the thumbs up button below this post.