Member since: 09-03-2020
Posts: 258
Kudos Received: 7
Solutions: 7
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 492 | 04-09-2024 05:59 AM
 | 738 | 04-06-2024 12:35 AM
 | 753 | 03-21-2024 07:58 AM
 | 1231 | 03-04-2024 06:04 AM
 | 2268 | 02-27-2024 12:29 AM
05-01-2024
03:59 AM
@kpalanisamy ➤ There is also an alternate, native hbase shell approach through which we can determine the RegionName and RegionServer from a rowkey:

$ locate_region 'namespace:tablename','rowkey'
HOST                    REGION
Regionserver-name:16    {ENCODED => regionName, NAME => 'namespace:tablename,rowkey.regionName.', STARTKEY => 'f0046', ENDKEY => 'f0245cf'}
1 row(s)
Took 0.6760 seconds
=> #<Java::OrgApacheHadoopHbase::HRegionLocation:0x4070c4ff>
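If the same lookup needs to be scripted rather than typed interactively, a minimal sketch is to pipe the command into the shell; the table name and rowkey below are placeholders, not values from this thread.

$ echo "locate_region 'namespace:tablename','rowkey'" | hbase shell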
04-09-2024
05:59 AM
✥ In CDH6 HBase the property was removed per https://issues.apache.org/jira/browse/HBASE-15989, because all alter operations are now allowed without disabling the table. cc: @webtube
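As an illustration of the behaviour HBASE-15989 enables, a column family can be altered while the table stays enabled; the table and column family names below are placeholders, not taken from this thread.

$ alter 'namespace:tablename', {NAME => 'cf1', VERSIONS => 5}
=> No prior disable 'namespace:tablename' is required; the change is applied online.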
04-06-2024
12:35 AM
1 Kudo
✥ For the error "Unrecognized option: -j", kindly type the -j keyword manually instead of copy-pasting it.
=> Note: you generally see such exceptions when a special character has been pasted along with the command.
✥ Kindly make sure you first bypass the stuck procedure and its locks:
$ hbase hbck -j /tmp/target/hbase-hbck2-1.3.0-SNAPSHOT.jar bypass -o -r <pid>
=> To review the stuck procedure, navigate in the HMaster UI to Procedures & Locks and first bypass the proc id visible in the locks section.
✥ Then you can consider closing the region state before disabling the table (a scripted example for many regions follows below):
$ hbase hbck -j /tmp/target/hbase-hbck2-1.3.0-SNAPSHOT.jar setRegionState $i CLOSED
$ hbase hbck -j /tmp/target/hbase-hbck2-1.3.0-SNAPSHOT.jar setTableState <tablename> DISABLED
=> Note: make sure the ~3k regions are all part of the same table " " which you wish to disable/remove.
✥ Once the table is disabled, you can log in to the hbase shell and perform the drop table operation:
$ drop 'tablename'
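Since setRegionState is run once per region (the $i above implies a loop), here is a minimal bash sketch for closing a large number of regions; regions.txt is an assumed file holding one encoded region name per line and is not part of the original post.

# Close every region listed in regions.txt via HBCK2 before disabling the table.
HBCK2_JAR=/tmp/target/hbase-hbck2-1.3.0-SNAPSHOT.jar
while read -r region; do
  hbase hbck -j "$HBCK2_JAR" setRegionState "$region" CLOSED
done < regions.txt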
03-21-2024
07:58 AM
edits file: An edits file contains a log of all transactions after the most recent fsimage file, i.e. the file system changes (create file, delete file, permission change, etc.).

The checkpointing process periodically merges the content of the most recent fsimage with the edits (containing the new transactions) to create a new fsimage. Although the edits files are redundant after they are merged into the fsimage, they are kept for safety/potential recovery reasons; this is part of the regular design, and the amount retained is finite by default. The two configuration parameters that control this are:

a. "dfs.namenode.num.extra.edits.retained" (default 1000000): determines how many transactions to keep, regardless of how many edits files they are spread across.
b. "dfs.namenode.max.extra.edits.segments.retained" (default 10000): serves as a secondary cap for the former. This means around 10000 extra files are kept at most, as long as those 10000 files hold about 1 million edits in total (per the parameter above).

On a healthy, periodically checkpointing cluster, each edits file should not be larger than ~2-5 MB, so the overall space footprint of keeping these edits around is never high enough to cause concern. We have also never seen a situation where these default values needed to be lowered.

Unnecessary edits (those beyond the retention configuration) are only purged upon each successful checkpoint at the active NameNode, which purges the local NameNode edits files and asks the JournalNodes to purge theirs. So if checkpointing is not occurring, edits files will not be purged.

These two properties are not commonly changed and are therefore not exposed as separate properties in Cloudera Manager; they need to be added in the NameNode Safety Valve ("NameNode Configuration Safety Valve for hdfs-site.xml") and require a NameNode restart:

<property>
  <name>dfs.namenode.num.extra.edits.retained</name>
  <value>1000000</value>
</property>
<property>
  <name>dfs.namenode.max.extra.edits.segments.retained</name>
  <value>10000</value>
</property>
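To confirm that checkpointing and purging are actually happening, a quick spot-check of the retained segments on the active NameNode can help; /dfs/nn below is an assumed dfs.namenode.name.dir value, substitute your own.

# Count and size the retained edits segments, then check the age of the newest checkpoints.
ls /dfs/nn/current/edits_* | wc -l
du -sh /dfs/nn/current
ls -lt /dfs/nn/current/fsimage_* | head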
03-04-2024
06:04 AM
@josr89 It looks like you have a permission issue on the path hdfs://env1/apps/hbase/data/staging. Kindly assign the proper permissions on that path for userA from the Ranger UI.
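Once the Ranger policy is updated, a minimal sketch to confirm userA can actually reach the staging path; the kinit step is an assumption about how userA authenticates on this cluster.

$ kinit userA
$ hdfs dfs -ls hdfs://env1/apps/hbase/data/staging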
02-27-2024
12:29 AM
2 Kudos
Yes @mike_bronson7, the above steps also work.
02-05-2024
05:21 AM
1 Kudo
=> If the above steps still give you issues, you can simply execute step 5, or the below command, from the Standby NN:

// Bootstrap the Standby NameNode. This command copies the contents of the Active NameNode's metadata directories (including the namespace information and the most recent checkpoint) to the Standby NameNode.
# hdfs namenode -bootstrapStandby

Note: Steps 1 to 3 are the process of creating a new fsimage, but if your Active NN is already up and running, I would log in directly to the Standby and then perform the bootstrapStandby operation.
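For context, a minimal sketch of the full sequence on the Standby host; /dfs/nn is an assumed dfs.namenode.name.dir value, the role stop/start is normally done from the management UI, and none of this is prescribed by the original post.

# 1. Stop the Standby NameNode role (via Cloudera Manager / Ambari).
# 2. Back up the existing metadata directory, just in case:
mv /dfs/nn /dfs/nn.bak.$(date +%F)
mkdir -p /dfs/nn && chown hdfs:hdfs /dfs/nn
# 3. Re-seed the Standby from the Active NameNode:
sudo -u hdfs hdfs namenode -bootstrapStandby
# 4. Start the Standby NameNode role again (via the management UI).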
02-04-2024
10:14 AM
1 Kudo
Kindly check whether the new DataNode and the existing DataNodes are part of the same rack. Share the output of the below commands (see also the sketch that follows):
1. hdfs dfsadmin -report
2. hdfs dfsadmin -printTopology
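Alongside those two commands, it can help to check whether a rack topology script is configured at all; the config keys below are standard HDFS ones, not taken from this thread.

$ hdfs getconf -confKey net.topology.script.file.name
$ hdfs getconf -confKey net.topology.node.switch.mapping.impl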
02-04-2024
10:00 AM
1 Kudo
The approach you mentioned involves further downtime. If your Active NN is up and running, you can simply copy the latest fsimage from the Active NN's data dir path to the Standby NN's data dir path and then try to start the Standby NN once again.
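A minimal sketch of that copy, assuming /dfs/nn is the dfs.namenode.name.dir on both hosts and standby-nn is the Standby host name (both are placeholders).

# Run on the Active NameNode host: pick the newest fsimage plus its .md5 and copy both.
LATEST=$(ls -t /dfs/nn/current/fsimage_* | grep -v '\.md5$' | head -1)
scp "$LATEST" "$LATEST.md5" standby-nn:/dfs/nn/current/
# On the Standby host, restore ownership before starting the NameNode role:
ssh standby-nn "chown hdfs:hdfs /dfs/nn/current/fsimage_*"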
10-13-2021
10:02 AM
1 Kudo
Hi Rahul @rahuledavalath, you can refer to the below link for performing Phoenix table migration from HDP to CDP:
https://community.cloudera.com/t5/Community-Articles/Phoenix-tables-migration-from-HDP-to-CDP/ta-p/323933
If the answer helps, kindly accept it as a solution and click the thumbs up button.
Regards, Naveen S