Member since
12-16-2015
267
Posts
8
Kudos Received
1
Solution
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 3028 | 09-01-2020 04:15 AM |
11-21-2023
12:33 AM
There is an easy way to check the Meta Region Server info from the HMaster web UI > System Tables > hbase:meta. However, you may want to check the same from the command line, or the HMaster UI might not be accessible due to security restrictions. This article explains how to find the Meta Region Server info from the command line.
Option 01:
The status 'detailed' shell command can be useful since it lists every hosted region per RegionServer separately, along with other useful info.
#hbase shell
hbase(main):033:0> status 'detailed'
version 2.1.0-cdh6.3.x-SNAPSHOT
0 regionsInTransition
active master: cdh-63x-ie-3.cdh-63x-ie.root.hwx.site:22001 1699505011260
0 backup masters
master coprocessors: [AccessController, MasterAuditCoProcessor]
3 live servers
cdh-63x-ie-1.cdh-63x-ie.root.hwx.site:22101 1699505010867
requestsPerSecond=0.0, numberOfOnlineRegions=1, usedHeapMB=894, maxHeapMB=31552, numberOfStores=3, numberOfStorefiles=3, storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeKB=0, readRequestsCount=2344, filteredReadRequestsCount=0, writeRequestsCount=18, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=36, currentCompactedKVs=36, compactionProgressPct=1.0, coprocessors=[AccessController, MultiRowMutationEndpoint, RegionAuditCoProcessor, SecureBulkLoadEndpoint, TokenProvider]
"hbase:meta,,1"
numberOfStores=3, numberOfStorefiles=3, storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=1699505025619, storefileSizeMB=0, memstoreSizeMB=0, readRequestsCount=2344, writeRequestsCount=18, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=36, currentCompactedKVs=36, compactionProgressPct=1.0, completeSequenceId=53, dataLocality=1.0
cdh-63x-ie-2.cdh-63x-ie.root.hwx.site:22101 1699505010821
requestsPerSecond=0.0, numberOfOnlineRegions=2, usedHeapMB=119, maxHeapMB=31552, numberOfStores=2, numberOfStorefiles=2, storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeKB=0, readRequestsCount=0, filteredReadRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[AccessController, RegionAuditCoProcessor, SecureBulkLoadEndpoint, TokenProvider]
"hbase:namespace,,1699502887290.85900ae7561b85446e7032426c667be7."
numberOfStores=1, numberOfStorefiles=1, storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, storefileSizeMB=0, memstoreSizeMB=0, readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, dataLocality=1.0
"t1,,1699542084016.ed188627605fbd163114cafa12ebe74c."
numberOfStores=1, numberOfStorefiles=1, storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, storefileSizeMB=0, memstoreSizeMB=0, readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, dataLocality=0.0
cdh-63x-ie-4.cdh-63x-ie.root.hwx.site:22101 1699505007650
requestsPerSecond=0.0, numberOfOnlineRegions=2, usedHeapMB=2053, maxHeapMB=31386, numberOfStores=2, numberOfStorefiles=2, storefileUncompressedSizeMB=0, storefileSizeMB=0, memstoreSizeMB=0, storefileIndexSizeKB=0, readRequestsCount=2, filteredReadRequestsCount=0, writeRequestsCount=2, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, coprocessors=[AccessController, RegionAuditCoProcessor, SecureBulkLoadEndpoint, TokenProvider]
"hbase:acl,,1699502889909.58d1b4a8e4007207266fb294ecc5c7f2."
numberOfStores=1, numberOfStorefiles=1, storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, storefileSizeMB=0, memstoreSizeMB=0, readRequestsCount=2, writeRequestsCount=2, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, dataLocality=0.0
"t2,,1699542101345.84cb672416b9bf2eac1a8ea8ce5879a3."
numberOfStores=1, numberOfStorefiles=1, storefileUncompressedSizeMB=0, lastMajorCompactionTimestamp=0, storefileSizeMB=0, memstoreSizeMB=0, readRequestsCount=0, writeRequestsCount=0, rootIndexSizeKB=0, totalStaticIndexSizeKB=0, totalStaticBloomSizeKB=0, totalCompactingKVs=0, currentCompactedKVs=0, compactionProgressPct=NaN, completeSequenceId=-1, dataLocality=0.0
0 dead servers
Took 0.0111 seconds
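The status 'detailed' output can run to hundreds of lines on a busy cluster. As a hedged sketch (the awk filter and the trimmed sample text below are my own, not part of the HBase tooling), you can pipe a captured copy of the output through awk to print only the server that hosts hbase:meta:

```shell
# Sketch: given captured `status 'detailed'` output, print the server that
# hosts hbase:meta. The sample below is a trimmed, hypothetical capture.
sample='cdh-63x-ie-1.example.com:22101 1699505010867
requestsPerSecond=0.0, numberOfOnlineRegions=1
"hbase:meta,,1"
cdh-63x-ie-2.example.com:22101 1699505010821
"t1,,1699542084016.ed188627605fbd163114cafa12ebe74c."'
meta_host=$(printf '%s\n' "$sample" | awk '
  /^[A-Za-z0-9.-]+:[0-9]+ /{server=$1}   # remember the last server line seen
  /"hbase:meta/{print server}')
echo "$meta_host"
```

In practice you would replace the hard-coded sample with the real output, e.g. `echo "status '\''detailed'\''" | hbase shell`.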
Option 02:
#hbase zkcli
[zk:2181(CONNECTED) 1] get /hbase/meta-region-server
?master:22001O?????YPBUF2
%cdh-63x-ie-1.cdh-63x-ie.root.hwx.site???????1
Note: There are some non-printable characters along with the RS hostname; that is because the znode stores a protobuf message.
OR:
#zkCli.sh -server myzoo get /hbase/meta-region-server
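Since the znode payload is a protobuf message, a quick, unofficial way to pull just the hostname out of the raw bytes is to filter for printable characters. This is only a sketch: the payload string below is hard-coded from the sample output above, and the domain suffix pattern (`.site`) is an assumption you would adjust for your cluster's domain.

```shell
# Sketch: recover the RegionServer hostname from the raw znode bytes.
# The payload is hard-coded from the sample output above; in practice you
# would pipe `hbase zkcli get /hbase/meta-region-server` into the filter.
payload='?master:22001O?????YPBUF2 %cdh-63x-ie-1.cdh-63x-ie.root.hwx.site???????1'
meta_rs=$(printf '%s' "$payload" \
  | tr -c '[:print:]' ' ' \
  | grep -oE '[A-Za-z0-9.-]+\.site')
echo "$meta_rs"
```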
Best Way:
locate_region: Locate the region that holds a given table name and row key.
SYNTAX: hbase> locate_region 'tableName', 'key0'
Since hbase:meta is a single region with an empty start key, passing an empty row key works as well.
hbase(main):003:0* locate_region 'hbase:meta', ''
HOST REGION
cdh-63x-ie-1.cdh-63x-ie.root.hwx.site:22101 {ENCODED => 1588230740, NAME => 'hbase:meta,,1', STARTKEY => '', ENDKEY => ''}
1 row(s)
Took 4.2780 seconds
07-06-2021
10:26 AM
@diplompils A return value of false from the recoverLease command does not necessarily mean the file is lost. Usually a file cannot be deleted while its lease is held unless it is explicitly removed with the rm command. You can try the following:
hdfs debug recoverLease -path <file> -retries 10
Or you may check https://issues.apache.org/jira/browse/HDFS-8576
05-12-2021
03:24 AM
1 Kudo
@PrernaU Ambari 2.7.5 is only fully compatible with HDP 3.1.5, while other HDP 3.1.x or 3.0.x versions are partially compatible for upgrade only. Please see.
05-12-2021
03:22 AM
1 Kudo
@fpaezalban Ambari 2.7.5 is only fully compatible with HDP 3.1.5, while other HDP 3.1.x or 3.0.x versions are partially compatible for upgrade only. Please see.
05-12-2021
03:20 AM
1 Kudo
@Jans What is the exact HDP version? Ambari 2.7.5 is only fully compatible with HDP 3.1.5, while other HDP 3.1.x versions are compatible only for upgrade.
05-10-2021
03:22 AM
@PrernaU I hope you haven't changed any config in HDFS. Please compare pre- and post-upgrade configs using the Ambari UI. Considering the exception below, I have seen this issue once due to a memory problem on a single DataNode:
ERROR: Cannot set priority of datanode process 45359
That is, the available memory was 12 GB while the DN heap was set to 16 GB, so the DN JVM was failing to start.
/proc/meminfo:
MemTotal: 131407744 kB
MemFree: 2180792 kB
MemAvailable: 12004080 kB
You can check which processes are using RAM with the command below, reduce the RAM utilization, and then start the DN process:
#ps aux --sort -rss
There could be something else, but probably some host-level resource crunch is preventing the JVM from starting. Please check the DN .out and .log files for more details.
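The heap-vs-memory comparison above can be sanity-checked with a short script. This is only a sketch: the heap size and the MemAvailable value are hard-coded from the numbers in this post; on a real host you would read MemAvailable from /proc/meminfo instead.

```shell
# Sketch: flag a DataNode heap that is larger than the host's available memory.
# Values are hard-coded from the post; on a real host use:
#   mem_avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
dn_heap_mb=16384        # configured DN heap (16 GB)
mem_avail_kb=12004080   # MemAvailable from the sample /proc/meminfo above
mem_avail_mb=$((mem_avail_kb / 1024))
if [ "$mem_avail_mb" -lt "$dn_heap_mb" ]; then
  echo "DN heap (${dn_heap_mb} MB) exceeds available memory (${mem_avail_mb} MB)"
fi
```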
09-01-2020
08:40 AM
There are several options available to achieve this use case. The easiest and best approach is the HBase snapshot method to transfer the data.
Note: All actions need to be performed as the HBase user only, to ensure correct permissions.
On source cluster: #hbase shell> snapshot 'Test_Table', 'Test_Table_SS'
#hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -snapshot 'Test_Table_SS' -files -stats
#hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot 'Test_Table_SS' -copy-to hdfs://<Destination_NN_hostname>:8020/hbase -mappers 16 -bandwidth 200
On destination cluster: #hbase org.apache.hadoop.hbase.snapshot.SnapshotInfo -snapshot 'Test_Table_SS' -files -stats
#hbase shell> clone_snapshot 'Test_Table_SS', 'Test_Table'
#hbase shell> major_compact 'Test_Table'
Once done, you can choose to delete the snapshots on both the source and destination clusters: #hbase shell> delete_snapshot 'Test_Table_SS'
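For reference, the steps above can be strung together into one small script. This is only a sketch that prints the commands for review rather than executing them; Test_Table and the destination NameNode hostname (dest-nn.example.com) are placeholders you would substitute for your own cluster.

```shell
# Sketch: print the snapshot-transfer commands from the steps above for review.
# TABLE and DEST_NN are placeholders; nothing is executed against a cluster here.
TABLE="Test_Table"
SNAPSHOT="${TABLE}_SS"
DEST_NN="dest-nn.example.com"   # hypothetical destination NameNode

echo "hbase shell> snapshot '${TABLE}', '${SNAPSHOT}'"
echo "hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot '${SNAPSHOT}' -copy-to hdfs://${DEST_NN}:8020/hbase -mappers 16 -bandwidth 200"
echo "hbase shell> clone_snapshot '${SNAPSHOT}', '${TABLE}'"
echo "hbase shell> major_compact '${TABLE}'"
```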
But if you plan to use CopyTable, it will not work without additional configuration.
Communication between an older client and a newer server is not guaranteed; there is currently a workaround, which is to add the following property to your client configuration:
On the client being used to launch the CopyTable, you can do either:
Command Line:
-Dhbase.meta.replicas.use=true
OR
hbase-site.xml:
<property>
  <name>hbase.meta.replicas.use</name>
  <value>true</value>
</property>
09-01-2020
04:15 AM
2 Kudos
There are several options available for this. If you plan to use CopyTable, it will not work without additional configuration: communication between an older client and a newer server is not guaranteed. There is currently a workaround, which is to add the following property to your client configuration. On the client being used to launch the CopyTable, you can do either:
Command Line: -Dhbase.meta.replicas.use=true
OR
hbase-site.xml:
<property>
  <name>hbase.meta.replicas.use</name>
  <value>true</value>
</property>
Your other option would be to use snapshots to transfer the data.
Note: If this helps, please don't forget to click on "Accept as Solution".
03-27-2020
10:12 AM
@dineshc Please specify where the mentioned workaround property needs to be added: ams-site.xml or ams-hbase-site.xml?
02-06-2020
09:12 AM
@josh_nicholson NOTE: For a Kerberized cluster, the value of "zookeeper.znode.parent" may be "/ams-hbase-secure", so you can connect to it as follows:
/usr/hdp/2.5.0.0-1245/phoenix/bin/sqlline.py c6403.ambari.apache.org:61181:/ams-hbase-secure
... View more