
CDH 6.3.2 - HBase 2 table problem

Explorer

I have inherited a problem.

 

48 regions in hbase:meta in transition.

 

The table has had data removed, most likely manually in an attempt to fix RIT issues. The RITs themselves are probably the result of a network outage mid-operation.

 

The table is currently ENABLED and cannot be DISABLED (this was already attempted by the previous techie, which resulted in LOCKS/procedures for DISABLE and DELETE as well as more RITs).

 

The table is no longer required, so it can be deleted. HDFS reported it as being only 6 KB, so I removed the table directories and zapped the znodes via the ZK shell. This fixed the locks/procedures messages, but Cloudera Manager still reports 48 regions in transition and, as a result, balancing is not working.
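Roughly, that cleanup was along these lines (the HDFS path follows the default /hbase/data/&lt;namespace&gt;/&lt;table&gt; layout; the znode shown is only a placeholder, not the exact path removed):

hdfs dfs -rm -r /hbase/data/alfa/rfilenameext   # table directory under the default HBase rootdir
hbase zkcli                                     # ZooKeeper shell bundled with HBase
rmr /hbase/<leftover-znode>                     # placeholder; run inside the ZK shell against whatever stale znodes remain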

 

What I need is a way to remove the rows from 'hbase:meta' as this is the only place where this table is still referenced.

 

Sample output:

 

alfa:rfilenameext column=table:state, timestamp=1604493139455, value=\x08\x00
alfa:rfilenameext,,1557760164826.35925292c25898671e5a894ce387e167. column=info:regioninfo, timestamp=1604388258225, value={ENCODED => 35925292c25898671e5a894ce387e167, NAME => 'alfa:rfilenameext,,1557760164826.35925292c25898671e5a894ce387e167.', STARTKEY => '', ENDKEY => '0'}
alfa:rfilenameext,,1557760164826.35925292c25898671e5a894ce387e167. column=info:seqnumDuringOpen, timestamp=1601269814633, value=\x00\x00\x00\x00\x00\x00\x008
alfa:rfilenameext,,1557760164826.35925292c25898671e5a894ce387e167. column=info:server, timestamp=1601269814633, value=ba-wtmp04.asgardalfa.hq.com:16020
alfa:rfilenameext,,1557760164826.35925292c25898671e5a894ce387e167. column=info:serverstartcode, timestamp=1601269814633, value=1601061167123
alfa:rfilenameext,,1557760164826.35925292c25898671e5a894ce387e167. column=info:sn, timestamp=1604388258050, value=ba-wtmp04.asgardalfa.hq.com,16020,1601061167123
alfa:rfilenameext,,1557760164826.35925292c25898671e5a894ce387e167. column=info:state, timestamp=1604388258225, value=CLOSED
alfa:rfilenameext,0,1557760164826.787d1455b84f2d846ce6089392f01fd2. column=info:regioninfo, timestamp=1600969938610, value={ENCODED => 787d1455b84f2d846ce6089392f01fd2, NAME => 'alfa:rfilenameext,0,1557760164826.787d1455b84f2d846ce6089392f01fd2.', STARTKEY => '0', ENDKEY => '1'}
alfa:rfilenameext,0,1557760164826.787d1455b84f2d846ce6089392f01fd2. column=info:seqnumDuringOpen, timestamp=1600780187722, value=\x00\x00\x00\x00\x00\x02^\xBB
alfa:rfilenameext,0,1557760164826.787d1455b84f2d846ce6089392f01fd2. column=info:server, timestamp=1600780187722, value=ba-wtmp08.asgardalfa.hq.com:16020
alfa:rfilenameext,0,1557760164826.787d1455b84f2d846ce6089392f01fd2. column=info:serverstartcode, timestamp=1600780187722, value=1600780162556
alfa:rfilenameext,0,1557760164826.787d1455b84f2d846ce6089392f01fd2. column=info:sn, timestamp=1600969938610, value=ba-wtmp07.asgardalfa.hq.com,16020,1600936054386
alfa:rfilenameext,0,1557760164826.787d1455b84f2d846ce6089392f01fd2. column=info:state, timestamp=1600969938610, value=OPENING
alfa:rfilenameext,1,1557760164826.aa9d89b40a9def31a080fdd1776acb4e. column=info:regioninfo, timestamp=1601060563980, value={ENCODED => aa9d89b40a9def31a080fdd1776acb4e, NAME => 'alfa:rfilenameext,1,1557760164826.aa9d89b40a9def31a080fdd1776acb4e.', STARTKEY => '1', ENDKEY => '2'}
alfa:rfilenameext,1,1557760164826.aa9d89b40a9def31a080fdd1776acb4e. column=info:seqnumDuringOpen, timestamp=1600780186976, value=\x00\x00\x00\x00\x00\x02^\xA9
alfa:rfilenameext,1,1557760164826.aa9d89b40a9def31a080fdd1776acb4e. column=info:server, timestamp=1600780186976, value=dr1-wtmp02.asgardalfa.hq.com:16020
alfa:rfilenameext,1,1557760164826.aa9d89b40a9def31a080fdd1776acb4e. column=info:serverstartcode, timestamp=1600780186976, value=1600780163021
alfa:rfilenameext,1,1557760164826.aa9d89b40a9def31a080fdd1776acb4e. column=info:sn, timestamp=1601060563980, value=ba-wtmp05.asgardalfa.hq.com,16020,1601049145467

 

I have been scanning various sources, but they have not been very clear or relevant. For this problem I just want to remove all references (rows) related to the alfa:rfilenameext table from the hbase:meta table. How this happens is of no importance.

 

However, there are other tables on this cluster which are needed, so I am wary of rebuilding the entire meta table.

 

Apologies in advance... I am a complete HBase newbie and was hoping there was a command such as:

 

delete 'alfa:rfilenameext' from 'hbase:meta'

 

which might serve to remove all rows for that table.
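(As far as I can tell there is no single shell command of that shape; the closest hbase shell equivalent would be to list the matching meta rows and then deleteall each returned row key, sketched here only to show the idea:)

scan 'hbase:meta', {ROWPREFIXFILTER => 'alfa:rfilenameext'}   # list every meta row whose key starts with the table name
deleteall 'hbase:meta', '<row key from the scan above>'       # placeholder; repeat for each returned row key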

1 ACCEPTED SOLUTION

Super Collaborator

Hello @TGH 

 

Thanks for the response. To your queries,

 

(I) HBCK2 has an extraRegionsInMeta command for removing regions from hbase:meta that no longer have any HDFS directories. Running the HBCK2 tool with that command lists the regions in meta which aren't present in HDFS, and adding the fix flag (-f/--fix) removes them as well (see the command sketch after this list).

(II) Using the delete command on hbase:meta isn't an issue as such, yet we generally avoid making manual changes to hbase:meta. It's more of a recommendation, to avoid a manual oversight causing hbase:meta corruption.

(III) We can change the region state via the HBCK2 setRegionState command. Note that the HBCK2 Git page recommends using the command only as a last resort, given its risky nature. If you are aware of the risks associated with the command, you can run it to set the table state or region state.
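For illustration, a rough sketch of the invocations referred to in (I) and (III), reusing the jar path used elsewhere in this thread and the encoded region name from your scan output; treat these as examples rather than prescriptions:

hbase hbck -j /tmp/hbase-hbck2-1.1.0-SNAPSHOT.jar extraRegionsInMeta alfa:rfilenameext          # report only: list extra regions in meta
hbase hbck -j /tmp/hbase-hbck2-1.1.0-SNAPSHOT.jar extraRegionsInMeta alfa:rfilenameext --fix    # also remove them from meta
hbase hbck -j /tmp/hbase-hbck2-1.1.0-SNAPSHOT.jar setRegionState 35925292c25898671e5a894ce387e167 CLOSED   # last resort, per (III)
hbase hbck -j /tmp/hbase-hbck2-1.1.0-SNAPSHOT.jar setTableState alfa:rfilenameext DISABLED                 # last resort, per (III)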

 

- Smarak


12 REPLIES

Super Collaborator

@TGH 

 

No worries. Since you are planning to drop the table anyhow, let's use the following approach:

1. You (or your previous team) have already deleted the table-level directories from HDFS.

2. In hbase:meta there is one row per table region, plus one row for the table itself.

3. Use "get 'hbase:meta','<RegionID RowKey>'". Your team can use the scan output to check the format of the row key for the table concerned, which includes the region ID.

4. After confirming the output from the "get" command, use the "deleteall" command with the same argument to remove each region's row, as shown in the sketch after this list. Finally, remove the table-level row as well.

5. Restart the HBase service to clear the cache as well.
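For example, a minimal sketch of steps 3 and 4 in the hbase shell, using one region row key and the table-level row key from the scan output earlier in this thread (the remaining region rows follow the same pattern):

get 'hbase:meta', 'alfa:rfilenameext,,1557760164826.35925292c25898671e5a894ce387e167.'        # confirm the row is the expected one
deleteall 'hbase:meta', 'alfa:rfilenameext,,1557760164826.35925292c25898671e5a894ce387e167.'  # remove that region's row
# repeat the get/deleteall pair for each remaining region row key
deleteall 'hbase:meta', 'alfa:rfilenameext'                                                   # finally, the table-level (table:state) row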

 

I recommend testing the above on a sample table first (create table > remove region directory > remove meta info > restart HBase > check HBCK output).
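A rough sketch of that dry run on a throwaway table (the table name here is hypothetical; the HDFS path assumes the default rootdir layout):

create 'alfa:hbck_test', 'cf'                   # hypothetical test table in the existing namespace
# from the OS shell, remove its directory from HDFS to mimic the broken state
hdfs dfs -rm -r /hbase/data/alfa/hbck_test
# then remove its rows from hbase:meta with get/deleteall as above,
# restart HBase, and confirm the HBCK output shows no inconsistencies for it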

 

- Smarak

Explorer

OK. It took a bit of a meandering circuit to get a version of operator-tools built and running.

 

After running it on the meta table -

sudo hbase hbck -j /tmp/hbase-hbck2-1.1.0-SNAPSHOT.jar extraRegionsInMeta alfa:rfilenameext --fix

 

I have ended up with:

 

alfa:extgen,F,1608197544264.ba22f6113a0bb9520cee1f7b30050fa7. column=info:state, timestamp=1608197544904, value=OPEN
alfa:rfilenameext column=table:state, timestamp=1604493139455, value=\x08\x00
alfa:rfiles column=table:state, timestamp=1602600776355, value=\x08\x00

 

I still have one row referring to the missing table. Do I need to restart the HBase service to remove it, or will it vanish at some point? The HBase Master UI still shows the 48 regions in transition when I open it. I assume this is because the service has not been restarted.
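For reference, that leftover row is the table-level table:state entry shown in the scan above; per step 4 of the earlier reply, I assume it can be cleared the same way as the region rows:

deleteall 'hbase:meta', 'alfa:rfilenameext'   # remove the remaining table-level row for the dropped table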

 

Super Collaborator

Hello @TGH 

 

Yes. After making any HBCK2 changes, restart the service, as the components hold a cached version of the metadata. Let us know how things go.
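After the restart, a quick way to verify (a rough sketch, run from the OS shell) would be:

echo "scan 'hbase:meta', {ROWPREFIXFILTER => 'alfa:rfilenameext'}" | hbase shell   # should return 0 rows for the dropped table
# the Master UI should then no longer show the 48 regions in transition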

 

- Smarak