Support Questions


HBase repair deleted all my tables except one

Champion Alumni

Hello,

 

I had a problem: my job failed because HBase could not find an existing table.

 

I then ran:

 

sudo -u hbase hbase hbck -repair

 

and now all my tables are gone (except one)!

I cannot see my old data in the /hbase folder! Is there a way to recover all of this?

 

Please help!

 

Thank you!

 

GHERMAN Alina
1 ACCEPTED SOLUTION

Master Collaborator

This reply is 13 days late, so I imagine you already have a solution or have moved on. I'm commenting for future searchers.

 

As you have learned the hard way, the only "safe" hbck option is -fixAssignments; every other option is potentially dangerous. That said, I have seen -fixAssignments multiply-assign regions when there are regions in transition. This can be fixed by running -fixAssignments again, or by failing over the HMaster and letting the assignment manager sort it out.
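As a sketch of that safer workflow, assuming an HBase release where hbck is the supported tool (the table name below is hypothetical):

```shell
# Read-only consistency report; makes no changes to the cluster
sudo -u hbase hbase hbck

# Limit the check to a single table (my_table is a hypothetical name)
sudo -u hbase hbase hbck my_table

# The only generally "safe" repair: re-assign regions that are
# unassigned or multiply assigned; it does not touch data or META
sudo -u hbase hbase hbck -fixAssignments
```

If regions are in transition, re-running -fixAssignments once things settle, or failing over the HMaster, is usually enough.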

 

Unfortunately, there isn't a single way the -repair option could have trashed things. HBase has three "brains": the data in HDFS under /hbase (a .regioninfo file for each region), the ZooKeeper data, and the HBase .META. table. Which of these has the problem determines which option is the proper fix, so when you hit an issue, the first step is to figure out which one is incorrect. This is a deep topic, so I've added some helpful links for further reading [1][2].
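One way to compare the three is to inspect each directly. This is a sketch assuming the HBase 0.96+ directory layout and the hbase:meta catalog name (older releases used .META.); the table and namespace names are illustrative:

```shell
# 1. HDFS view: region directories (each holds a .regioninfo file)
hdfs dfs -ls /hbase/data/default/my_table

# 2. ZooKeeper view: HBase's znodes (assignment and transition state)
hbase zkcli ls /hbase

# 3. Catalog view: region boundaries recorded in the meta table
echo "scan 'hbase:meta', {COLUMNS => 'info:regioninfo'}" | hbase shell
```

If the region boundaries in HDFS, ZooKeeper, and the catalog disagree, that tells you which "brain" needs repair.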

 

Since all of your tables disappeared but were presumably readable before, I would assume that holes were detected for basically all tables, based on a previous region state with different split points. HBase then decided to fix it by filling those "holes" with empty regions, deleting the existing regions.

 

To recover from that, you would shut down HBase, move the table files from .Trash in HDFS back to their original location, and run an offline meta repair: hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair. From there, bring HBase back up and troubleshoot any remaining issues. Again, this would only be proper if the assumption above is correct, and it would absolutely be the wrong action if HDFS were corrupt and it was META that was correct.
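Roughly, and only under that assumption, the sequence would look like this. The paths are illustrative; check your actual HDFS trash checkpoint directory for the real ones:

```shell
# Stop HBase first so nothing rewrites META underneath you

# Move the deleted table data back from the HDFS trash
# (the checkpoint path below is illustrative)
sudo -u hbase hdfs dfs -mv \
    /user/hbase/.Trash/Current/hbase/data/default/my_table \
    /hbase/data/default/my_table

# Rebuild META offline from the .regioninfo files in HDFS
sudo -u hbase hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair

# Start HBase again and re-check consistency
sudo -u hbase hbase hbck
```

Note that this only works if HDFS trash was enabled when the regions were deleted; with trash disabled, the region files are gone from HDFS.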

 

 

[1]http://hbase.apache.org/book.html#_region_overlap_repairs
[2]http://www.cloudera.com/documentation/enterprise/5-4-x/topics/admin_hbck_poller.html

 


2 REPLIES


Champion Alumni
Thank you!

Indeed, I recreated all the tables... Since I have trash disabled, I had nothing in .Trash...

However, this is a very complete reply. Thank you!
GHERMAN Alina