
HBase - Region in Transition

Explorer

HBase keeps having a region stuck in transition:

 

Regions in Transition

Region: 1588230740
State: hbase:meta,,1.1588230740 state=FAILED_OPEN, ts=Thu Apr 23 12:15:49 ICT 2015 (8924s ago), server=02slave.mabu.com,60020,1429765579823
RIT time (ms): 8924009

Total number of Regions in Transition for more than 60000 milliseconds: 1
Total number of Regions in Transition: 1

 

I've tried "sudo -u hbase hbase hbck -repair" and also "unassign 'hbase:meta,,1.1588230740'", but still can't fix the problem.
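For reference, this is roughly how those two attempts are invoked (a sketch only; the trailing 'true' force flag on unassign is an optional addition, not something from the original post):

# hbck repair, run as the hbase user
sudo -u hbase hbase hbck -repair

# unassign the stuck region from the HBase shell ('true' forces the unassign)
hbase shell
  unassign 'hbase:meta,,1.1588230740', true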

1 ACCEPTED SOLUTION

Community Manager

1. Stop HBase
2. Move your original /hbase back into place
3. Use a zookeeper cli such as "hbase zkcli"[1] and run "rmr /hbase" to delete the HBase znodes
4. Restart HBase. It will recreate the znodes
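
A minimal sketch of those four steps (the service commands and the backup path are assumptions; adjust to however your cluster is managed, e.g. Cloudera Manager):

# 1. Stop HBase (assumption: package-based install managed with service scripts)
sudo service hbase-master stop
sudo service hbase-regionserver stop

# 2. Move the original /hbase back into place in HDFS
#    (assumption: it was previously moved aside to /hbase_backup)
sudo -u hdfs hdfs dfs -mv /hbase_backup /hbase

# 3. Delete the HBase znodes from ZooKeeper
hbase zkcli
  rmr /hbase
  quit

# 4. Restart HBase; it will recreate the znodes on startup
sudo service hbase-master start
sudo service hbase-regionserver start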

 

If HBase fails to start after this, you can always try the offline Meta repair:
hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair
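
A sketch of how that is usually invoked (running it as the hbase user is an assumption here, but it avoids the root-owned meta directory problem described further down this thread):

# Run the offline meta rebuild as the hbase user, with HBase stopped
sudo -u hbase hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair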

 

Also check for inconsistencies after HBase is up. As the hbase user, run "hbase hbck -details"[2]. If inconsistencies are reported, I would normally use the "ERROR" messages from the hbck output to decide on the best repair method, but since you are willing to start over, just run "hbase hbck -repair".
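
For reference, the check-then-repair sequence looks like this (a sketch; which specific -fix options hbck really needs depends on the ERRORs it reports):

# Check consistency as the hbase user and review any ERROR lines
sudo -u hbase hbase hbck -details

# If you are happy to let hbck attempt all repairs automatically
sudo -u hbase hbase hbck -repair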

 

If that also fails, fall back to the offline Meta repair mentioned above.


[1] http://hbase.apache.org/book.html#trouble.tools
[2] http://hbase.apache.org/book.html#hbck.in.depth



David Wilder, Community Manager



18 REPLIES

Expert Contributor

When you run OfflineMetaRepair, you will most likely run it as your own user or as root. You may then get opaque errors like "java.lang.AbstractMethodError: org.apache.hadoop.hbase.ipc.RpcScheduler.getWriteQueueLength()".

 

If you check in HDFS, you may see that the meta directory is no longer owned by hbase:

 

$ hdfs dfs -ls /hbase/data/hbase/
Found 2 items
drwxr-xr-x   - root  hbase          0 2017-09-12 13:58 /hbase/data/hbase/meta
drwxr-xr-x   - hbase hbase          0 2016-06-15 15:02 /hbase/data/hbase/namespace

Manually running chown -R on it and restarting HBase fixed it for me.
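
A sketch of that fix, assuming the meta directory ended up owned by root as in the listing above (run as the HDFS superuser, then restart HBase through your usual tooling):

# Give ownership of the meta directory back to the hbase user
sudo -u hdfs hdfs dfs -chown -R hbase:hbase /hbase/data/hbase/meta

# Verify the ownership change, then restart HBase
hdfs dfs -ls /hbase/data/hbase/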

 

 

Contributor

I have done 4 different upgrades (on 4 different clusters) and I get this error every time. I have to wipe out /hbase and lose the data, which defeats the whole point of doing an upgrade.

 

There must be a step missing from the upgrade instructions; I have followed them each time.

 

I tried your solution here, and it doesn't work. 

 

When I try to restart HBase, the master fails and I get this:

 

 

Failed to become active master
org.apache.hadoop.hbase.util.FileSystemVersionException: HBase file layout needs to be upgraded. You have version null and I want version 8. Consult http://hbase.apache.org/book.html for further information about upgrading HBase. Is your hbase.rootdir valid? If so, you may need to run 'hbase hbck -fixVersionFile'.

 

I ran "hbase hbck -fixVersionFile" and it gets stuck on:

5/09/28 20:51:58 INFO client.RpcRetryingCaller: Call exception, tries=14, retries=35, started=128696 ms ago, cancelled=false, msg=

 

 

New Contributor

If you delete the /hbase directory in zookeeper, you might be able to keep the data.

Contributor

"If you delete the /hbase directory in zookeeper, you might be able to keep the data."

 

Thanks for the response. I am not sure how to delete this just in ZooKeeper. Is there a command for that?

Rising Star

You have to use the command line.

Should be something like this:

 

# Start the command line and connect to any of the zk servers
# (if you are not using CDH, the command is zkCli.sh)
# If your cluster is Kerberized you need to kinit first, otherwise the delete will fail
zookeeper-client -server localhost:2181

# Once in the shell, run this to delete the directory with the metadata
rmr /hbase

 

Guru

Hey everyone, this is a great thread and I might be showing my "HBase age" here with old advice, but unless something has changed in recent versions of HBase, you cannot use these steps if you are using HBase replication.  

 

The replication counter, which stores the progress of your synchronization between clusters, is kept as a znode under /hbase/replication in ZooKeeper, so you'll completely blow away your replication state if you do an "rmr /hbase".
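
A quick way to check whether this applies to you before wiping anything (a sketch; zookeeper-client is the CDH wrapper, vanilla ZooKeeper ships zkCli.sh instead):

# Look for replication state before deleting /hbase
zookeeper-client -server localhost:2181
  ls /hbase/replication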

 

Please be super careful with these instructions. And to answer @Amanda's question in this thread about why this happens with each upgrade, this RIT problem usually appears if HBase was not cleanly shut down. Maybe you're trying to upgrade or move things around while HBase is still running?

Contributor

Hi there Clint,

 

What would you suggest be done when HBase gets a region stuck in transition? I am all ears!

 

Thanks! 

 

Amanda 

Guru

Well, it's been a couple of years since I supported HBase, but what we used to do is delete all the znodes in the /hbase directory in ZK EXCEPT for the /hbase/replication dir. You just have to be a little more surgical with what you're deleting in that RIT situation, IF you're using the HBase replication feature to back your cluster up to a secondary cluster. If not, the previous advice is fine.
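
A rough sketch of that more surgical approach in the ZooKeeper shell (the child znode names below are examples only and vary by HBase version; list them first and delete everything except /hbase/replication):

# Connect and see what lives under /hbase
zookeeper-client -server localhost:2181
  ls /hbase

# Delete each child EXCEPT /hbase/replication, for example:
  rmr /hbase/region-in-transition
  rmr /hbase/meta-region-server
  rmr /hbase/table
  # ...and so on for the remaining children, but leave /hbase/replication alone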

 

Ultimately, regions should not get stuck in transition, though. What version of HBase are you running? We used to have tons of bugs in older versions that would cause this situation, but those should have been resolved long ago.

Rising Star

Nowadays there is a "clean" operation in the shell admin utilities that can be used to remove the data files, the ZK data, or both.

I would guess that tool takes what you are pointing out into consideration.
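
For reference, a sketch of that utility (the flag names are from memory and may differ in your release; running "hbase clean" with no arguments prints the usage):

# Destructive; intended to be run while the cluster is stopped
hbase clean --cleanZk     # remove HBase data from ZooKeeper only
hbase clean --cleanHdfs   # remove HBase data from HDFS only
hbase clean --cleanAll    # remove both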