
HBase log split is not happening properly when all nodes in the cluster are down

New Contributor

Hi,

The HBase log split is taking too long and the region servers are not coming back online after all the nodes in the cluster went down due to a physical connectivity problem in the switch.

Because of this, the root and meta tables are not accessible, and we cannot access any of the user-created HBase tables.

Please find the HBase logs below for reference:

2015-06-22 15:46:42,161 INFO org.apache.hadoop.hbase.regionserver.wal.HLogSplitter: Processed 0 edits across 0 regions threw away edits for 0 regions; log file=hdfs://CRSCLUSTER/hbase/.logs/rj1hsl3,60020,1433932576753-splitting/rj1hsl3%2C60020%2C1433932576753.1433936178264 is corrupted = false progress failed = false

2015-06-22 15:46:42,161 WARN org.apache.hadoop.hbase.regionserver.SplitLogWorker: log splitting of hdfs://CLUSTER/hbase/.logs/rj1hsl3,60020,1433932576753-splitting/rj1hsl3%2C60020%2C1433932576753.1433936178264 failed, returning error

java.io.IOException: Could not obtain the last block locations.
	at org.apache.hadoop.hdfs.DFSInputStream.openInfo(DFSInputStream.java:224)
	at org.apache.hadoop.hdfs.DFSInputStream.<init>(DFSInputStream.java:198)
	at org.apache.hadoop.hdfs.DFSClient.open(DFSClient.java:1117)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:248)
	at org.apache.hadoop.hdfs.DistributedFileSystem.open(DistributedFileSystem.java:81)
	at org.apache.hadoop.io.SequenceFile$Reader.openFile(SequenceFile.java:1787)
	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.openFile(SequenceFileLogReader.java:62)
	at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1707)
	at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1728)
	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader$WALReader.<init>(SequenceFileLogReader.java:55)
	at org.apache.hadoop.hbase.regionserver.wal.SequenceFileLogReader.init(SequenceFileLogReader.java:177)
	at org.apache.hadoop.hbase.regionserver.wal.HLog.getReader(HLog.java:713)
	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:825)
	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.getReader(HLogSplitter.java:738)
	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:382)
	at org.apache.hadoop.hbase.regionserver.wal.HLogSplitter.splitLogFile(HLogSplitter.java:350)
	at org.apache.hadoop.hbase.regionserver.SplitLogWorker$1.exec(SplitLogWorker.java:115)
	at org.apache.hadoop.hbase.regionserver.SplitLogWorker.grabTask(SplitLogWorker.java:283)
	at org.apache.hadoop.hbase.regionserver.SplitLogWorker.taskLoop(SplitLogWorker.java:214)
	at org.apache.hadoop.hbase.regionserver.SplitLogWorker.run(SplitLogWorker.java:182)
	at java.lang.Thread.run(Thread.java:662)


Version details
---------------
Hadoop 2.0.0-cdh4.3.1
HBase 0.94.6-cdh4.3.1

How can we fix this issue?

Thanks.

1 REPLY

Re: HBase log split is not happening properly when all nodes in the cluster are down

Log splitting is not proceeding because HDFS appears to be down: "Could not obtain the last block locations" means the region server cannot read the WAL files from HDFS. Have you restarted ZooKeeper and HDFS?
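
A minimal sketch of the checks implied above, assuming a CDH4-style deployment with the HBase root directory at /hbase (as in the logs); paths and service names may differ in your environment:

    # Verify HDFS is healthy and all DataNodes have re-registered
    hdfs dfsadmin -report

    # Check whether the WAL files in the splitting directory are still
    # marked open-for-write; "Could not obtain the last block locations"
    # typically means the last block of the file is still under construction
    hdfs fsck /hbase/.logs -openforwrite -files

    # Once HDFS reports healthy, restart the HBase master so the split
    # is retried (service name assumed from CDH4 packaging)
    sudo service hbase-master restart

If fsck still reports the WAL files as open-for-write after the DataNodes are back, lease recovery has not completed yet; the split log worker will keep failing until it can read those files.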
Regards,
Gautam Gopalakrishnan