Member since: 04-22-2016
Posts: 67
Kudos Received: 6
Solutions: 2

My Accepted Solutions

Title | Views | Posted
---|---|---
 | 5357 | 11-14-2017 11:43 AM
 | 1913 | 10-21-2016 05:14 AM
12-06-2016
11:52 AM
Rather than the Cloudera JDBC driver, use the drivers Hortonworks provides at http://hortonworks.com/downloads/#data-platform
11-15-2016
10:38 AM
Thanks for the info. I've spoken to my manager, and we're going to upgrade to 2.4.3.
11-14-2016
12:38 PM
1 Kudo
I am running a cluster with HDP 2.4.2 and HBase 1.1.2. I frequently (about once a day) see region server failures, and sometimes several servers fail at once. I have looked in the logs, and a common cause of failure is the following error:

java.lang.NullPointerException
at org.apache.hadoop.hbase.regionserver.HRegion.getOldestHfileTs(HRegion.java:1633)
at org.apache.hadoop.hbase.regionserver.HRegionServer.createRegionLoad(HRegionServer.java:1465)
at org.apache.hadoop.hbase.regionserver.HRegionServer.buildServerLoad(HRegionServer.java:1189)
at org.apache.hadoop.hbase.regionserver.HRegionServer.tryRegionServerReport(HRegionServer.java:1132)
at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:949)
at java.lang.Thread.run(Thread.java:745)
This seems to be related to the following JIRA: https://issues.apache.org/jira/browse/HBASE-14798. That JIRA provides a patch as far back as 1.1.4, but since we are running 1.1.2 the patch does not apply to our version. Does anybody know how I can avoid or fix this problem on this version of HBase? Thanks in advance.
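To make the failure mode concrete: the NPE happens while the region server computes the oldest HFile timestamp for its load report, apparently because a store file's reader can be closed concurrently. The class and method below are made up for illustration and are not HBase's actual code; they are a minimal, self-contained model of the kind of null guard the HBASE-14798 fix introduces.

```java
import java.util.Arrays;
import java.util.List;

public class OldestHfileTs {
    // Each element models one store file's max timestamp; a null element
    // models a store file whose reader has already been closed (the case
    // that triggers the NPE in the unpatched getOldestHfileTs()).
    static long oldestTimestamp(List<Long> readerTimestamps) {
        long oldest = Long.MAX_VALUE;
        for (Long ts : readerTimestamps) {
            if (ts == null) {
                continue; // the defensive guard: skip closed readers
            }
            oldest = Math.min(oldest, ts);
        }
        return oldest;
    }

    public static void main(String[] args) {
        // Without the null check, the second element would throw an NPE here.
        System.out.println(oldestTimestamp(Arrays.asList(5L, null, 3L))); // prints 3
    }
}
```

Since 1.1.2 predates the fix, the practical options are upgrading to a patched release or backporting that guard yourself.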
Labels:
- Apache HBase
10-21-2016
05:14 AM
@Josh Elser It wasn't actually giving me any clear errors or exceptions, simply hanging. However, I successfully fixed it by simplifying my scan as below:

Scan scan = new Scan(startRowString.getBytes(), endRowString.getBytes());
scan.addFamily("f1".getBytes());
ResultScanner scanner = table.getScanner(scan);
for (Result r : scanner) {...}

Based on that, the problem seemed to be on the HBase side rather than Storm.
10-13-2016
08:51 AM
I am writing a Storm topology to read data from HBase using DRPC. Essentially it performs a scan to get data, enriches the data, and returns it. I can easily get a basic DRPC example working (based on http://storm.apache.org/releases/current/Distributed-RPC.html). However, when I insert the code for the scan, the process takes a very long time, and after a minute I get the following error:

backtype.storm.generated.DRPCExecutionException
at backtype.storm.daemon.drpc$service_handler$reify__8688.failRequest(drpc.clj:136) ~[storm-core-0.10.0.2.4.2.0-258.jar:0.10.0.2.4.2.0-258]
at backtype.storm.drpc.DRPCSpout.fail(DRPCSpout.java:241) ~[storm-core-0.10.0.2.4.2.0-258.jar:0.10.0.2.4.2.0-258]

A short while later, I get org.apache.hadoop.hbase.client.RetriesExhaustedException. This doesn't always happen, but it is very common. Based on this, my assumption is one of two possibilities:

1. The scan is timing out. However, performing the scan through the HBase shell or REST returns in less than a second.
2. The table is inconsistent, causing a certain region to be missing. However, I have run hbase hbck and it shows 0 inconsistencies.

I know that the connection to HBase is fine: I have added debugging output and the bolt gets the results. However, due to the DRPCExecutionException, those results are never returned over DRPC. I thought the issue was the DRPC timeout, but I have increased the DRPC timeout a lot and I get the same result in the same amount of time. After Googling I found someone else with the same issue (http://stackoverflow.com/questions/35940623/stormdrpc-request-failed), but there is no indication of how to fix this. For reference, I am adding my code below:

try (Table table = HbaseClient.connection().getTable(TableName.valueOf("EPG_URI")))
{
    List<Filter> filters = new ArrayList<>();
    String startRowString = "start";
    String endRowString = "end";
    RowFilter startRow = new RowFilter(CompareFilter.CompareOp.GREATER_OR_EQUAL, new BinaryPrefixComparator(startRowString.getBytes()));
    filters.add(startRow);
    RowFilter endRow = new RowFilter(CompareFilter.CompareOp.LESS_OR_EQUAL, new BinaryPrefixComparator(endRowString.getBytes()));
    filters.add(endRow);
    FilterList filterList = new FilterList(FilterList.Operator.MUST_PASS_ALL, filters);
    Scan scan = new Scan();
    scan.addFamily("f1".getBytes());
    scan.setFilter(filterList);
    ResultScanner scanner = table.getScanner(scan);
    for (Result result : scanner)
    {
        hbaseValues.add(result);
    }
}
Thanks in advance for the help.
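If possibility 1 (the scan timing out) is in play, the DRPC timeout is not the only knob: the HBase client has its own scanner and RPC timeouts, which would need to be raised in the hbase-site.xml visible on the Storm workers' classpath. A sketch, with illustrative values only:

```xml
<!-- Illustrative values; tune for your cluster and workload -->
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>120000</value> <!-- ms a scanner RPC may run before timing out -->
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <value>120000</value> <!-- ms for a single HBase RPC -->
</property>
```

Note that the accepted answer in this thread ended up avoiding the filter-based scan entirely, which sidestepped the timeout.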
Labels:
- Apache HBase
- Apache Storm
07-18-2016
04:37 AM
I had already tried hbase hbck -repair as well as -repairHoles before posting the question, with no success. We had some problems with HDFS preceding this issue: HDFS reported itself as healthy, but it had previously been corrupt, and I believe this was the underlying cause. We now have HBase stable again. I added a comment to the accepted answer explaining how I solved the issue on my side. Thanks for the help.
07-15-2016
09:36 AM
@Rajeshbabu Chintaguntla Thank you so much for your help. The FileNotFound was referencing a different region from the one it was loading, and the issue was due to reference files. I moved each of these directories out of /apps/hbase (there were only four, so it was easy). After that I ran OfflineMetaRepair. Once I started HBase, it loaded every region as it should. As a precaution I ran hbase hbck -repair and hbase hbck -repairHoles after this, and everything is fine now. Data is available for both reading and writing, and there are no regions in transition. Once again, thank you for your help.
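For anyone following the same procedure, the steps described above look roughly like this as commands. The paths and the region directory name are placeholders for illustration, not a copy-paste fix; run this only with HBase stopped, and keep the moved directories until you have verified the cluster is healthy.

```shell
# 1. With HBase stopped, move the affected region directories (the ones
#    containing stale reference files) out of HBase's root directory:
hdfs dfs -mkdir /tmp/hbase-quarantine
hdfs dfs -mv /apps/hbase/data/data/default/MyTable/<region-dir> /tmp/hbase-quarantine/

# 2. Rebuild hbase:meta from what remains on HDFS:
hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair

# 3. Start HBase, then verify and repair any leftover inconsistencies:
hbase hbck -repair
hbase hbck -repairHoles
```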
07-15-2016
08:26 AM
I have an 8-node cluster running HDP 2.4. I currently have 4 regions on a large table that are stuck in the FAILED_OPEN state. When I check the logs for the region servers, I see a FileNotFoundException, indicating (I believe) that the HFile does not exist. I have tried an OfflineMetaRepair in order to remove the entries, but this did not help. The directories for these regions exist, but they do not contain any data. Can anybody suggest a way to repair this? If I need to perform manual surgery on the META table, can someone guide me to do this correctly?
Labels:
- Apache HBase
07-14-2016
11:22 AM
You inadvertently solved my problem. I had not seen that the HBase Master tells you which server it is trying to assign the region to. I pulled up the region server logs and found the following line: org.apache.hadoop.security.AccessControlException: Permission denied. We had mistakenly changed the owner of /apps/hbase to hdfs, meaning that the hbase user could not write. We ran hdfs dfs -chown -R hbase /apps/hbase, and the regions are now being assigned correctly. Really appreciate your help. For what it's worth, we're running HDP 2.4.
07-14-2016
10:41 AM
I have an HBase cluster which is having a major problem at the moment. My namespace and META tables appear to be working correctly. However, the regions for my table are not being deployed on region servers. Instead they become stuck in the FAILED_OPEN state, often for longer than 20 minutes. Since they are classed as regions in transition, balancing fails and cannot help. I have searched the logs and there doesn't seem to be anything useful. I have tried the following:

hbase hbck -repair table_name
hbase hbck -repairHoles table_name
hbase hbck -fixMeta -fixAssignment table_name
assign region_name | hbase shell

None of these has helped. I have checked that HDFS is not corrupt; hdfs fsck / says it is healthy. When running hbase hbck -details table_name, the only inconsistencies listed are that the regions are not deployed. I saw a recommendation online and followed it, doing the following:

1. Stop HBase
2. Use a ZooKeeper CLI and run "rmr /hbase" to delete the HBase znodes
3. Run OfflineMetaRepair
4. Restart HBase; it will recreate the znodes

This still does not solve my problem. Is there anything more that anybody can suggest? I don't want to truncate the tables, since we have over 2 months' worth of data which we need to keep.
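For reference, written out as commands those four steps look roughly like the following sketch. The znode path is an assumption: on some HDP installs the parent znode is /hbase-unsecure rather than /hbase, so check zookeeper.znode.parent in hbase-site.xml first. This is a destructive procedure; the znodes hold only transient state, but make sure HBase is fully stopped before deleting them.

```shell
# 1. Stop HBase (via Ambari or the service scripts).

# 2. Delete HBase's znodes using the bundled ZooKeeper CLI
#    (parent znode path depends on zookeeper.znode.parent):
hbase zkcli rmr /hbase

# 3. Rebuild hbase:meta offline from the region directories on HDFS:
hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair

# 4. Restart HBase; the znodes are recreated on startup.
```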
Labels:
- Apache HBase