Member since: 06-18-2018
Posts: 12
Kudos Received: 1
Solutions: 0
02-24-2021
02:45 AM
That is not a solution. The real solution would be to remove the paywall for open-source software!
02-23-2021
04:51 AM
I've been trying to get information on how to obtain a subscription for over three weeks. I received an email from the sales team with an appointment for a call, but no one called. Since then I have written to the contact person several times and received no answer. What's going on at Cloudera?
Labels:
- Cloudera Manager
02-16-2021
02:02 AM
Maybe this helps: https://issues.apache.org/jira/browse/AMBARI-25617
02-15-2021
05:36 AM
Unfortunately, that doesn't answer my question. I had already contacted the sales department but have not received any feedback so far.
02-13-2021
03:03 PM
Hi, I've been using CDH 6.3.2 for several years. For this I now need a subscription. I contacted support a few days ago, but there has been no answer so far. How long does it take to get access to the private repository?
With best regards, buddelflinktier
08-26-2020
08:19 AM
Hello, no, this does not answer my question. My table contains holes and overlaps, and these errors can only be repaired with hbck2 fixMeta. This function is only available from HBase version 2.1.6 or 2.2.1 onward. See also the HBCK2 documentation for fixMeta:
Do a server-side fix of bad or inconsistent state in hbase:meta.
Available in hbase 2.2.1/2.1.6 or newer versions. Master UI has
matching, new 'HBCK Report' tab that dumps reports generated by
most recent run of _catalogjanitor_ and a new 'HBCK Chore'. It
is critical that hbase:meta first be made healthy before making
any other repairs. Fixes 'holes', 'overlaps', etc., creating
(empty) region directories in HDFS to match regions added to
hbase:meta. Command is NOT the same as the old _hbck1_ command
named similarily. Works against the reports generated by the last
catalog_janitor and hbck chore runs. If nothing to fix, run is a
noop. Otherwise, if 'HBCK Report' UI reports problems, a run of
fixMeta will clear up hbase:meta issues. See 'HBase HBCK' UI
for how to generate new report.
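For reference, on a cluster that already runs 2.1.6 / 2.2.1 or newer, the call I am trying to get working looks roughly like this, a minimal sketch (the jar name is from my local build of hbase-operator-tools; adjust to your build):
# HBCK2 fixMeta requires a Master at HBase 2.1.6 / 2.2.1 or newer
hbase hbck -j hbase-hbck2-1.1.0-SNAPSHOT.jar fixMeta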
08-20-2020
09:27 AM
Hello, how can I update HBase 2.1.0 (CDH 6.3.2) to a version that fully supports HBCK2 (for example 2.1.6)? I can't repair an inconsistent table ("There is an overlap in the region chain."). The old HBCK doesn't work with 2.1.0, and HBCK2 from "hbase-operator-tools" does not work with 2.1.0 either.
Best regards, buddelflinktier
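For context, this is how I check which HBase build is actually installed on the nodes, a minimal sketch (plain HBase CLI, nothing CDH-specific assumed):
# prints the HBase version bundled with the parcel, e.g. 2.1.0-cdh6.3.2
hbase version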
Labels:
- Apache HBase
- Cloudera Manager
03-16-2020
02:46 AM
1 Kudo
Hello, I use CDH 6.3.2 with HBase 2.1.0-cdh6.3.2.
"hbase hbck" reports: "ERROR: There is a hole in the region chain between .... You need to create a new .regioninfo and region dir in hdfs to plug the hole."
"hbase hbck -fixMeta" reports: "ERROR: option '-fixMeta' is not supportted!"
"hbase hbck -j hbase-hbck2-1.1.0-SNAPSHOT.jar fixMeta" reports: "java.lang.UnsupportedOperationException: fixMeta not supported on server version=2.1.0-cdh6.3.2; needs at least a server that matches or exceeds [2.0.6, 2.1.6, 2.2.1, 2.3.0, 3.0.0]"
Skipping the HBase version check doesn't help either; it reports "Exception in thread "main" java.lang.NoSuchMethodError: org.apache.hadoop.hbase.client.Hbck.fixMeta()V".
How can I repair this table?
-----------------
buddelflinktier
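For anyone looking at the same problem, this is how I inspect the region boundaries to locate the hole, a minimal sketch ('mytable' is a placeholder for the affected table):
# list the regioninfo entries for the table; a hole shows up where an
# ENDKEY does not match the STARTKEY of the following region
echo "scan 'hbase:meta', {COLUMNS => ['info:regioninfo'], ROWPREFIXFILTER => 'mytable,'}" | hbase shell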
Labels:
- Apache HBase
- Cloudera Manager
07-08-2019
12:53 AM
Hello, I have CDH 5.16.2 with HBase 1.2.0 and a new cluster with CDH 6.2.0 with HBase 2.1.0. I would like to use CopyTable to copy the data into the new cluster. The YARN containers fail with this error: ERROR [main] org.apache.hadoop.hbase.client.AsyncProcess: Failed to get region location
org.apache.hadoop.hbase.exceptions.UnknownProtocolException: org.apache.hadoop.hbase.exceptions.UnknownProtocolException: Is this a pre-hbase-1.0.0 or asynchbase client? Client is invoking getClosestRowBefore removed in hbase-2.0.0 replaced by reverse Scan.
at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2452)
at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41998)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
at sun.reflect.GeneratedConstructorAccessor6.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:95)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRemoteException(ProtobufUtil.java:332)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1598)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegionInMeta(ConnectionManager.java:1418)
at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.locateRegion(ConnectionManager.java:1218)
at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:410)
at org.apache.hadoop.hbase.client.AsyncProcess.submit(AsyncProcess.java:359)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.backgroundFlushCommits(BufferedMutatorImpl.java:244)
at org.apache.hadoop.hbase.client.BufferedMutatorImpl.close(BufferedMutatorImpl.java:169)
at org.apache.hadoop.hbase.mapreduce.TableOutputFormat$TableRecordWriter.close(TableOutputFormat.java:120)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.close(MapTask.java:676)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:799)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: org.apache.hadoop.hbase.ipc.RemoteWithExtrasException(org.apache.hadoop.hbase.exceptions.UnknownProtocolException): org.apache.hadoop.hbase.exceptions.UnknownProtocolException: Is this a pre-hbase-1.0.0 or asynchbase client? Client is invoking getClosestRowBefore removed in hbase-2.0.0 replaced by reverse Scan.
at org.apache.hadoop.hbase.regionserver.RSRpcServices.get(RSRpcServices.java:2452)
at org.apache.hadoop.hbase.shaded.protobuf.generated.ClientProtos$ClientService$2.callBlockingMethod(ClientProtos.java:41998)
at org.apache.hadoop.hbase.ipc.RpcServer.call(RpcServer.java:413)
at org.apache.hadoop.hbase.ipc.CallRunner.run(CallRunner.java:130)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:324)
at org.apache.hadoop.hbase.ipc.RpcExecutor$Handler.run(RpcExecutor.java:304)
at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1272)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:227)
at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:336)
at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.get(ClientProtos.java:34070)
at org.apache.hadoop.hbase.protobuf.ProtobufUtil.getRowOrBefore(ProtobufUtil.java:1594)
How can I migrate the data (about 80 TB)?
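For completeness, the job is started on the old CDH 5.16.2 cluster roughly like this, a minimal sketch (ZooKeeper hosts and table name are placeholders for my setup):
# CopyTable run from the 5.16.2 cluster, writing to the new cluster via its ZooKeeper quorum
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --peer.adr=new-zk1,new-zk2,new-zk3:2181:/hbase \
  mytable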
Labels:
- Apache HBase