Member since: 07-17-2019
Posts: 738
Kudos Received: 432
Solutions: 111

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1502 | 08-06-2019 07:09 PM |
| | 1730 | 07-19-2019 01:57 PM |
| | 2090 | 02-25-2019 04:47 PM |
| | 2974 | 10-11-2018 02:47 PM |
| | 787 | 09-26-2018 02:49 PM |
09-05-2019 03:53 AM
The scenario you describe is not relevant for HBase. If you want to build some kind of non-Kerberos-based authentication mechanism for HBase, you are welcome to do so. My previous answer is accurate given the authentication mechanisms that currently exist in HBase.
08-08-2019 05:06 PM
Without Kerberos authentication enabled for HBase, any authorization checks you make are pointless: there is no guarantee that the end user is who they claim to be, so there is nothing trustworthy to authorize against. I would focus on getting strong authentication set up before looking further into authorization.
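For reference, a minimal sketch of what a Kerberos-authenticated HBase client looks like. The principal, keytab path, and table name are placeholders, and on a properly secured cluster hbase.security.authentication would already be set in hbase-site.xml rather than in code:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;
import org.apache.hadoop.security.UserGroupInformation;

public class SecureHBaseClient {
  public static void main(String[] args) throws IOException {
    // Standard client configuration; on a secured cluster the hbase-site.xml
    // on the classpath already declares Kerberos authentication. Setting it
    // here is purely illustrative.
    Configuration conf = HBaseConfiguration.create();
    conf.set("hbase.security.authentication", "kerberos");

    // Log in from a keytab; principal and keytab path are placeholders.
    UserGroupInformation.setConfiguration(conf);
    UserGroupInformation.loginUserFromKeytab(
        "appuser@EXAMPLE.COM", "/etc/security/keytabs/appuser.keytab");

    // Once authenticated, every client call carries the Kerberos identity,
    // which is what server-side authorization checks are evaluated against.
    try (Connection connection = ConnectionFactory.createConnection(conf);
         Table table = connection.getTable(TableName.valueOf("my_table"))) {
      table.get(new Get(Bytes.toBytes("row1")));
    }
  }
}
```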
08-07-2019 04:29 PM
I would start by assuming that no service which relies on HDFS can simply use S3 directly. S3Guard can likely bridge the gap for most systems (HBase is an exception), but I cannot tell you the requirements for every service in existence.
08-06-2019 07:09 PM
Blob stores do not have the same semantics as file systems. HBase relies on very specific semantics around concurrency and atomic operations which most blob stores (including S3) do not provide. One example: a move of some "directory" in an S3 bucket is not atomic, whereas it is atomic in HDFS. HBase will absolutely not work correctly if you configure hbase.rootdir to use S3 via the S3A adapter in Hadoop. EMR has proprietary code in its S3 filesystem access layer, distinct from S3A, which somehow does not suffer from this issue.
07-23-2019 01:42 PM
1 Kudo
You are running against a version of Hadoop which does not have the classes that HBase expects to find. I find it very unlikely that you actually have Hadoop 3.1.2 on the HBase classpath. HBase relies on very specific semantics from the underlying filesystem to guarantee no data loss. This warning is telling you that HBase could not perform that automatic check, and that you should investigate to make sure you don't experience data loss going forward.
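If you want to confirm which Hadoop version is actually being loaded, one quick sketch is to print it from a JVM launched with the same classpath HBase uses; this is only an illustration, not something the warning itself asks you to run:

```java
import org.apache.hadoop.util.VersionInfo;

public class HadoopVersionCheck {
  public static void main(String[] args) {
    // VersionInfo reports the hadoop-common build that was actually loaded,
    // which may differ from the version you believe is installed.
    System.out.println("Hadoop version: " + VersionInfo.getVersion());
    // Shows which jar the class came from (may be null in unusual setups).
    System.out.println("Loaded from: "
        + VersionInfo.class.getProtectionDomain().getCodeSource().getLocation());
  }
}
```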
07-19-2019 01:57 PM
HBase 0.94 and 0.95 are extremely old versions; you should not be using them any longer. In general, the client jar versions should exactly match the version of the HBase cluster you are trying to interact with.
07-15-2019 01:25 PM
If you inspect the Mapper log files, you should be able to find mention of an unparseable row when one is processed. You may have to increase the log level from INFO to DEBUG. Each Mapper is assigned an InputSplit, which is a contiguous group of lines from the input files you specified (e.g. fileA, lines 50 through 200). You can also use this information to work backwards.
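As an illustration, here is a Mapper sketch that logs enough context (byte offset, file, split boundaries) to locate a bad row; ParsingMapper, the tab-delimited parsing, and the counter name are assumptions, not taken from your job:

```java
import java.io.IOException;

import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileSplit;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class ParsingMapper extends Mapper<LongWritable, Text, Text, Text> {
  private static final Logger LOG = LoggerFactory.getLogger(ParsingMapper.class);

  @Override
  protected void map(LongWritable key, Text value, Context context)
      throws IOException, InterruptedException {
    try {
      // parseRecord stands in for whatever parsing the real job does.
      String[] fields = parseRecord(value.toString());
      context.write(new Text(fields[0]), new Text(fields[1]));
    } catch (RuntimeException e) {
      // Log the byte offset, the split (file + start/length), and the raw
      // line so the bad record can be found in the original input.
      FileSplit split = (FileSplit) context.getInputSplit();
      LOG.warn("Unparseable row at offset {} in split {} ({}+{}): {}",
          key.get(), split.getPath(), split.getStart(), split.getLength(),
          value, e);
      context.getCounter("parsing", "unparseable_rows").increment(1);
    }
  }

  private String[] parseRecord(String line) {
    String[] fields = line.split("\t");
    if (fields.length < 2) {
      throw new IllegalArgumentException("Expected at least 2 fields");
    }
    return fields;
  }
}
```

The counter shows from the job UI how many bad rows each task hit, and the logged split tells you which file and offset range to inspect.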
06-26-2019 02:51 PM
As my previous comment says, this is a benign warning message; it does not indicate any problem with the system. If you have RegionServers crashing, the problem lies elsewhere. I would suggest you contact support to help identify it if you are having trouble doing so.
06-18-2019 02:27 PM
This is not an error that will cause any kind of problem with your system. RegionServers are known to report the wrong version string: they should report the appropriate HDP-suffixed version string, but do not.
06-06-2019 06:54 PM
Please share the code you are using to run this benchmark. If it uses sensitive data, please reproduce the behavior with non-sensitive data. Without seeing how you are executing the timings, it's nearly impossible to give any meaningful advice.