Member since: 10-22-2015
Posts: 241
Kudos Received: 86
Solutions: 20
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2469 | 03-11-2018 12:01 AM
 | 1506 | 01-03-2017 10:48 PM
 | 1900 | 12-20-2016 11:11 PM
 | 3704 | 09-03-2016 01:57 AM
 | 1416 | 09-02-2016 04:55 PM
06-13-2016
10:45 PM
Quite often, HBase users face the following exception when submitting job(s) to a secure HBase cluster:

2016-05-27 21:16:06,806 WARN [hconnection-0x6be9ad6c-metaLookup-shared--pool2-t1] org.apache.hadoop.hbase.ipc.AbstractRpcClient: Exception encountered while connecting to the server : javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
2016-05-27 21:16:06,806 FATAL [hconnection-0x6be9ad6c-metaLookup-shared--pool2-t1] org.apache.hadoop.hbase.ipc.AbstractRpcClient: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:212)
    at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:611)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$600(RpcClientImpl.java:156)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:737)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:734)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1657)

The above shows an error setting up the HBase connection. If it happens in the tasks themselves, the job either does not carry an HBase delegation token or the tasks cannot see it. TableMapReduceUtil#initCredentials() should be called before submitting the job.
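A minimal job-setup sketch showing where TableMapReduceUtil#initCredentials() fits (the class and job names below are hypothetical placeholders). The call must run on the client, while a valid Kerberos TGT is available, before the job is submitted, so that an HBase delegation token gets attached to the job credentials for the tasks to use:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.mapreduce.TableMapReduceUtil;
import org.apache.hadoop.mapreduce.Job;

public class SecureHBaseJob {  // hypothetical driver class
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        Job job = Job.getInstance(conf, "secure-hbase-job");

        // ... configure mapper/reducer, input/output formats here ...

        // Obtain an HBase delegation token on the client side, while the
        // Kerberos TGT is still valid, and attach it to the job so that
        // tasks (which have no TGT) can authenticate to HBase.
        TableMapReduceUtil.initCredentials(job);

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

Note that TableMapReduceUtil#initTableMapperJob() and #initTableReducerJob() call initCredentials() internally, so the explicit call matters mainly for jobs that talk to HBase without going through those helpers.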
06-12-2016
02:32 PM
What is the access pattern for the web application? HBase has a BucketCache (off-heap) in addition to the block cache (on-heap). After tuning, HBase can deliver good caching performance; however, few people use it as an in-memory caching solution. HDP currently doesn't support Apache Ignite or Apache Geode.
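If caching turns out to matter for this workload, the off-heap BucketCache is enabled through hbase-site.xml roughly as below (a sketch only; the size value is an illustrative placeholder, not a recommendation, and the region server also needs off-heap memory allocated via HBASE_OFFHEAPSIZE):

```xml
<!-- Illustrative hbase-site.xml fragment: enable off-heap BucketCache. -->
<property>
  <name>hbase.bucketcache.ioengine</name>
  <value>offheap</value>
</property>
<property>
  <!-- Cache size in MB (placeholder value); values below 1.0 are
       instead interpreted as a fraction of the heap. -->
  <name>hbase.bucketcache.size</name>
  <value>4096</value>
</property>
```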
06-11-2016
03:11 PM
If you can share more about your use case, we would be able to provide more specific advice.
06-11-2016
03:11 PM
2 Kudos
You can choose HBase as the storage layer. HBase can easily handle hundreds of columns. Consider grouping the columns that are normally accessed together into the same column family.
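For example, in the HBase shell a table with two column families can be created in one line (the table and family names here are illustrative placeholders; columns read together belong in the same family):

```
hbase> create 'events', 'meta', 'metrics'
```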
06-09-2016
08:59 PM
1 Kudo
See also http://kylin.apache.org/docs15/gettingstarted/best_practices.html
06-09-2016
08:54 PM
1 Kudo
Have you looked at Apache Kylin (which is built on top of HBase)? http://kylin.apache.org/
06-07-2016
04:21 PM
1. Do I need to install a Region Server on all DataNodes?

This depends on a few factors, such as the total number of regions in your cluster. Each region server can host hundreds of regions, so determine the number of region servers accordingly (which may be lower than the number of DataNodes).

2. If I don't install Region Servers on all the DataNodes, what will be the impact?

Some data access (see Josh's answer above) would not be able to utilize short-circuit reads.
06-07-2016
12:30 AM
The default was used. Please bump it to 90 seconds or higher.
06-07-2016
12:16 AM
Please also check the region servers to see if there was a critical error during the copy.