
Beeline Kerberos connection error on QuickStart VM

New Contributor

The setup:

- Host is macOS

- Cloudera QuickStart VM (192.168.99.100) with hostname "quickstart.cloudera"

- Another CentOS VM (192.168.99.101) with hostname "osboxes"

 

On the QuickStart VM, I am able to run Beeline and view the default database with the command below. I do a kinit first, followed by this command:

 

> beeline -u "jdbc:hive2://quickstart.cloudera:10000/default;principal=hive/quickstart.cloudera@CLOUDERA;auth=kerberos"
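
For completeness, a rough sketch of the kinit step (the principal name here is illustrative, not necessarily the one configured on the VM):

kinit cloudera@CLOUDERA     # obtain a TGT before connecting; substitute the actual principal
klist                       # confirm the ticket is present in the cache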

 

 

On the CentOS VM:

- My krb5.conf at /etc/krb5.conf:

[libdefaults]
 default_realm = CLOUDERA
 dns_lookup_kdc = false
 dns_lookup_realm = false
 ticket_lifetime = 86400
 renew_lifetime = 604800
 forwardable = true
 default_tgs_enctypes = aes256-cts-hmac-sha1-96
 default_tkt_enctypes = aes256-cts-hmac-sha1-96
 permitted_enctypes = aes256-cts-hmac-sha1-96
 udp_preference_limit = 1
 kdc_timeout = 3000

[realms]
 CLOUDERA = {
     kdc = quickstart.cloudera
     admin_server = quickstart.cloudera
 }

[domain_realm]

 

In my /etc/hosts, I have pointed to the QuickStart VM:

192.168.99.100 quickstart.cloudera

 

Step 1) I create a principal for the CentOS VM user. On the QuickStart VM I run:

[cloudera@quickstart ~]$ sudo kadmin 
Authenticating as principal cloudera-scm/admin@CLOUDERA with password.
Password for cloudera-scm/admin@CLOUDERA: 
kadmin:  addprinc sc@CLOUDERA
WARNING: no policy specified for sc@CLOUDERA; defaulting to no policy
Enter password for principal "sc@CLOUDERA": 
Re-enter password for principal "sc@CLOUDERA": 
Principal "sc@CLOUDERA" created.
kadmin:  q

 

Step 2) I do a kinit from the CentOS VM:

[sc@osboxes apache-hive-2.1.1-bin]$ kinit sc@CLOUDERA
Password for sc@CLOUDERA: 
[sc@osboxes apache-hive-2.1.1-bin]$ klist
Ticket cache: FILE:/tmp/krb5cc_1001
Default principal: sc@CLOUDERA

Valid starting       Expires              Service principal
11/16/2017 07:18:32  11/17/2017 07:18:32  krbtgt/CLOUDERA@CLOUDERA
	renew until 11/23/2017 07:18:32

Step 3) I have downloaded the Beeline binaries onto the CentOS VM and made no changes to the conf files. I run:

[sc@osboxes apache-hive-2.1.1-bin]$ ./bin/beeline -u "jdbc:hive2://quickstart.cloudera:10000/default;principal=hive/quickstart.cloudera@CLOUDERA;auth=kerberos"
which: no hbase in (/usr/local/apache-maven/bin:/usr/local/maven/bin:/usr/local/ant/bin:/usr/local/gradle/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/sc/.local/bin:/home/sc/bin)
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/sc/apache-hive-2.1.1-bin/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/lib/zookeeper/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Connecting to jdbc:hive2://quickstart.cloudera:10000/default;principal=hive/quickstart.cloudera@CLOUDERA;auth=kerberos
17/11/16 07:26:03 [main]: ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211) ~[?:1.7.0_131]
        at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94) ~[hive-exec-2.1.1.jar:2.1.1]
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271) [hive-exec-2.1.1.jar:2.1.1]
        at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37) [hive-exec-2.1.1.jar:2.1.1]
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52) [hive-exec-2.1.1.jar:2.1.1]
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49) [hive-exec-2.1.1.jar:2.1.1]
        at java.security.AccessController.doPrivileged(Native Method) ~[?:1.7.0_131]
        at javax.security.auth.Subject.doAs(Subject.java:421) [?:1.7.0_131]
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1917) [hadoop-common-2.6.0-cdh5.12.1.jar:?]
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49) [hive-exec-2.1.1.jar:2.1.1]
        at org.apache.hive.jdbc.HiveConnection.openTransport(HiveConnection.java:227) [hive-jdbc-2.1.1.jar:2.1.1]
        at org.apache.hive.jdbc.HiveConnection.<init>(HiveConnection.java:182) [hive-jdbc-2.1.1.jar:2.1.1]
        at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:107) [hive-jdbc-2.1.1.jar:2.1.1]
        at java.sql.DriverManager.getConnection(DriverManager.java:571) [?:1.7.0_131]
        at java.sql.DriverManager.getConnection(DriverManager.java:187) [?:1.7.0_131]
        at org.apache.hive.beeline.DatabaseConnection.connect(DatabaseConnection.java:145) [hive-beeline-2.1.1.jar:2.1.1]
        at org.apache.hive.beeline.DatabaseConnection.getConnection(DatabaseConnection.java:209) [hive-beeline-2.1.1.jar:2.1.1]
        at org.apache.hive.beeline.Commands.connect(Commands.java:1469) [hive-beeline-2.1.1.jar:2.1.1]
        at org.apache.hive.beeline.Commands.connect(Commands.java:1364) [hive-beeline-2.1.1.jar:2.1.1]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_131]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_131]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_131]
        at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_131]
        at org.apache.hive.beeline.ReflectiveCommandHandler.execute(ReflectiveCommandHandler.java:54) [hive-beeline-2.1.1.jar:2.1.1]
        at org.apache.hive.beeline.BeeLine.execCommandWithPrefix(BeeLine.java:1104) [hive-beeline-2.1.1.jar:2.1.1]
        at org.apache.hive.beeline.BeeLine.dispatch(BeeLine.java:1143) [hive-beeline-2.1.1.jar:2.1.1]
        at org.apache.hive.beeline.BeeLine.initArgs(BeeLine.java:783) [hive-beeline-2.1.1.jar:2.1.1]
        at org.apache.hive.beeline.BeeLine.begin(BeeLine.java:862) [hive-beeline-2.1.1.jar:2.1.1]
        at org.apache.hive.beeline.BeeLine.mainWithInputRedirection(BeeLine.java:502) [hive-beeline-2.1.1.jar:2.1.1]
        at org.apache.hive.beeline.BeeLine.main(BeeLine.java:485) [hive-beeline-2.1.1.jar:2.1.1]
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.7.0_131]
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[?:1.7.0_131]
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.7.0_131]
        at java.lang.reflect.Method.invoke(Method.java:606) ~[?:1.7.0_131]
        at org.apache.hadoop.util.RunJar.run(RunJar.java:221) [hadoop-common-2.6.0-cdh5.12.1.jar:?]
        at org.apache.hadoop.util.RunJar.main(RunJar.java:136) [hadoop-common-2.6.0-cdh5.12.1.jar:?]
Caused by: org.ietf.jgss.GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147) ~[?:1.7.0_131]
        at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:121) ~[?:1.7.0_131]
        at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187) ~[?:1.7.0_131]
        at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:223) ~[?:1.7.0_131]
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212) ~[?:1.7.0_131]
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179) ~[?:1.7.0_131]
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192) ~[?:1.7.0_131]
        ... 35 more
17/11/16 07:26:03 [main]: WARN jdbc.HiveConnection: Failed to connect to quickstart.cloudera:10000
Unknown HS2 problem when communicating with Thrift server.
Error: Could not open client transport with JDBC Uri: jdbc:hive2://quickstart.cloudera:10000/default;principal=hive/quickstart.cloudera@CLOUDERA;auth=kerberos: GSS initiate failed (state=08S01,code=0)
Beeline version 2.1.1 by Apache Hive

 

What am I doing wrong here that I am not able to connect from the CentOS VM?

 

 

5 REPLIES

New Contributor

I have the same problem on my cluster. I tried to connect to Hive via Beeline with the following:

 

!connect jdbc:hive2://hostname:10000/default;ssl=true;sslTrustStore=/opt/cloudera/security/jks/cm.truststore;trustStorePassword=......;principal=hive/hostname@realm oracle org.apache.hive.jdbc.HiveDriver

 

But I am getting the error below: "Unknown HS2 problem when communicating with Thrift server." Did you find a solution for this problem?

 

Br,

Sercan

Master Guru

@scld,

 

Since you are using AES256, the most likely reason Beeline cannot find any TGT is that the Unlimited Strength JCE policy files are not installed in the JDK that Beeline is using.
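
One quick way to check (a sketch, assuming jrunscript from the same JDK Beeline uses is on the PATH) is to print the maximum AES key length the JCE policy permits: 128 means the unlimited policy files are missing, 2147483647 means they are installed.

# prints the maximum AES key length permitted by this JDK's JCE policy
jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'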

 

Also, why did you download the binaries? Beeline ships with CDH.

 

@srcnblgc,

 

Please show us the full command you are using and the output as it appears on the screen.  It is hard to tell what you are doing/seeing.

 

Check the HiveServer2 logs, as well, to see if there are errors or exceptions when you are having difficulty connecting.
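
For reference, on a Cloudera Manager-managed host the HiveServer2 log is typically under /var/log/hive/ (the exact file name varies per host), so something like the following can be left running while the failing connection is reproduced:

# tail the HiveServer2 log while reproducing the failing connection (path is illustrative)
tail -f /var/log/hive/hadoop-cmf-hive-HIVESERVER2-$(hostname -f).log.out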

New Contributor

Hi,

Basically, I am trying to connect to Hive in a Kerberos-enabled environment. Here are all the details I am using:

 

Here I have the keytab:

 

[root@bdw1n10 keytabs]# ls -lrt
total 8
-r--r----- 1 hdfs hadoop 1024 Feb 13 13:35 http_secret
-rw------- 1 root root 496 Feb 20 13:32 hive.keytab
[root@bdw1n10 keytabs]#

Here I have a valid ticket:

 

 

[root@bdw1n10 keytabs]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hive/bdw1n10.bnet.luxds.net@INTBDA.BIL.COM
Valid starting Expires Service principal
02/20/18 13:32:30 02/21/18 13:32:30 krbtgt/INTBDA.BIL.COM@INTBDA.BIL.COM
renew until 02/25/18 13:32:30

 

Here is how I am trying to connect with Beeline:

beeline> !connect jdbc:hive2://bdw1n10.bnet.luxds.net:10000/default;ssl=true;sslTrustStore=/opt/cloudera/security/jks/BDWCLUINT.truststore;trustStorePassword=mf2cy1fMiH6oRlcVBfWPsX5FyzzeDCdTynZQlOoxRrVcu4headReAAna1V2VxCMd;principal=hive/bdw1n10.bnet.luxds.net@INTBDA.BIL.COM oracle org.apache.hive.jdbc.HiveDriver
scan complete in 1ms
Connecting to jdbc:hive2://bdw1n10.bnet.luxds.net:10000/default;ssl=true;sslTrustStore=/opt/cloudera/security/jks/BDWCLUINT.truststore;trustStorePassword=mf2cy1fMiH6oRlcVBfWPsX5FyzzeDCdTynZQlOoxRrVcu4headReAAna1V2VxCMd;principal=hive/bdw1n10.bnet.luxds.net@INTBDA.BIL.COM
Unknown HS2 problem when communicating with Thrift server.
Error: Could not open client transport with JDBC Uri: jdbc:hive2://bdw1n10.bnet.luxds.net:10000/default;ssl=true;sslTrustStore=/opt/cloudera/security/jks/BDWCLUINT.truststore;trustStorePassword=mf2cy1fMiH6oRlcVBfWPsX5FyzzeDCdTynZQlOoxRrVcu4headReAAna1V2VxCMd;principal=hive/bdw1n10.bnet.luxds.net@INTBDA.BIL.COM: Could not connect to bdw1n10.bnet.luxds.net on port 10000 (state=08S01,code=0)
beeline> 

Here is the error (from the HiveServer2 log):

        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
        at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:199)
        at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
        at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
        ... 10 more
2018-02-20 12:11:16,000 ERROR org.apache.thrift.transport.TSaslTransport: [HiveServer2-Handler-Pool: Thread-141]: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: Failure unspecified at GSS-API level (Mechanism level: Checksum failed)]
        at com.sun.security.sasl.gsskerb.GssKrb5Server.evaluateResponse(GssKrb5Server.java:199)
        at org.apache.thrift.transport.TSaslTransport$SaslParticipant.evaluateChallengeOrResponse(TSaslTransport.java:539)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:283)
        at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:793)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:790)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:360)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1897)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:790)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: GSSException: Failure unspecified at GSS-API level (Mechanism level: Checksum failed)
        at sun.security.jgss.krb5.Krb5Context.acceptSecContext(Krb5Context.java:856)
        at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:342)
        at sun.security.jgss.GSSContextImpl.acceptSecContext(GSSContextImpl.java:285)
        at com.sun.security.sasl.gsskerb.GssKrb5Server.evaluateResponse(GssKrb5Server.java:167)
        ... 14 more
Caused by: KrbException: Checksum failed
        at sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType.decrypt(Aes256CtsHmacSha1EType.java:102)
        at sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType.decrypt(Aes256CtsHmacSha1EType.java:94)
        at sun.security.krb5.EncryptedData.decrypt(EncryptedData.java:175)
        at sun.security.krb5.KrbApReq.authenticate(KrbApReq.java:281)
        at sun.security.krb5.KrbApReq.<init>(KrbApReq.java:149)
        at sun.security.jgss.krb5.InitSecContextToken.<init>(InitSecContextToken.java:108)
        at sun.security.jgss.krb5.Krb5Context.acceptSecContext(Krb5Context.java:829)
        ... 17 more
Caused by: java.security.GeneralSecurityException: Checksum failed
        at sun.security.krb5.internal.crypto.dk.AesDkCrypto.decryptCTS(AesDkCrypto.java:451)
        at sun.security.krb5.internal.crypto.dk.AesDkCrypto.decrypt(AesDkCrypto.java:272)
        at sun.security.krb5.internal.crypto.Aes256.decrypt(Aes256.java:76)
        at sun.security.krb5.internal.crypto.Aes256CtsHmacSha1EType.decrypt(Aes256CtsHmacSha1EType.java:100)
        ... 23 more
2018-02-20 12:11:16,002 ERROR org.apache.thrift.server.TThreadPoolServer: [HiveServer2-Handler-Pool: Thread-141]: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: GSS initiate failed
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:793)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:790)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:360)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1897)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:790)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.thrift.transport.TTransportException: GSS initiate failed
        at org.apache.thrift.transport.TSaslTransport.sendAndThrowMessage(TSaslTransport.java:232)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:316)
        at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
        ... 10 more
2018-02-20 12:15:53,868 ERROR org.apache.thrift.server.TThreadPoolServer: [HiveServer2-Handler-Pool: Thread-146]: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:793)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:790)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:360)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1897)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:790)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
        at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:199)
        at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
        at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
        ... 10 more

Br,

Sercan

 

 

 

 

Explorer

We are facing the same issue. Were you able to find the root cause and the solution?

Thanks

New Contributor

Hi Elif,

 

In my case, the problem was not using the correct ticket. I was exporting the keytab every time, and after kinit I would get a ticket, but from time to time I was not using the ticket from the latest process's keytab.
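
(As a quick sanity check, illustrative only, the active ticket cache and principal can be confirmed before connecting:)

echo $KRB5CCNAME    # ticket cache in use, if it has been overridden
klist -e            # principal, expiry and encryption types of the active cache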

============================================================================

One example below:

The output of hive.keytab:

[root@bdw1n07 sbilgic]# klist -k -t -e hive.keytab
Keytab name: FILE:hive.keytab
KVNO Timestamp Principal
---- ----------------- ----------------------------------------------------------------------------------------------------------------------
13 02/27/18 08:58:51 hive/......................................@...................................... (aes256-cts-hmac-sha1-96)
13 02/27/18 08:58:51 hive/......................................@...................................... (aes128-cts-hmac-sha1-96)
13 02/27/18 08:58:51 hive/......................................@...................................... (des3-cbc-sha1)
13 02/27/18 08:58:51 hive/......................................@...................................... (arcfour-hmac)
13 02/27/18 08:58:51 hive/......................................@...................................... (des-hmac-sha1)
13 02/27/18 08:58:51 hive/......................................@I...................................... (des-cbc-md5)
============================================================================
Clearly, the hive.keytab above has not been generated by Cloudera Manager; instead, it has been created from kadmin or kadmin.local. Once that happens, the keytab generated by Cloudera Manager fails with the checksum error.
I used a copy of the hive.keytab generated by Cloudera Manager, copied from the process directory.

***Note that the command:
kinit -kt /var/run/cloudera-scm-agent/process/`ls -1 /var/run/cloudera-scm-agent/process | grep HIVESERVER2 | sort -n | tail -1`/hive.keytab hive/$(hostname -f)

does a kinit using the hive keytab from the latest process directory under /var/run/cloudera-scm-agent/process/.

***The latest process directory can be found with the command below:
ls -ltr /var/run/cloudera-scm-agent/process/ | grep HIVESERVER2

***Note that the hive.keytab under the process directory
/var/run/cloudera-scm-agent/process/NNN-hive-HIVESERVER2/hive.keytab

has principals for both hive and HTTP when the HiveServer2 WebUI has been configured. So do not export the keytab from kadmin or kadmin.local, unless you are willing to configure Hive to use that keytab. Instead, get a copy of the hive.keytab from the process directory: /var/run/cloudera-scm-agent/process/NNN-hive-HIVESERVER2/hive.keytab
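
A rough way to confirm this kind of mismatch (illustrative; assumes the MIT Kerberos client tools and a valid TGT) is to compare the key version number (KVNO) recorded in the keytab with the KVNO the KDC currently issues; if they differ, the keytab's key has been superseded, for example by a kadmin ktadd/xst.

klist -kte /var/run/cloudera-scm-agent/process/NNN-hive-HIVESERVER2/hive.keytab   # KVNO column stored in the keytab
kvno hive/$(hostname -f)                                                          # KVNO of the service ticket the KDC issues now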

Please let me know if you have further questions.