Support Questions

Find answers, ask questions, and share your expertise

Unable to connect to kerberos hbase

Contributor

Hi,

I am using the properties below for HBase Kerberos authentication:

hbase.zookeeper.quorum=localhost
hbase.zookeeper.property.clientPort=2181
hadoop.security.authentication=kerberos
hbase.security.authentication=kerberos
hbase.master.kerberos.principal=zookeeper/localhost@EXAMPLE.COM
hbase.regionserver.kerberos.principal=zookeeper/localhost@EXAMPLE.COM
hbase.kerberos.principal=zookeeper/localhost@EXAMPLE.COM
hbase.kerberos.keytab=zkpr.keytab
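For comparison, on a typical kerberized cluster the master and regionserver principals refer to HBase's own service principal (usually `hbase/_HOST@REALM`), not ZooKeeper's. A sketch of the usual values (realm and host are placeholders, adjust to your KDC):

```
hbase.master.kerberos.principal=hbase/_HOST@EXAMPLE.COM
hbase.regionserver.kerberos.principal=hbase/_HOST@EXAMPLE.COM
```

The `_HOST` token is expanded by Hadoop to the local FQDN at runtime.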

Now, when I run my Spark job locally, it fails to connect to HBase with this error message:

Unable to connect to zookeeper/localhost@EXAMPLE.COM to zookeeper/localhost@EXAMPLE.COM.

I have run `kinit zookeeper/localhost@EXAMPLE.COM -k -t zkpr.keytab` and it completes successfully.

Any help will be appreciated.

5 REPLIES

Super Guru

To debug this, please provide:

  • The full exception
  • Client and server logs captured with the JVM property "-Dsun.security.krb5.debug=true" enabled on both sides
  • RegionServer logs with the Log4j level raised to DEBUG (capture the relevant logging)
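If the job is submitted through Spark, one way to capture the Kerberos debug output on both driver and executors is through the JVM-option properties in `spark-defaults.conf` (or the equivalent `--conf` flags), sketched as:

```
spark.driver.extraJavaOptions=-Dsun.security.krb5.debug=true
spark.executor.extraJavaOptions=-Dsun.security.krb5.debug=true
```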

It is somewhat strange that you use "localhost" as the "instance" component of the Kerberos principal. Typically, this is the FQDN for the network interface that the service is listening on. If you are not running everything on 127.0.0.1, you may be running into DNS issues.
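A quick way to check for such DNS issues is to see how the host in the principal's instance component resolves from the client machine. A sketch ("localhost" here mirrors the principal in the question; on a real cluster you would use the service host's FQDN):

```shell
# How does the principal's host resolve, and what is this machine's FQDN?
getent hosts localhost
hostname -f 2>/dev/null || hostname
```

If the resolved name does not match the instance component of the service principal, the GSSAPI handshake will fail even with valid credentials.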

Contributor

Here are the logs:

17/05/30 19:25:06 ERROR RpcClientImpl: SASL authentication failed. The most likely cause is missing or invalid credentials. Consider 'kinit'.
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
    at org.apache.hadoop.hbase.security.HBaseSaslRpcClient.saslConnect(HBaseSaslRpcClient.java:179)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupSaslConnection(RpcClientImpl.java:617)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.access$700(RpcClientImpl.java:162)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:743)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection$2.run(RpcClientImpl.java:740)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.setupIOstreams(RpcClientImpl.java:740)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.writeRequest(RpcClientImpl.java:906)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl$Connection.tracedWriteRequest(RpcClientImpl.java:873)
    at org.apache.hadoop.hbase.ipc.RpcClientImpl.call(RpcClientImpl.java:1242)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient.callBlockingMethod(AbstractRpcClient.java:226)
    at org.apache.hadoop.hbase.ipc.AbstractRpcClient$BlockingRpcChannelImplementation.callBlockingMethod(AbstractRpcClient.java:331)
    at org.apache.hadoop.hbase.protobuf.generated.ClientProtos$ClientService$BlockingStub.scan(ClientProtos.java:34094)
    at org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:201)
    at org.apache.hadoop.hbase.client.ClientSmallScanner$SmallScannerCallable.call(ClientSmallScanner.java:180)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithoutRetries(RpcRetryingCaller.java:200)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:360)
    at org.apache.hadoop.hbase.client.ScannerCallableWithReplicas$RetryingRPC.call(ScannerCallableWithReplicas.java:334)
    at org.apache.hadoop.hbase.client.RpcRetryingCaller.callWithRetries(RpcRetryingCaller.java:126)
    at org.apache.hadoop.hbase.client.ResultBoundedCompletionService$QueueingFuture.run(ResultBoundedCompletionService.java:65)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
    at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:147)
    at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
    at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:187)
    at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
    at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
    at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
    at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
    ... 25 more

I ran it on yarn-cluster. It seems the key is invalid. I have tried changing the permissions of the keytab file, but no luck.

I had already done kinit with the same keytab, and that works fine.
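Note that in yarn-cluster mode a `kinit` on the gateway machine does not help the executors: "Failed to find any Kerberos tgt" on the cluster usually means the executor JVMs have no ticket cache and no keytab to log in from. One common remedy (a sketch; paths and the principal are placeholders taken from this thread, not verified values) is to let Spark handle the login by passing the keytab at submit time:

```sh
spark-submit \
  --master yarn --deploy-mode cluster \
  --principal zookeeper/localhost@EXAMPLE.COM \
  --keytab /path/to/zkpr.keytab \
  ...
```

With `--principal`/`--keytab`, Spark logs in from the keytab on the cluster side instead of relying on a local ticket cache.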

Master Mentor

@priyanshu hasija

Try the steps below:

$ klist -kt /etc/security/keytabs/hbase.service.keytab
Keytab name: FILE:/etc/security/keytabs/hbase.service.keytab
KVNO Timestamp         Principal
---- ----------------- --------------------------------------------------------
   1 02/02/17 23:00:12 hbase/FQDN@EXAMPLE.COM
   1 02/02/17 23:00:12 hbase/FQDN@EXAMPLE.COM
   1 02/02/17 23:00:12 hbase/FQDN@EXAMPLE.COM
   1 02/02/17 23:00:12 hbase/FQDN@EXAMPLE.COM
   1 02/02/17 23:00:12 hbase/FQDN@EXAMPLE.COM

Then, using one of the principals listed above, run:

$ kinit -kt /etc/security/keytabs/hbase.service.keytab  hbase/FQDN@EXAMPLE.COM

Now run your job and let me know

Contributor

@Geoferry

I have already done those steps and then ran the job, but it hasn't succeeded.

Contributor

I have made some changes in the code, and here is where I am getting stuck:

1) driver log:

17/06/02 16:25:33 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:37 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:38 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:43 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:46 WARN RpcClientImpl: Couldn't setup connection for zookeeper/localhost@EXAMPLE.COM to zookeeper/localhost@EXAMPLE.COM
17/06/02 16:25:49 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:52 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:56 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:25:57 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:01 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:04 WARN RpcClientImpl: Couldn't setup connection for zookeeper/localhost@EXAMPLE.COM to zookeeper/localhost@EXAMPLE.COM
17/06/02 16:26:08 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:10 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:14 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:16 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:20 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:25 WARN RpcClientImpl: Couldn't setup connection for zookeeper/localhost@EXAMPLE.COM to zookeeper/localhost@EXAMPLE.COM
17/06/02 16:26:27 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:28 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:32 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:35 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:39 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:42 WARN RpcClientImpl: Couldn't setup connection for zookeeper/localhost@EXAMPLE.COM to zookeeper/localhost@EXAMPLE.COM
17/06/02 16:26:44 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:49 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:52 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:57 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.
17/06/02 16:26:58 WARN UserGroupInformation: Not attempting to re-login since the last re-login was attempted less than 600 seconds before.

2) Master log:

2017-06-02 15:58:01,638 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/11 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,649 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/12 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,654 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/13 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,663 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/14 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,671 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/15 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,679 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/16 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,688 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys/10 , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,696 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth/keys , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,704 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/tokenauth , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,712 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/draining , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,721 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/namespace/my_ns , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,750 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/namespace/default , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,771 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/namespace/NS , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,779 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/namespace/hbase , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,787 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/namespace , acl:[31,s{'auth,'}
]
2017-06-02 15:58:01,796 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/hbaseid , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,804 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/table/my_ns:alarm_audit , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,812 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/table/hbase:meta , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,821 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/table/my_ns:unharnonized_alarm_audit , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,829 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/table/hbase:namespace , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,838 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/table/my_ns:admin_audit , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,846 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase/table , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]
2017-06-02 15:58:01,854 INFO  [infoobjects-Latitude-3550:16020.activeMasterManager] zookeeper.ZooKeeperWatcher: Setting ACLs for znode:/hbase , acl:[31,s{'auth,'}
, 1,s{'world,'anyone}
]

3) RegionServer log:

2017-06-02 15:58:01,069 INFO  [RS_OPEN_REGION-infoobjects-Latitude-3550:16201-1] regionserver.HRegion: Onlined 18164fdd3694952b1527fb91c4724198; next sequenceid=67
2017-06-02 15:58:01,070 INFO  [PostOpenDeployTasks:18164fdd3694952b1527fb91c4724198] regionserver.HRegionServer: Post open deploy tasks for hbase:namespace,,1490169022164.18164fdd3694952b1527fb91c4724198.
2017-06-02 15:58:01,167 INFO  [PostOpenDeployTasks:43f9faceac3db2010e6070bec70200c0] hbase.MetaTableAccessor: Updated row my_ns:unharnonized_alarm_audit,,1493121241792.43f9faceac3db2010e6070bec70200c0. with server=infoobjects-latitude-3550,16201,1496399273491
2017-06-02 15:58:01,167 INFO  [PostOpenDeployTasks:18164fdd3694952b1527fb91c4724198] hbase.MetaTableAccessor: Updated row hbase:namespace,,1490169022164.18164fdd3694952b1527fb91c4724198. with server=infoobjects-latitude-3550,16201,1496399273491
2017-06-02 15:58:01,167 INFO  [PostOpenDeployTasks:40c95c7accfb12179ed5b65207eee989] hbase.MetaTableAccessor: Updated row my_ns:admin_audit,,1494314936756.40c95c7accfb12179ed5b65207eee989. with server=infoobjects-latitude-3550,16201,1496399273491
2017-06-02 15:58:01,225 INFO  [StoreOpener-09be4fb5ee3fe8ac146a63600d5d5006-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=3, currentSize=1721776, freeSize=1657859536, maxSize=1659581312, heapSize=1721776, minSize=1576602240, minFactor=0.95, multiSize=788301120, multiFactor=0.5, singleSize=394150560, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-06-02 15:58:01,225 INFO  [StoreOpener-09be4fb5ee3fe8ac146a63600d5d5006-1] compactions.CompactionConfiguration: size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2017-06-02 15:58:01,283 INFO  [StoreOpener-09be4fb5ee3fe8ac146a63600d5d5006-1] hfile.CacheConfig: blockCache=LruBlockCache{blockCount=3, currentSize=1721776, freeSize=1657859536, maxSize=1659581312, heapSize=1721776, minSize=1576602240, minFactor=0.95, multiSize=788301120, multiFactor=0.5, singleSize=394150560, singleFactor=0.25}, cacheDataOnRead=true, cacheDataOnWrite=false, cacheIndexesOnWrite=false, cacheBloomsOnWrite=false, cacheEvictOnClose=false, cacheDataCompressed=false, prefetchOnOpen=false
2017-06-02 15:58:01,283 INFO  [StoreOpener-09be4fb5ee3fe8ac146a63600d5d5006-1] compactions.CompactionConfiguration: size [134217728, 9223372036854775807); files [3, 10); ratio 1.200000; off-peak ratio 5.000000; throttle point 2684354560; major period 604800000, major jitter 0.500000, min locality to compact 0.000000
2017-06-02 15:58:01,298 INFO  [RS_OPEN_REGION-infoobjects-Latitude-3550:16201-2] regionserver.HRegion: Onlined 09be4fb5ee3fe8ac146a63600d5d5006; next sequenceid=107
2017-06-02 15:58:01,299 INFO  [PostOpenDeployTasks:09be4fb5ee3fe8ac146a63600d5d5006] regionserver.HRegionServer: Post open deploy tasks for my_ns:alarm_audit,,1494845175221.09be4fb5ee3fe8ac146a63600d5d5006.
2017-06-02 15:58:01,305 INFO  [PostOpenDeployTasks:09be4fb5ee3fe8ac146a63600d5d5006] hbase.MetaTableAccessor: Updated row my_ns:alarm_audit,,1494845175221.09be4fb5ee3fe8ac146a63600d5d5006. with server=infoobjects-latitude-3550,16201,1496399273491
2017-06-02 16:02:56,017 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=50, hits=43, hitRatio=86.00%, , cachingAccesses=50, cachingHits=43, cachingHitsRatio=86.00%, evictions=29, evicted=0, evictedPerRun=0.0
2017-06-02 16:07:56,016 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=53, hits=46, hitRatio=86.79%, , cachingAccesses=53, cachingHits=46, cachingHitsRatio=86.79%, evictions=59, evicted=0, evictedPerRun=0.0
2017-06-02 16:12:56,016 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=56, hits=49, hitRatio=87.50%, , cachingAccesses=56, cachingHits=49, cachingHitsRatio=87.50%, evictions=89, evicted=0, evictedPerRun=0.0
2017-06-02 16:17:56,016 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=59, hits=52, hitRatio=88.14%, , cachingAccesses=59, cachingHits=52, cachingHitsRatio=88.14%, evictions=119, evicted=0, evictedPerRun=0.0
2017-06-02 16:22:56,016 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=62, hits=55, hitRatio=88.71%, , cachingAccesses=62, cachingHits=55, cachingHitsRatio=88.71%, evictions=149, evicted=0, evictedPerRun=0.0
2017-06-02 16:27:56,015 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=65, hits=58, hitRatio=89.23%, , cachingAccesses=65, cachingHits=58, cachingHitsRatio=89.23%, evictions=179, evicted=0, evictedPerRun=0.0
2017-06-02 16:32:56,015 INFO  [LruBlockCacheStatsExecutor] hfile.LruBlockCache: totalSize=1.67 MB, freeSize=1.54 GB, max=1.55 GB, blockCount=7, accesses=75, hits=68, hitRatio=90.67%, , cachingAccesses=75, cachingHits=68, cachingHitsRatio=90.67%, evictions=209, evicted=0, evictedPerRun=0.0

My Spark code:

def getHbaseConnection(properties: SerializedProperties): Connection = {
  var connection: Connection = null
  val config = HBaseConfiguration.create()
  config.set("hbase.zookeeper.quorum", properties.zkQuorum)
  config.set("hbase.zookeeper.property.clientPort", properties.zkPort)
  if (properties.hbaseAuth != null && properties.hbaseAuth.equalsIgnoreCase("kerberos")) {
    config.set("hadoop.security.authentication", "kerberos")
    config.set("hbase.security.authentication", "kerberos")
    config.set("hbase.cluster.distributed", "true")
    config.set("hbase.rpc.protection", "privacy")
    config.set("hbase.client.retries.number", "5")
    config.set("hbase.regionserver.kerberos.principal", properties.kerberosRegion)
    config.set("hbase.master.kerberos.principal", properties.kerberosMaster)
    UserGroupInformation.setConfiguration(config)

    // Check the TGT first; log in from the keytab only when there is
    // no login user yet.
    var loginUser = UserGroupInformation.getLoginUser
    if (loginUser != null) {
      loginUser.checkTGTAndReloginFromKeytab()
    } else {
      // Prefer the copy of the keytab shipped with the job (SparkFiles),
      // falling back to the path given in the properties.
      val keytabPath =
        if (SparkFiles.get(properties.keytab) != null &&
            new java.io.File(SparkFiles.get(properties.keytab)).exists) {
          SparkFiles.get(properties.keytab)
        } else {
          properties.keytab
        }
      loginUser = UserGroupInformation.loginUserFromKeytabAndReturnUGI(
        properties.kerberosPrincipal, keytabPath)
    }

    // Create the connection inside doAs so the RPC setup runs with the
    // Kerberos identity.
    loginUser.doAs(new PrivilegedExceptionAction[Void]() {
      override def run(): Void = {
        connection = ConnectionFactory.createConnection(config)
        null
      }
    })
    println("login user: " + loginUser.getUserName)
  }
  connection
}
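One thing to watch in the code above: only `ConnectionFactory.createConnection` runs inside `doAs`. If the UGI returned by `loginUserFromKeytabAndReturnUGI` is not the process login user, HBase RPCs issued later, outside that `doAs`, may not carry the Kerberos credentials. A sketch of running the work itself under the same UGI (`ugi` and the table name here stand in for the values from the code and thread above):

```scala
// Sketch: perform the HBase operations inside doAs, not just the
// connection setup, so every RPC sees the keytab-derived identity.
ugi.doAs(new PrivilegedExceptionAction[Unit]() {
  override def run(): Unit = {
    val connection = ConnectionFactory.createConnection(config)
    try {
      val table = connection.getTable(TableName.valueOf("my_ns:alarm_audit"))
      try {
        // ... scans/gets here execute with the Kerberos credentials ...
      } finally table.close()
    } finally connection.close()
  }
})
```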

Config:

hbase.zookeeper.quorum=localhost
hbase.zookeeper.property.clientPort=2181
hadoop.security.authentication=kerberos
hbase.security.authentication=kerberos
hbase.master.kerberos.principal=zookeeper/localhost@EXAMPLE.COM
hbase.regionserver.kerberos.principal=zookeeper/localhost@EXAMPLE.COM
hbase.kerberos.principal=admin@EXAMPLE.COM
hbase.kerberos.keytab=zkpr2.keytab
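As an aside, `hbase.kerberos.principal` and `hbase.kerberos.keytab` are application-defined names here, not properties the HBase client itself reads. The client-side properties HBase recognizes (used by its `AuthUtil` login helper) are, as a sketch with the values from this thread:

```
hbase.client.kerberos.principal=admin@EXAMPLE.COM
hbase.client.keytab.file=/path/to/zkpr2.keytab
```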

JAAS file:

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  doNotPrompt=true
  keyTab="/home/priyanshu/git/rwuptime/rockwell-services/audit-spark-services/zkpr2.keytab"
  useTicketCache=false
  renewTicket=true
  debug=true
  storeKey=true
  principal="admin@EXAMPLE.COM"
  client=true
  serviceName="zookeeper";
};
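For the ZooKeeper client to pick up this `Client` section, the JVM also has to be pointed at the file, typically via a system property (a sketch; the path is a placeholder mirroring the `keyTab` entry above):

```
-Djava.security.auth.login.config=/path/to/jaas.conf
```

In a Spark job this would go into `spark.driver.extraJavaOptions` and `spark.executor.extraJavaOptions`, with the file shipped to the executors.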