Impala Catalog Server not starting after enabling Kerberos

Explorer

Hello all,

 

I've recently decided to enable Kerberos on all the services used in my company. I've successfully enabled Kerberos on ZooKeeper, Kafka, Hadoop, and HBase.

When I try to enable Kerberos on the Hive Metastore and Impala, I get the errors shown below.

I followed these guides:

CDH Impala Kerberos 

CDH Hiveserver2 Security 

CDH Hive Metastore Security 

 

hive-metastore.log

 

ERROR [pool-4-thread-3]: server.TThreadPoolServer (TThreadPoolServer.java:run(297)) - Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:794)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory$1.run(HadoopThriftAuthBridge.java:791)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:360)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1900)
        at org.apache.hadoop.hive.thrift.HadoopThriftAuthBridge$Server$TUGIAssumingTransportFactory.getTransport(HadoopThriftAuthBridge.java:791)
        at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.thrift.transport.TTransportException: Peer indicated failure: GSS initiate failed
        at org.apache.thrift.transport.TSaslTransport.receiveSaslMessage(TSaslTransport.java:199)
        at org.apache.thrift.transport.TSaslServerTransport.handleSaslStartMessage(TSaslServerTransport.java:125)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
        at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
        at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
        ... 10 more

 

 

 catalogd.ERROR

 

E1029 17:31:06.843065 30770 TSaslTransport.java:296] SASL negotiation failure
Java exception follows:
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
        at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
        at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:253)
        at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:52)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport$1.run(TUGIAssumingTransport.java:49)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
        at org.apache.hadoop.hive.thrift.client.TUGIAssumingTransport.open(TUGIAssumingTransport.java:49)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:464)
        at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:244)
        at sun.reflect.GeneratedConstructorAccessor8.newInstance(Unknown Source)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at org.apache.hadoop.hive.metastore.MetaStoreUtils.newInstance(MetaStoreUtils.java:1560)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.<init>(RetryingMetaStoreClient.java:67)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:82)
        at org.apache.hadoop.hive.metastore.RetryingMetaStoreClient.getProxy(RetryingMetaStoreClient.java:73)
        at org.apache.impala.catalog.MetaStoreClientPool$MetaStoreClient.<init>(MetaStoreClientPool.java:93)
        at org.apache.impala.catalog.MetaStoreClientPool$MetaStoreClient.<init>(MetaStoreClientPool.java:72)
        at org.apache.impala.catalog.MetaStoreClientPool.initClients(MetaStoreClientPool.java:168)
        at org.apache.impala.catalog.Catalog.<init>(Catalog.java:103)
        at org.apache.impala.catalog.CatalogServiceCatalog.<init>(CatalogServiceCatalog.java:163)
        at org.apache.impala.service.JniCatalog.<init>(JniCatalog.java:106)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)
        at sun.security.jgss.krb5.Krb5InitCredential.getInstance(Krb5InitCredential.java:162)
        at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:122)
        at sun.security.jgss.krb5.Krb5MechFactory.getMechanismContext(Krb5MechFactory.java:189)
        at sun.security.jgss.GSSManagerImpl.getMechanismContext(GSSManagerImpl.java:224)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:212)
        at sun.security.jgss.GSSContextImpl.initSecContext(GSSContextImpl.java:179)
        at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:192)
        ... 24 more
W1029 17:31:06.843554 30770 HiveMetaStoreClient.java:474] Failed to connect to the MetaStore Server...
W1029 17:31:07.844949 30770 MetaStoreClientPool.java:101] Failed to connect to Hive MetaStore. Retrying.

 

 

hive-site.xml

 

<property>
  <name>hive.server2.authentication</name>
  <value>KERBEROS</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.principal</name>
  <value>hive/local9@FBSPL.COM</value>
</property>
<property>
  <name>hive.server2.authentication.kerberos.keytab</name>
  <value>/etc/hive/conf/hive.keytab</value>
</property>
<property>
  <name>hive.metastore.sasl.enabled</name>
  <value>true</value>
</property>
<property>
  <name>hive.metastore.kerberos.keytab.file</name>
  <value>/etc/hive/conf/hive.keytab</value>
</property>
<property>
  <name>hive.metastore.kerberos.principal</name>
  <value>hive/local9@FBSPL.COM</value>
</property>
<property>
  <name>hive.server2.enable.impersonation</name>
  <description>Enable user impersonation for HiveServer2</description>
  <value>true</value>
</property>

 

 

/etc/defaults/impala

 

IMPALA_STATE_STORE_ARGS=" -log_dir=${IMPALA_LOG_DIR} \
    -kerberos_reinit_interval=60 \
    -principal=impala/local9@FBSPL.COM \
    -keytab_file=/etc/impala/conf/impala-http.keytab \
    -state_store_port=${IMPALA_STATE_STORE_PORT}"
IMPALA_SERVER_ARGS=" \
    -log_dir=${IMPALA_LOG_DIR} \
    -catalog_service_host=${IMPALA_CATALOG_SERVICE_HOST} \
    -state_store_port=${IMPALA_STATE_STORE_PORT} \
    -use_statestore \
    -state_store_host=${IMPALA_STATE_STORE_HOST} \
    -kerberos_reinit_interval=60 \
    -principal=impala/local9@FBSPL.COM \
    -keytab_file=/etc/impala/conf/impala-http.keytab \
    -be_port=${IMPALA_BACKEND_PORT}"

 

 

Keytab File permission and ownership

 

-r--r-----. 1 hive hadoop 146 Oct 29 12:36 /etc/hive/conf/hive.keytab
-r--------. 1 impala impala 294 Oct 28 19:22 /etc/impala/conf/impala-http.keytab

 

 

Keytab Principals:

 

klist -e -t -k  /etc/hive/conf/hive.keytab
Keytab name: FILE:/etc/hive/conf/hive.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   1 10/29/2020 12:36:48 hive/local9@FBSPL.COM (aes256-cts-hmac-sha1-96)
   1 10/29/2020 12:36:48 hive/local9@FBSPL.COM (aes128-cts-hmac-sha1-96)

 

 

 

klist -e -t -k  /etc/impala/conf/impala-http.keytab
Keytab name: FILE:/etc/impala/conf/impala-http.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   2 10/28/2020 19:21:47 impala/local9@FBSPL.COM (aes256-cts-hmac-sha1-96)
   2 10/28/2020 19:21:47 impala/local9@FBSPL.COM (aes128-cts-hmac-sha1-96)
   2 10/28/2020 19:21:47 HTTP/local9@FBSPL.COM (aes256-cts-hmac-sha1-96)
   2 10/28/2020 19:21:47 HTTP/local9@FBSPL.COM (aes128-cts-hmac-sha1-96)

 

 

Any help would be greatly appreciated.


4 REPLIES

Expert Contributor

@sace17 

 

Your catalog server is unable to authenticate or obtain a Kerberos ticket. Generally the ticket will be read from the impala.keytab file present under the catalog server's process directory.

 

/var/run/cloudera-scm-agent/process/<latest-process-number>-impala-CATALOGSERVER/impala.keytab

 

For example, below is the keytab output from my catalog server. It contains the principal of my catalog server.

 

[root@host-10-17-102-166 6772-impala-CATALOGSERVER]# klist -ket /var/run/cloudera-scm-agent/process/6772-impala-CATALOGSERVER/impala.keytab

Keytab name: FILE:/var/run/cloudera-scm-agent/process/6772-impala-CATALOGSERVER/impala.keytab

KVNO Timestamp         Principal

---- ----------------- --------------------------------------------------------

   5 08/11/20 22:46:07 impala/host-10-17-102-166.coe.cloudera.com@COE.CLOUDERA.COM (des3-cbc-sha1)

 

Can you check whether you have the same on your end?

 

You can also verify this by looking at the catalog server logs when it boots.

 

I1029 07:09:27.356434 18103 authentication.cc:730] Using internal kerberos principal "impala/email@redacted.host"

I1029 07:09:27.356523 18103 authentication.cc:1083] Internal communication is authenticated with Kerberos

I1029 07:09:27.360359 18103 init.cc:362] Logged in from keytab as impala/email@redacted.host (short username impala)

I1029 07:09:27.360714 18103 authentication.cc:866] Kerberos ticket granted to impala/email@redacted.host

I1029 07:09:27.360739 18103 authentication.cc:730] Using external kerberos principal "impala/email@redacted.host"

I1029 07:09:27.360744 18103 authentication.cc:1099] External communication is authenticated with Kerberos

 

Use the link below to set up Kerberos manually and confirm that the whole process was followed correctly:

 

https://plenium.wordpress.com/2018/07/17/kerberos-setup-in-cloudera-hadoop/
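
As an additional quick check (this is not from the guide above, just a generic verification), you can confirm that the keytab itself can obtain a ticket for the principal. The keytab path and principal below are taken from this thread; adjust them for your environment.

# Request a TGT using the keytab and principal listed earlier in this thread.
kinit -kt /etc/impala/conf/impala-http.keytab impala/local9@FBSPL.COM
# If kinit succeeds, list the cached credentials to confirm a TGT was issued.
klist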

Explorer

Hi,

 

Above, I've already listed all the principals present in the Impala keytab:

 

klist -e -t -k  /etc/impala/conf/impala-http.keytab
Keytab name: FILE:/etc/impala/conf/impala-http.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   2 10/28/2020 19:21:47 impala/local9@FBSPL.COM (aes256-cts-hmac-sha1-96)
   2 10/28/2020 19:21:47 impala/local9@FBSPL.COM (aes128-cts-hmac-sha1-96)
   2 10/28/2020 19:21:47 HTTP/local9@FBSPL.COM (aes256-cts-hmac-sha1-96)
   2 10/28/2020 19:21:47 HTTP/local9@FBSPL.COM (aes128-cts-hmac-sha1-96)

 

 

Output from catalog.INFO

Log file created at: 2020/10/29 17:29:11
Running on machine: local9
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I1029 17:29:11.247231 30770 logging.cc:120] stdout will be logged to this file.
E1029 17:29:11.247392 30770 logging.cc:121] stderr will be logged to this file.
I1029 17:29:11.247609 30770 minidump.cc:231] Setting minidump size limit to 20971520.
I1029 17:29:11.249019 30770 authentication.cc:1093] Internal communication is not authenticated
I1029 17:29:11.249027 30770 authentication.cc:1114] External communication is not authenticated
I1029 17:29:11.249331 30770 init.cc:224] catalogd version 2.11.0-cdh5.14.2 RELEASE (build ed85dce709da9557aeb28be89e8044947708876c)
Built on Tue Mar 27 13:39:48 PDT 2018
I1029 17:29:11.249336 30770 init.cc:225] Using hostname: local9
I1029 17:29:11.249737 30770 logging.cc:156] Flags (see also /varz are on debug webserver):

 

Is this okay?

Expert Contributor

@sace17 

 

No, this is not right.

 

As I mentioned, the catalog server will be using the keytab file (impala.keytab) present inside its process directory.

 

SSH to the catalog server and run the command below to list the principals from the keytab:

 

klist -ket $(ls -td /var/run/cloudera-scm-agent/process/*CATALOGSERVER* | head -1)/impala.keytab

 

Also, from the logs I can see that the catalog server is not using authentication, so I would ask you to focus on setting up Kerberos properly.

 

I1029 17:29:11.249019 30770 authentication.cc:1093] Internal communication is not authenticated
I1029 17:29:11.249027 30770 authentication.cc:1114] External communication is not authenticated

 

Explorer

Thanks for the reply.

 

I found the issue. The Kerberos setup was fine; the only thing missing was providing the Kerberos principal and keytab path to IMPALA_CATALOG_ARGS.

In the CDH documentation that I followed (CDH Impala Kerberos, point 7), it says:

7. Add Kerberos options to the Impala defaults file, /etc/default/impala. Add the options for both the impalad and statestored daemons, using the IMPALA_SERVER_ARGS and IMPALA_STATE_STORE_ARGS variables

The guide only mentions updating IMPALA_STATE_STORE_ARGS and IMPALA_SERVER_ARGS, which is why the catalog server was not authenticating with Kerberos. After adding the Kerberos principal and keytab path to IMPALA_CATALOG_ARGS, I was able to start the catalog server without any issues.
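
For anyone hitting the same issue, here is a minimal sketch of the kind of entry that needs to be added to /etc/default/impala. The flag values simply mirror the IMPALA_SERVER_ARGS from my original post, so treat the principal and keytab path as placeholders and adjust them to your environment.

# Sketch only: Kerberos flags for the catalog server, mirroring the
# IMPALA_SERVER_ARGS shown earlier in this thread.
IMPALA_CATALOG_ARGS=" -log_dir=${IMPALA_LOG_DIR} \
    -kerberos_reinit_interval=60 \
    -principal=impala/local9@FBSPL.COM \
    -keytab_file=/etc/impala/conf/impala-http.keytab"

After saving the file, restart the Impala catalog service so the new flags take effect.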