Member since
07-24-2017
42
Posts
0
Kudos Received
2
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 8156 | 12-21-2018 02:30 PM
 | 2642 | 11-23-2018 09:06 AM
12-21-2018
02:30 PM
All, thanks for your responses. I found the root cause of the issue: Ambari was using its master key as the KDC admin credential, which is why it kept reporting "Missing KDC administrator credentials. Please enter admin principal and password". I removed that credential file (PFA) and the issue was solved. For others: you may need to keep the Ambari master key and the KDC admin credential the same, because that file is required at ambari-server restart (if you have configured jceks). PFA: kerberos-admin-creds-issue-solved.png
12-21-2018
07:09 AM
All, thanks for your responses. I found the root cause of this issue in my case: Ambari was using the Ambari master key for the KDC admin credentials, stored at /var/lib/ambari-server/keys/credentials.jceks. After taking a backup of that file, I was able to run 'Enable Kerberos' through the Ambari UI. However, the previous file is required at ambari-server restart, so the Ambari master key needs to be kept the same as the KDC admin key (password). PFA: kerberos-admin-creds-issue-solved.png
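For anyone hitting the same symptom, the credential store can be inspected before anything is deleted. A minimal sketch, assuming a default Ambari install path (from the post above) and that the JCEKS store is protected by the Ambari master key:

```shell
# Back up the Ambari credential store before touching it
# (path taken from the post above; adjust if your install differs).
cp /var/lib/ambari-server/keys/credentials.jceks \
   /var/lib/ambari-server/keys/credentials.jceks.bak

# List the aliases stored inside the JCEKS file to see which
# credential Ambari is actually picking up. keytool prompts for
# the store password (the Ambari master key in this scenario).
keytool -list -storetype JCEKS \
        -keystore /var/lib/ambari-server/keys/credentials.jceks
```

This only reads the store; it does not change anything, so it is safe to run before deciding whether to remove the file.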
12-20-2018
04:39 AM
@Geoffrey Shelton Okot PFA for services restart, services-restart.png
12-18-2018
04:10 AM
OK. You can check the files I have already attached in the comments above. The absolute paths of the files are /etc/hosts, /etc/krb5.conf, /etc/krb5kdc/kadm5.acl, and /etc/krb5kdc/kdc.conf. PFA: krb5conf.png, kdcconf.png, kadm5conf.png, hosts.png
12-13-2018
11:13 AM
@Geoffrey Shelton Okot Regarding "Apart from the above can you share a tokenized version of the below files" - sorry, I did not understand what you asked for. klist -V gives: Kerberos 5 version 1.13.2. The KDC server's hostname -f output is ubuntu19.example.com. Attached are the KDC server files: krb5conf.png, kdcconf.png, kadm5conf.png, hosts.png
12-13-2018
04:09 AM
@Robert Levas It is showing the expected output. PFA: keytool-output.png
12-12-2018
01:14 PM
@Robert Levas I checked the ambari-server.log file at that time, made the password store persistent by executing the command below, and was then able to tick the "save password" box in the Ambari UI:
curl -H "X-Requested-By:ambari" -u admin:admin -X PUT -d '{ "Credential" : { "principal" : "kadmin/admin@EXAMPLE.COM", "key" : "123456", "type" : "persisted" } }' http://ambari-server-host-ip:8080/api/v1/clusters/Ambari_PreDev/credentials/kdc.admin.credential
But the Ambari UI still throws the "missing credentials" exception and cannot kerberize the cluster. Also, the last admin principal I created is admin/admin@EXAMPLE.COM and its changed password is "password". PFA: ambari-server-logs.png
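After a PUT like the one above, the stored credential reference can be read back to confirm it was actually persisted; the secret itself is never returned. A sketch assuming the same cluster name and endpoint as the PUT:

```shell
# Read back the credential reference; the response should report
# type "persisted" if the PUT above took effect.
curl -H "X-Requested-By:ambari" -u admin:admin \
  http://ambari-server-host-ip:8080/api/v1/clusters/Ambari_PreDev/credentials/kdc.admin.credential
```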
12-11-2018
01:00 PM
@Sampath Kumar I have already tried the above steps, and tried them again as you suggested starting from a reinstallation of Kerberos, but I am facing the same issue.
12-11-2018
12:14 PM
I have set up Kerberos and enabled it in Ambari successfully on one environment, but while trying the same on another environment I am facing an issue while enabling Kerberos. I have tried storing the credentials using keytool and the REST API, and checked the Kerberos descriptors, but no luck. What else is left to check? PFA: missing-kdc-credentials.png
Labels:
- Apache Ambari
12-11-2018
10:30 AM
Did you mean that the credentials are stored in the Ambari server's memory, and that since I got this exception I should wait 90 minutes before trying again?
12-10-2018
12:03 PM
@Jay Kumar SenSharma I have tried clearing the browser cookies (Chrome/Firefox) and storing the KDC credentials through keytool and the REST API as well, but no luck. I completed the setup successfully on one environment, but am now facing this issue on another. PFA: missing-kdc-credentials.png
11-23-2018
09:06 AM
While regenerating principals it gave the above error; it might be taking those principal names from the Ambari database (Postgres).
11-23-2018
06:10 AM
Do we have to manually substitute the actual hostname for _HOST here? configuration.set("hbase.regionserver.kerberos.principal", "hbase/_HOST@FIELD.HORTONWORKS.COM");
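Hadoop normally expands _HOST itself (SecurityUtil.getServerPrincipal resolves it to the local canonical hostname), so hard-coding is usually unnecessary on the server side. As a minimal sketch of what that substitution amounts to, using a hypothetical hostname:

```shell
# Expand the _HOST placeholder the way Hadoop's
# SecurityUtil.getServerPrincipal does, here with a fixed
# hostname purely for illustration.
principal="hbase/_HOST@FIELD.HORTONWORKS.COM"
host="node1.example.com"   # in practice: $(hostname -f)
echo "${principal/_HOST/$host}"
```

On the client side, whether you must substitute manually depends on the API you call; the pattern above shows the transformation either way.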
11-23-2018
05:07 AM
While regenerating principals it gave the above error; it might be taking those principal names from the Ambari database (Postgres).
11-22-2018
02:50 AM
Yes, I had customized the ZooKeeper and HBase principals in the Kerberos configuration through Ambari, but I later changed them back to the defaults. Now, trying to regenerate principals, it gives the above error. Where is it taking these principals from, given that I have destroyed the Kerberos database? Any solution?
11-21-2018
05:41 PM
I have destroyed the Kerberos database and created a new one, but I am still getting the above error.
11-21-2018
05:41 PM
I have Kerberos and Ambari set up. I was able to enable/disable Kerberos through Ambari and regenerate principals, but now I am getting the error below in the Ambari UI:
2018-11-21 04:01:14,662 - Failed to create principal, zookeeper/local4.domain.coma@DOMAIN.COM,zookeeper/ubuntu25.domain.com@DOMAIN.COM,zookeeper/ubuntu26.domain.com@DOMAIN.COM - Failed to create service principal for zookeeper/local4.domain.com@DOMAIN.COM,zookeeper/ubuntu25.domain.com@DOMAIN.COM,zookeeper/ubuntu26.domain.com@DOMAIN.COM
STDOUT: Authenticating as principal kadmin/admin@DOMAIN.COM with existing credentials.
STDERR: add_principal: Malformed representation of principal while parsing principal
usage: add_principal [options] principal
options are:
[-randkey|-nokey] [-x db_princ_args]* [-expire expdate] [-pwexpire pwexpdate] [-maxlife maxtixlife]
[-kvno kvno] [-policy policy] [-clearpolicy]
[-pw password] [-maxrenewlife maxrenewlife]
[-e keysaltlist]
[{+|-}attribute]
attributes are:
allow_postdated allow_forwardable allow_tgs_req allow_renewable
allow_proxiable allow_dup_skey allow_tix requires_preauth
requires_hwauth needchange allow_svr password_changing_service
ok_as_delegate ok_to_auth_as_delegate no_auth_data_required
where,
[-x db_princ_args]* - any number of database specific arguments.
Look at each database documentation for supported arguments
Administration credentials NOT DESTROYED.
2018-11-21 04:01:16,073 - Failed to create principal, hbase/local4.domain.com@DOMAIN.COM,hbase/ubuntu25.domain.com@DOMAIN.COM,hbase/ubuntu26.domain.com@DOMAIN.COM - Failed to create service principal for hbase/local4.domain.com@DOMAIN.COM,hbase/ubuntu25.domain.com@DOMAIN.COM,hbase/ubuntu26.domain.com@DOMAIN.COM
STDOUT: Authenticating as principal kadmin/admin@DOMAIN.COM with existing credentials.
STDERR: add_principal: Malformed representation of principal while parsing principal
usage: add_principal [options] principal
options are:
[-randkey|-nokey] [-x db_princ_args]* [-expire expdate] [-pwexpire pwexpdate] [-maxlife maxtixlife]
[-kvno kvno] [-policy policy] [-clearpolicy]
[-pw password] [-maxrenewlife maxrenewlife]
[-e keysaltlist]
[{+|-}attribute]
attributes are:
allow_postdated allow_forwardable allow_tgs_req allow_renewable
allow_proxiable allow_dup_skey allow_tix requires_preauth
requires_hwauth needchange allow_svr password_changing_service
ok_as_delegate ok_to_auth_as_delegate no_auth_data_required
where,
[-x db_princ_args]* - any number of database specific arguments.
Look at each database documentation for supported arguments
Administration credentials NOT DESTROYED.
Can anyone help?
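The "Malformed representation of principal" usage error above suggests that add_principal received the whole comma-separated list as a single principal name; kadmin accepts exactly one principal per invocation. As a cross-check outside Ambari, the principals can be created one at a time. A sketch assuming you can run kadmin.local on the KDC host:

```shell
# Create each ZooKeeper service principal individually;
# add_principal takes a single principal per call.
for host in local4.domain.com ubuntu25.domain.com ubuntu26.domain.com; do
  kadmin.local -q "add_principal -randkey zookeeper/${host}@DOMAIN.COM"
done
```

If these succeed by hand, the malformed-principal failure points at how the list is being passed to kadmin rather than at the KDC itself.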
Labels:
- Apache Ambari
- Apache HBase
09-26-2018
09:26 AM
@Jay Kumar SenSharma any solution to this?
09-17-2018
10:41 AM
Below is the list of Kafka ACLs on the test topic:
Krb5Context.wrap: token=[05 04 00 ff 00 0c 00 00 00 00 00 00 27 d9 c3 5b 01 01 00 00 6b 61 66 6b 61 2f 75 62 75 6e 74 75 32 36 2e 6d 73 74 6f 72 6d 2e 63 6f 6d 40 4d 53 54 4f 52 4d 2e 43 4f 4d 9c d0 cc bf 71 74 35 93 38 71 59 a0 ]
Current ACLs for resource `Topic:test`:
user:root has Allow permission for operations: Read from hosts: kafka1.example.com
user:deepak has Allow permission for operations: Read from hosts: kafka1.example.com
user:root has Allow permission for operations: Write from hosts: kafka1.example.com
user:deepak has Allow permission for operations: Write from hosts: kafka1.example.com
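For reference, ACLs like the ones listed above can be granted with the stock tool. A sketch assuming a ZooKeeper-backed authorizer; the hostname and port here are illustrative:

```shell
# Grant read and write on the test topic to user deepak,
# restricted to the broker host (matching the listing above).
bin/kafka-acls.sh \
  --authorizer-properties zookeeper.connect=kafka1.example.com:2181 \
  --add --allow-principal User:deepak \
  --allow-host kafka1.example.com \
  --operation Read --operation Write \
  --topic test
```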
09-16-2018
05:50 PM
@Jay Kumar SenSharma thanks for the response. The Kafka ACLs are set properly for the producer and consumer on the test topic, but it still gives "UNKNOWN_TOPIC_OR_PARTITION".
09-15-2018
08:53 PM
I am getting the error below on the producer while producing messages:
Krb5Context.wrap: token=[05 04 00 ff 00 0c 00 00 00 00 00 00 18 86 c2 46 01 01 00 00 6b 61 66 6b 61 2d 63 6c 69 65 6e 74 2f 75 62 75 6e 74 75 32 36 2e 6d 73 74 6f 72 6d 2e 63 6f 6d 40 4d 53 54 4f 52 4d 2e 43 4f 4d 46 80 3d 15 92 45 c2 58 cd 12 11 76 ]
[2018-09-14 11:12:39,775] WARN Error while fetching metadata with correlation id 1 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:39,874] WARN Error while fetching metadata with correlation id 2 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:39,978] WARN Error while fetching metadata with correlation id 3 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:40,082] WARN Error while fetching metadata with correlation id 4 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:40,186] WARN Error while fetching metadata with correlation id 5 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:40,290] WARN Error while fetching metadata with correlation id 6 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:40,394] WARN Error while fetching metadata with correlation id 7 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
^C[2018-09-14 11:12:40,409] WARN [Principal=kafka-client/kafka1.example.com@EXAMPLE.COMa]: TGT renewal thread has been interrupted and will exit. (org.apache.kafka.common.security.kerberos.KerberosLogin)
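UNKNOWN_TOPIC_OR_PARTITION usually means the broker cannot find (or will not auto-create) the topic for this client. Before digging into ACLs, it is worth confirming the topic actually exists. A sketch, assuming the ZooKeeper endpoint the broker uses (hostname here is illustrative):

```shell
# Verify the topic is registered before suspecting ACLs or Kerberos.
bin/kafka-topics.sh --zookeeper kafka1.example.com:2181 --list
bin/kafka-topics.sh --zookeeper kafka1.example.com:2181 --describe --topic test
```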
09-14-2018
07:31 PM
@Jay Kumar SenSharma thanks for the response; that issue was solved. Now I am getting the error below on the producer while producing messages:
Krb5Context.wrap: token=[05 04 00 ff 00 0c 00 00 00 00 00 00 18 86 c2 46 01 01 00 00 6b 61 66 6b 61 2d 63 6c 69 65 6e 74 2f 75 62 75 6e 74 75 32 36 2e 6d 73 74 6f 72 6d 2e 63 6f 6d 40 4d 53 54 4f 52 4d 2e 43 4f 4d 46 80 3d 15 92 45 c2 58 cd 12 11 76 ]
[2018-09-14 11:12:39,775] WARN Error while fetching metadata with correlation id 1 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:39,874] WARN Error while fetching metadata with correlation id 2 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:39,978] WARN Error while fetching metadata with correlation id 3 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:40,082] WARN Error while fetching metadata with correlation id 4 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:40,186] WARN Error while fetching metadata with correlation id 5 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:40,290] WARN Error while fetching metadata with correlation id 6 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:40,394] WARN Error while fetching metadata with correlation id 7 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
^C[2018-09-14 11:12:40,409] WARN [Principal=kafka-client/kafka1.example.com@EXAMPLE.COMa]: TGT renewal thread has been interrupted and will exit. (org.apache.kafka.common.security.kerberos.KerberosLogin)
09-11-2018
12:06 PM
@Jay Kumar SenSharma thanks for the response. I had provided ssl.truststore.password.generator but not ssl.truststore.password. After adding ssl.truststore.password, Kafka started, but I am not able to produce messages; it gives the error below:
[2018-09-11 01:21:52,015] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2018-09-11 01:22:52,020] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
09-11-2018
12:06 PM
Will use keytab
Commit Succeeded
a s d f
[2018-09-11 03:08:16,347] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2018-09-11 03:09:16,349] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2018-09-11 03:10:16,349] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
a
s
ddd
ffffffffffffffffffff
gggggggggggggggggggggggg
[2018-09-11 03:11:16,350] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2018-09-11 03:12:16,351] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2018-09-11 03:13:16,352] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2018-09-11 03:14:16,352] ERROR Error when sending message to topic test with key: null, value: 3 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2018-09-11 03:15:16,353] ERROR Error when sending message to topic test with key: null, value: 20 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2018-09-11 03:16:16,353] ERROR Error when sending message to topic test with key: null, value: 24 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
09-11-2018
05:02 AM
[2018-09-11 00:06:58,404] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)
[2018-09-11 00:06:58,409] INFO [Principal=kafka/kafka1.example.com@EXAMPLE.COM]: TGT refresh thread started. (org.apache.kafka.common.security.kerberos.KerberosLogin)
[2018-09-11 00:06:58,409] INFO [Principal=kafka/kafka1.example.com@EXAMPLE.COM]: TGT valid starting at: Tue Sep 11 00:06:58 EDT 2018 (org.apache.kafka.common.security.kerberos.KerberosLogin)
[2018-09-11 00:06:58,409] INFO [Principal=kafka/kafka1.example.com@EXAMPLE.COM]: TGT expires: Tue Sep 11 10:06:58 EDT 2018 (org.apache.kafka.common.security.kerberos.KerberosLogin)
[2018-09-11 00:06:58,410] INFO [Principal=kafka/kafka1.example.com@EXAMPLE.COM]: TGT refresh sleeping until: Tue Sep 11 08:23:38 EDT 2018 (org.apache.kafka.common.security.kerberos.KerberosLogin)
[2018-09-11 00:06:58,411] FATAL [Kafka Server 1], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: SSL trust store is specified, but trust store password is not specified.
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:94)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:93)
at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:63)
at kafka.network.Processor.<init>(SocketServer.scala:422)
at kafka.network.SocketServer.newProcessor(SocketServer.scala:155)
at kafka.network.SocketServer.$anonfun$startup$2(SocketServer.scala:96)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:156)
at kafka.network.SocketServer.$anonfun$startup$1(SocketServer.scala:95)
at kafka.network.SocketServer.$anonfun$startup$1$adapted(SocketServer.scala:90)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.network.SocketServer.startup(SocketServer.scala:90)
at kafka.server.KafkaServer.startup(KafkaServer.scala:215)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
Caused by: org.apache.kafka.common.KafkaException: SSL trust store is specified, but trust store password is not specified.
at org.apache.kafka.common.security.ssl.SslFactory.createTruststore(SslFactory.java:195)
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:115)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:91)
... 16 more
[2018-09-11 00:06:58,416] INFO [Kafka Server 1], shutting down (kafka.server.KafkaServer)
[2018-09-11 00:06:58,421] INFO [Socket Server on Broker 1], Shutting down (kafka.network.SocketServer)
[2018-09-11 00:06:58,426] WARN (kafka.utils.CoreUtils$)
java.lang.NullPointerException
at kafka.network.SocketServer.$anonfun$shutdown$3(SocketServer.scala:129)
at kafka.network.SocketServer.$anonfun$shutdown$3$adapted(SocketServer.scala:129)
at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:32)
at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:29)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:193)
at kafka.network.SocketServer.shutdown(SocketServer.scala:129)
at kafka.server.KafkaServer.$anonfun$shutdown$3(KafkaServer.scala:582)
at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:78)
at kafka.utils.Logging.swallowWarn(Logging.scala:94)
at kafka.utils.Logging.swallowWarn$(Logging.scala:93)
at kafka.utils.CoreUtils$.swallowWarn(CoreUtils.scala:48)
at kafka.utils.Logging.swallow(Logging.scala:96)
at kafka.utils.Logging.swallow$(Logging.scala:96)
at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:48)
at kafka.server.KafkaServer.shutdown(KafkaServer.scala:582)
at kafka.server.KafkaServer.startup(KafkaServer.scala:289)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
[2018-09-11 00:06:58,433] INFO Shutting down. (kafka.log.LogManager)
[2018-09-11 00:06:58,448] INFO Shutdown complete. (kafka.log.LogManager)
[2018-09-11 00:06:58,448] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2018-09-11 00:06:58,555] INFO Session: 0x165b374ac140029 closed (org.apache.zookeeper.ZooKeeper)
[2018-09-11 00:06:58,555] INFO EventThread shut down for session: 0x165b374ac140029 (org.apache.zookeeper.ClientCnxn)
[2018-09-11 00:06:58,562] INFO [Kafka Server 1], shut down completed (kafka.server.KafkaServer)
[2018-09-11 00:06:58,564] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: SSL trust store is specified, but trust store password is not specified.
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:94)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:93)
at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:63)
at kafka.network.Processor.<init>(SocketServer.scala:422)
at kafka.network.SocketServer.newProcessor(SocketServer.scala:155)
at kafka.network.SocketServer.$anonfun$startup$2(SocketServer.scala:96)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:156)
at kafka.network.SocketServer.$anonfun$startup$1(SocketServer.scala:95)
at kafka.network.SocketServer.$anonfun$startup$1$adapted(SocketServer.scala:90)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.network.SocketServer.startup(SocketServer.scala:90)
at kafka.server.KafkaServer.startup(KafkaServer.scala:215)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
Caused by: org.apache.kafka.common.KafkaException: SSL trust store is specified, but trust store password is not specified.
at org.apache.kafka.common.security.ssl.SslFactory.createTruststore(SslFactory.java:195)
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:115)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:91)
... 16 more
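For completeness, the broker-side settings involved in the "SSL trust store is specified, but trust store password is not specified" startup failure look roughly like the fragment below. Paths and the password are placeholders, not values from this cluster:

```properties
# server.properties - SSL trust store settings (placeholder values)
ssl.truststore.location=/etc/kafka/ssl/kafka.server.truststore.jks
ssl.truststore.password=changeit
ssl.truststore.type=JKS
```

The exception is thrown whenever ssl.truststore.location is set while ssl.truststore.password is left unset, which matches the fix described in the later reply.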
Labels:
- Apache Kafka
09-07-2018
01:13 PM
This issue has been solved.
09-07-2018
01:13 PM
I have configured ZooKeeper and the Kafka broker for Kerberos authentication. Both started, but the Kafka broker logs the error below:
[2018-09-07 04:40:33,105] ERROR Invalid ACL (kafka.utils.ZKCheckedEphemeral)
[2018-09-07 04:40:33,106] INFO Result of znode creation is: INVALIDACL (kafka.utils.ZKCheckedEphemeral)
[2018-09-07 04:40:33,107] ERROR Error while electing or becoming leader on broker 1 (kafka.server.ZookeeperLeaderElector)
org.I0Itec.zkclient.exception.ZkException: org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL
at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:68)
at kafka.utils.ZKCheckedEphemeral.create(ZkUtils.scala:1124)
at kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:82)
at kafka.server.ZookeeperLeaderElector.$anonfun$startup$1(ZookeeperLeaderElector.scala:51)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:12)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
at kafka.server.ZookeeperLeaderElector.startup(ZookeeperLeaderElector.scala:49)
at kafka.controller.KafkaController.$anonfun$startup$1(KafkaController.scala:681)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
at kafka.controller.KafkaController.startup(KafkaController.scala:677)
at kafka.server.KafkaServer.startup(KafkaServer.scala:224)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
Caused by: org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL
at org.apache.zookeeper.KeeperException.create(KeeperException.java:121)
... 14 more
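InvalidACL on znode creation under a SASL-enabled ZooKeeper is often a mismatch between the broker's JAAS Client section (used for the ZooKeeper connection) and what ZooKeeper expects, or a zookeeper.set.acl setting that does not match the authentication actually in place. A hedged sketch of the broker-side JAAS fragment, with placeholder paths and the realm taken from the keytabs mentioned elsewhere in this thread:

```conf
// kafka_jaas.conf - Client section used for the ZooKeeper connection
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/kafka/kafka.keytab"
    principal="kafka@MSTORM";
};
```

If this section is missing or uses a principal ZooKeeper does not recognize, the broker falls back to an identity that cannot satisfy the SASL ACL, producing exactly the InvalidACL failure above.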
Labels:
- Apache Kafka
09-06-2018
12:42 PM
@Geoffrey Shelton Okot thanks for the response. Kafka has started, but its logs contain an error. PFA the kafka-zookeeper-kerberos-invalidacl.txt file.
09-05-2018
05:02 AM
Below is the output of klist:
klist -ket /etc/kafka/kafka.keytab
Keytab name: FILE:/etc/kafka/kafka.keytab
KVNO Timestamp           Principal
   2 09/04/2018 07:52:11 kafka@MSTORM (aes256-cts-hmac-sha1-96)
   2 09/04/2018 07:52:11 kafka@MSTORM (aes128-cts-hmac-sha1-96)
   2 09/04/2018 07:52:11 kafka@MSTORM (des3-cbc-sha1)
   2 09/04/2018 07:52:11 kafka@MSTORM (arcfour-hmac)
klist -ket /etc/kafka/zookeeper.keytab
Keytab name: FILE:/etc/kafka/zookeeper.keytab
KVNO Timestamp           Principal
   2 09/04/2018 07:51:50 zookeeper@MSTORM (aes256-cts-hmac-sha1-96)
   2 09/04/2018 07:51:50 zookeeper@MSTORM (aes128-cts-hmac-sha1-96)
   2 09/04/2018 07:51:50 zookeeper@MSTORM (des3-cbc-sha1)
   2 09/04/2018 07:51:50 zookeeper@MSTORM (arcfour-hmac)
klist -ket /etc/kafka/kafka-client.keytab
Keytab name: FILE:/etc/kafka/kafka-client.keytab
KVNO Timestamp           Principal
   2 09/04/2018 07:52:31 kafka-client@MSTORM (aes256-cts-hmac-sha1-96)
   2 09/04/2018 07:52:31 kafka-client@MSTORM (aes128-cts-hmac-sha1-96)
   2 09/04/2018 07:52:31 kafka-client@MSTORM (des3-cbc-sha1)
   2 09/04/2018 07:52:31 kafka-client@MSTORM (arcfour-hmac)
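Beyond listing keys, the keytabs can be validated end to end against the KDC. A sketch assuming the same keytab paths and the MSTORM realm from the klist output above:

```shell
# Obtain a TGT directly from the keytab; a failure here points at
# the keytab or KDC rather than at Kafka/ZooKeeper configuration.
kinit -kt /etc/kafka/kafka.keytab kafka@MSTORM
klist      # should show a krbtgt/MSTORM@MSTORM ticket
kdestroy
```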
09-04-2018
03:19 PM
I have configured ZooKeeper for Kerberos and it has started, but after configuring Kafka for Kerberos authentication the broker does not start, giving the error below:
./kafka-server-start.sh ../config/server.properties
[2018-09-04 08:14:50,014] INFO KafkaConfig values:
advertised.host.name = null
advertised.listeners = SASL_PLAINTEXT://kafkaBrokerIP:9092
advertised.port = null
authorizer.class.name =
auto.create.topics.enable = true
auto.leader.rebalance.enable = true
background.threads = 10
broker.id = 1
broker.id.generation.enable = true
broker.rack = null
compression.type = gzip
connections.max.idle.ms = 600000
controlled.shutdown.enable = true
controlled.shutdown.max.retries = 3
controlled.shutdown.retry.backoff.ms = 5000
controller.socket.timeout.ms = 30000
create.topic.policy.class.name = null
default.replication.factor = 2
delete.topic.enable = false
fetch.purgatory.purge.interval.requests = 1000
group.max.session.timeout.ms = 300000
group.min.session.timeout.ms = 6000
host.name = kafkaBrokerIP
inter.broker.listener.name = null
inter.broker.protocol.version = 0.10.2-IV0
leader.imbalance.check.interval.seconds = 300
leader.imbalance.per.broker.percentage = 10
listener.security.protocol.map = SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,TRACE:TRACE,SASL_SSL:SASL_SSL,PLAINTEXT:PLAINTEXT
listeners = SASL_PLAINTEXT://kafkaBrokerIP:9092
log.cleaner.backoff.ms = 15000
log.cleaner.dedupe.buffer.size = 134217728
log.cleaner.delete.retention.ms = 86400000
log.cleaner.enable = true
log.cleaner.io.buffer.load.factor = 0.9
log.cleaner.io.buffer.size = 524288
log.cleaner.io.max.bytes.per.second = 1.7976931348623157E308
log.cleaner.min.cleanable.ratio = 0.5
log.cleaner.min.compaction.lag.ms = 0
log.cleaner.threads = 1
log.cleanup.policy = [delete]
log.dir = /tmp/kafka-logs
log.dirs = /home/deepak/kafka/kafka-logs
log.flush.interval.messages = 9223372036854775807
log.flush.interval.ms = null
log.flush.offset.checkpoint.interval.ms = 60000
log.flush.scheduler.interval.ms = 9223372036854775807
log.index.interval.bytes = 4096
log.index.size.max.bytes = 10485760
log.message.format.version = 0.10.2-IV0
log.message.timestamp.difference.max.ms = 9223372036854775807
log.message.timestamp.type = CreateTime
log.preallocate = false
log.retention.bytes = -1
log.retention.check.interval.ms = 300000
log.retention.hours = 168
log.retention.minutes = null
log.retention.ms = null
log.roll.hours = 168
log.roll.jitter.hours = 0
log.roll.jitter.ms = null
log.roll.ms = null
log.segment.bytes = 1073741824
log.segment.delete.delay.ms = 60000
max.connections.per.ip = 2147483647
max.connections.per.ip.overrides =
message.max.bytes = 40000000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
min.insync.replicas = 1
num.io.threads = 8
num.network.threads = 9
num.partitions = 2
num.recovery.threads.per.data.dir = 1
num.replica.fetchers = 1
offset.metadata.max.bytes = 4096
offsets.commit.required.acks = -1
offsets.commit.timeout.ms = 5000
offsets.load.buffer.size = 5242880
offsets.retention.check.interval.ms = 600000
offsets.retention.minutes = 1440
offsets.topic.compression.codec = 0
offsets.topic.num.partitions = 50
offsets.topic.replication.factor = 3
offsets.topic.segment.bytes = 104857600
port = 9092
principal.builder.class = class org.apache.kafka.common.security.auth.DefaultPrincipalBuilder
producer.purgatory.purge.interval.requests = 1000
queued.max.requests = 500
quota.consumer.default = 9223372036854775807
quota.producer.default = 9223372036854775807
quota.window.num = 11
quota.window.size.seconds = 1
replica.fetch.backoff.ms = 1000
replica.fetch.max.bytes = 104857600
replica.fetch.min.bytes = 1
replica.fetch.response.max.bytes = 10485760
replica.fetch.wait.max.ms = 500
replica.high.watermark.checkpoint.interval.ms = 5000
replica.lag.time.max.ms = 10000
replica.socket.receive.buffer.bytes = 65536
replica.socket.timeout.ms = 30000
replication.quota.window.num = 11
replication.quota.window.size.seconds = 1
request.timeout.ms = 30000
reserved.broker.max.id = 1000
sasl.enabled.mechanisms = [GSSAPI]
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.principal.to.local.rules = [DEFAULT]
sasl.kerberos.service.name = kafka
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism.inter.broker.protocol = GSSAPI
security.inter.broker.protocol = SASL_PLAINTEXT
socket.receive.buffer.bytes = 102400
socket.request.max.bytes = 104857600
socket.send.buffer.bytes = 102400
ssl.cipher.suites = null
ssl.client.auth = none
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
unclean.leader.election.enable = true
zookeeper.connect = kafkaBrokerIP:2182
zookeeper.connection.timeout.ms = 30000
zookeeper.session.timeout.ms = 30000
zookeeper.set.acl = false
zookeeper.sync.time.ms = 2000
(kafka.server.KafkaConfig)
[2018-09-04 08:14:50,058] INFO starting (kafka.server.KafkaServer)
[2018-09-04 08:14:50,059] INFO Connecting to zookeeper on kafkaBrokerIP:2182 (kafka.server.KafkaServer)
[2018-09-04 08:14:50,068] INFO JAAS File name: /home/deepak/kafka/kafka_jaas.conf (org.I0Itec.zkclient.ZkClient)
[2018-09-04 08:14:50,069] INFO Starting ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2018-09-04 08:14:50,072] INFO Client environment:zookeeper.version=3.4.9-1757313, built on 08/23/2016 06:50 GMT (org.apache.zookeeper.ZooKeeper)
[2018-09-04 08:14:50,072] INFO Client environment:host.name=ubuntu26.mstorm.com (org.apache.zookeeper.ZooKeeper)
[2018-09-04 08:14:50,072] INFO Client environment:java.version=1.8.0_171 (org.apache.zookeeper.ZooKeeper)
[2018-09-04 08:14:50,072] INFO Client environment:java.vendor=Oracle Corporation (org.apache.zookeeper.ZooKeeper)
[2018-09-04 08:14:50,072] INFO Client environment:java.home=/usr/lib/jvm/java-8-openjdk-amd64/jre (org.apache.zookeeper.ZooKeeper)
[2018-09-04 08:14:50,072] INFO Client environment:java.class.path=:/home/deepak/kafka/bin/../libs/aopalliance-repackaged-2.5.0-b05.jar:/home/deepak/kafka/bin/../libs/argparse4j-0.7.0.jar:/home/deepak/kafka/bin/../libs/connect-api-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/connect-file-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/connect-json-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/connect-runtime-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/connect-transforms-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/guava-18.0.jar:/home/deepak/kafka/bin/../libs/hk2-api-2.5.0-b05.jar:/home/deepak/kafka/bin/../libs/hk2-locator-2.5.0-b05.jar:/home/deepak/kafka/bin/../libs/hk2-utils-2.5.0-b05.jar:/home/deepak/kafka/bin/../libs/jackson-annotations-2.8.0.jar:/home/deepak/kafka/bin/../libs/jackson-annotations-2.8.5.jar:/home/deepak/kafka/bin/../libs/jackson-core-2.8.5.jar:/home/deepak/kafka/bin/../libs/jackson-databind-2.8.5.jar:/home/deepak/kafka/bin/../libs/jackson-jaxrs-base-2.8.5.jar:/home/deepak/kafka/bin/../libs/jackson-jaxrs-json-provider-2.8.5.jar:/home/deepak/kafka/bin/../libs/jackson-module-jaxb-annotations-2.8.5.jar:/home/deepak/kafka/bin/../libs/javassist-3.20.0-GA.jar:/home/deepak/kafka/bin/../libs/javax.annotation-api-1.2.jar:/home/deepak/kafka/bin/../libs/javax.inject-1.jar:/home/deepak/kafka/bin/../libs/javax.inject-2.5.0-b05.jar:/home/deepak/kafka/bin/../libs/javax.servlet-api-3.1.0.jar:/home/deepak/kafka/bin/../libs/javax.ws.rs-api-2.0.1.jar:/home/deepak/kafka/bin/../libs/jersey-client-2.24.jar:/home/deepak/kafka/bin/../libs/jersey-common-2.24.jar:/home/deepak/kafka/bin/../libs/jersey-container-servlet-2.24.jar:/home/deepak/kafka/bin/../libs/jersey-container-servlet-core-2.24.jar:/home/deepak/kafka/bin/../libs/jersey-guava-2.24.jar:/home/deepak/kafka/bin/../libs/jersey-media-jaxb-2.24.jar:/home/deepak/kafka/bin/../libs/jersey-server-2.24.jar:/home/deepak/kafka/bin/../libs/jetty-continuation-9.2.15.v20160210.jar:/home/deepak/kafka/bin/../libs/jetty-http-9.2.15.v20160
210.jar:/home/deepak/kafka/bin/../libs/jetty-io-9.2.15.v20160210.jar:/home/deepak/kafka/bin/../libs/jetty-security-9.2.15.v20160210.jar:/home/deepak/kafka/bin/../libs/jetty-server-9.2.15.v20160210.jar:/home/deepak/kafka/bin/../libs/jetty-servlet-9.2.15.v20160210.jar:/home/deepak/kafka/bin/../libs/jetty-servlets-9.2.15.v20160210.jar:/home/deepak/kafka/bin/../libs/jetty-util-9.2.15.v20160210.jar:/home/deepak/kafka/bin/../libs/jopt-simple-5.0.3.jar:/home/deepak/kafka/bin/../libs/kafka_2.12-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/kafka_2.12-0.10.2.0-sources.jar:/home/deepak/kafka/bin/../libs/kafka_2.12-0.10.2.0-test-sources.jar:/home/deepak/kafka/bin/../libs/kafka-clients-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/kafka-log4j-appender-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/kafka-streams-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/kafka-streams-examples-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/kafka-tools-0.10.2.0.jar:/home/deepak/kafka/bin/../libs/log4j-1.2.17.jar:/home/deepak/kafka/bin/../libs/lz4-1.3.0.jar:/home/deepak/kafka/bin/../libs/metrics-core-2.2.0.jar:/home/deepak/kafka/bin/../libs/osgi-resource-locator-1.0.1.jar:/home/deepak/kafka/bin/../libs/reflections-0.9.10.jar:/home/deepak/kafka/bin/../libs/rocksdbjni-5.0.1.jar:/home/deepak/kafka/bin/../libs/scala-library-2.12.1.jar:/home/deepak/kafka/bin/../libs/scala-parser-combinators_2.12-1.0.4.jar:/home/deepak/kafka/bin/../libs/slf4j-api-1.7.21.jar:/home/deepak/kafka/bin/../libs/slf4j-log4j12-1.7.21.jar:/home/deepak/kafka/bin/../libs/snappy-java-1.1.2.6.jar:/home/deepak/kafka/bin/../libs/validation-api-1.1.0.Final.jar:/home/deepak/kafka/bin/../libs/zkclient-0.10.jar:/home/deepak/kafka/bin/../libs/zookeeper-3.4.9.jar (org.apache.zookeeper.ZooKeeper)
[2018-09-04 08:14:50,073] INFO Client environment:java.library.path=/usr/java/packages/lib/amd64:/usr/lib/x86_64-linux-gnu/jni:/lib/x86_64-linux-gnu:/usr/lib/x86_64-linux-gnu:/usr/lib/jni:/lib:/usr/lib (org.apache.zookeeper.ZooKeeper)
[2018-09-04 08:14:50,073] INFO Client environment:java.io.tmpdir=/tmp (org.apache.zookeeper.ZooKeeper)
[2018-09-04 08:14:50,073] INFO Client environment:java.compiler=<NA> (org.apache.zookeeper.ZooKeeper)
[2018-09-04 08:14:50,073] INFO Client environment:os.name=Linux (org.apache.zookeeper.ZooKeeper)
[2018-09-04 08:14:50,073] INFO Client environment:os.arch=amd64 (org.apache.zookeeper.ZooKeeper)
[2018-09-04 08:14:50,073] INFO Client environment:os.version=4.4.0-128-generic (org.apache.zookeeper.ZooKeeper)
[2018-09-04 08:14:50,073] INFO Client environment:user.name=root (org.apache.zookeeper.ZooKeeper)
[2018-09-04 08:14:50,073] INFO Client environment:user.home=/root (org.apache.zookeeper.ZooKeeper)
[2018-09-04 08:14:50,073] INFO Client environment:user.dir=/home/deepak/kafka/bin (org.apache.zookeeper.ZooKeeper)
[2018-09-04 08:14:50,073] INFO Initiating client connection, connectString=kafkaBrokerIP:2182 sessionTimeout=30000 watcher=org.I0Itec.zkclient.ZkClient@56de5251 (org.apache.zookeeper.ZooKeeper)
[2018-09-04 08:14:50,084] INFO Waiting for keeper state SaslAuthenticated (org.I0Itec.zkclient.ZkClient)
Debug is true storeKey true useTicketCache false useKeyTab true doNotPrompt false ticketCache is null isInitiator true KeyTab is /etc/kafka/kafka.keytab refreshKrb5Config is false principal is kafka@MSTORM.COM tryFirstPass is false useFirstPass is false storePass is false clearPass is false
>>> KeyTabInputStream, readName(): MSTORM
>>> KeyTabInputStream, readName(): kafka
>>> KeyTab: load() entry length: 62; type: 18
>>> KeyTabInputStream, readName(): MSTORM
>>> KeyTabInputStream, readName(): kafka
>>> KeyTab: load() entry length: 46; type: 17
>>> KeyTabInputStream, readName(): MSTORM
>>> KeyTabInputStream, readName(): kafka
>>> KeyTab: load() entry length: 54; type: 16
>>> KeyTabInputStream, readName(): MSTORM
>>> KeyTabInputStream, readName(): kafka
>>> KeyTab: load() entry length: 46; type: 23
Looking for keys for: kafka@MSTORM.COM
Key for the principal kafka@MSTORM.COM not available in /etc/kafka/kafka.keytab
[2018-09-04 08:14:50,088] WARN Could not login: the client is being asked for a password, but the Zookeeper client code does not currently support obtaining a password from the user. Make sure that the client is configured to use a ticket cache (using the JAAS configuration setting 'useTicketCache=true)' and restart the client. If you still get this message after that, the TGT in the ticket cache has expired and must be manually refreshed. To do so, first determine if you are using a password or a keytab. If the former, run kinit in a Unix shell in the environment of the user who is running this Zookeeper client using the command 'kinit <princ>' (where <princ> is the name of the client's Kerberos principal). If the latter, do 'kinit -k -t <keytab> <princ>' (where <princ> is the name of the Kerberos principal, and <keytab> is the location of the keytab file). After manually refreshing your cache, restart this client. If you continue to see this message after manually refreshing your cache, ensure that your KDC host's clock is in sync with this host's clock. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[Krb5LoginModule] authentication failed
No password provided
[2018-09-04 08:14:50,089] WARN SASL configuration failed: javax.security.auth.login.LoginException: No password provided Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)
[2018-09-04 08:14:50,090] INFO Opening socket connection to server kafkaBrokerIP/kafkaBrokerIP:2182 (org.apache.zookeeper.ClientCnxn)
[2018-09-04 08:14:50,090] INFO zookeeper state changed (AuthFailed) (org.I0Itec.zkclient.ZkClient)
[2018-09-04 08:14:50,090] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2018-09-04 08:14:50,093] INFO Socket connection established to kafkaBrokerIP/kafkaBrokerIP:2182, initiating session (org.apache.zookeeper.ClientCnxn)
[2018-09-04 08:14:50,114] INFO Session establishment complete on server kafkaBrokerIP/kafkaBrokerIP:2182, sessionid = 0x165a472b05e0002, negotiated timeout = 30000 (org.apache.zookeeper.ClientCnxn)
[2018-09-04 08:14:50,120] INFO Session: 0x165a472b05e0002 closed (org.apache.zookeeper.ZooKeeper)
[2018-09-04 08:14:50,121] INFO EventThread shut down for session: 0x165a472b05e0002 (org.apache.zookeeper.ClientCnxn)
[2018-09-04 08:14:50,121] FATAL Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.I0Itec.zkclient.exception.ZkAuthFailedException: Authentication failure
at org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:947)
at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:924)
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1231)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:79)
at kafka.utils.ZkUtils$.apply(ZkUtils.scala:61)
at kafka.server.KafkaServer.initZk(KafkaServer.scala:329)
at kafka.server.KafkaServer.startup(KafkaServer.scala:187)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
[2018-09-04 08:14:50,123] INFO shutting down (kafka.server.KafkaServer)
[2018-09-04 08:14:50,127] INFO shut down completed (kafka.server.KafkaServer)
[2018-09-04 08:14:50,128] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
org.I0Itec.zkclient.exception.ZkAuthFailedException: Authentication failure
at org.I0Itec.zkclient.ZkClient.waitForKeeperState(ZkClient.java:947)
at org.I0Itec.zkclient.ZkClient.waitUntilConnected(ZkClient.java:924)
at org.I0Itec.zkclient.ZkClient.connect(ZkClient.java:1231)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:157)
at org.I0Itec.zkclient.ZkClient.<init>(ZkClient.java:131)
at kafka.utils.ZkUtils$.createZkClientAndConnection(ZkUtils.scala:79)
at kafka.utils.ZkUtils$.apply(ZkUtils.scala:61)
at kafka.server.KafkaServer.initZk(KafkaServer.scala:329)
at kafka.server.KafkaServer.startup(KafkaServer.scala:187)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
[2018-09-04 08:14:50,129] INFO shutting down (kafka.server.KafkaServer)
root@ubuntu26:/home/deepak/kafka/bin# vim ../zookeeper_jaas.conf
root@ubuntu26:/home/deepak/kafka/bin# vim ../kafka_jaas.conf
root@ubuntu26:/home/deepak/kafka/bin# kinit kafka@MSTORM.COM
kinit: Cannot find KDC for realm "MSTORM.COM" while getting initial credentials
root@ubuntu26:/home/deepak/kafka/bin# kinit -k -t /etc/kafka/kafka.keytab kafka@MSTORM.COM
kinit: Keytab contains no suitable keys for kafka@MSTORM.COM while getting initial credentials

Can you check what is the issue here?
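For reference, the "Cannot find KDC for realm" error from kinit usually means /etc/krb5.conf on this host has no [realms] entry for MSTORM.COM. A minimal sketch of what I would expect that section to look like (the KDC hostname below is a placeholder, not my actual host):

```ini
[libdefaults]
    default_realm = MSTORM.COM

[realms]
    MSTORM.COM = {
        ; placeholder hostname - replace with the real KDC host
        kdc = kdc.mstorm.com
        admin_server = kdc.mstorm.com
    }

[domain_realm]
    .mstorm.com = MSTORM.COM
    mstorm.com = MSTORM.COM
```

The second kinit error ("Keytab contains no suitable keys") additionally suggests the principal stored in /etc/kafka/kafka.keytab may not match kafka@MSTORM.COM exactly; `klist -kt /etc/kafka/kafka.keytab` shows the exact principal and realm in the keytab.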
Labels:
- Apache Kafka