Member since: 07-24-2017
Posts: 42
Kudos Received: 0
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
| 4168 | 12-21-2018 02:30 PM
| 1126 | 11-23-2018 09:06 AM
12-21-2018
02:30 PM
All, thanks for your responses. I found the root cause of the issue: Ambari was using its master key as the KDC admin credentials, which is why it kept reporting "Missing KDC administrator credentials. Please enter admin principal and password". I removed that credential file (PFA) and the issue was solved. Note for others: you may need to keep the Ambari master key and the KDC admin credentials the same, because that file is required at ambari-server restart (if you have configured jceks). PFA, kerberos-admin-creds-issue-solved.png
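For anyone else hitting this, a minimal sketch of the workaround described above (the jceks path is the one reported later in this thread; verify it on your own installation before moving anything):

```shell
#!/bin/sh
# Move the stored credential file aside so Ambari prompts for fresh KDC admin
# credentials. Keep the backup: the file is needed again at ambari-server
# restart when jceks is configured.
backup_creds() {
    creds="$1"
    if [ -f "$creds" ]; then
        mv "$creds" "$creds.bak"
    fi
}

backup_creds "/var/lib/ambari-server/keys/credentials.jceks"
# ambari-server restart   # run afterwards; restore the backup first if jceks is configured
```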
12-21-2018
07:09 AM
All, thanks for your responses. I found the root cause of this issue in my case: Ambari was using the Ambari master key for the KDC admin credentials, stored at /var/lib/ambari-server/keys/credentials.jceks. I took a backup of it and was then able to complete 'Enable Kerberos' through the Ambari UI. However, that previous file is required at ambari-server restart, so the Ambari master key needs to be kept the same as the KDC admin key (password). PFA, kerberos-admin-creds-issue-solved.png
12-20-2018
04:39 AM
@Geoffrey Shelton Okot PFA for services restart, services-restart.png
12-18-2018
04:10 AM
OK. You can check the files I have already attached in the comments above. The absolute paths are /etc/hosts, /etc/krb5.conf, /etc/krb5kdc/kadm5.acl, and /etc/krb5kdc/kdc.conf. PFA, krb5conf.png kdcconf.png kadm5conf.png hosts.png
12-13-2018
11:13 AM
@Geoffrey Shelton Okot Regarding "Apart from the above can you share a tokenized version of the below files" - sorry, I did not get what you asked for there. klist -V gives: Kerberos 5 version 1.13.2. And the KDC server's hostname -f output is ubuntu19.example.com. Check the attached files from the KDC server: krb5conf.png kdcconf.png kadm5conf.png hosts.png
12-13-2018
04:09 AM
@Robert Levas It is showing the expected output. PFA, keytool-output.png
12-12-2018
01:14 PM
@Robert Levas I checked the ambari-server.log file at that time, made the password store persistent by executing the command below, and was then able to tick the save-password box in the Ambari UI:
curl -H "X-Requested-By:ambari" -u admin:admin -X PUT -d '{ "Credential" : { "principal" : "kadmin/admin@EXAMPLE.COM", "key" : "123456", "type" : "persisted" } }' http://ambari-server-host-ip:8080/api/v1/clusters/Ambari_PreDev/credentials/kdc.admin.credential
But the Ambari UI is still giving the missing-credentials exception and is unable to kerberize the cluster. Also, the last admin principal I created is admin/admin@EXAMPLE.COM and the changed password is password. PFA, ambari-server-logs.png
12-11-2018
01:00 PM
@Sampath Kumar I had already tried the steps above, and tried them again as you suggested, starting from reinstallation of Kerberos, but I am facing the same issue.
12-11-2018
12:14 PM
I have set up Kerberos and enabled it in Ambari successfully on one environment, but while trying the same on another environment I am facing an issue while enabling Kerberos. I have tried storing the credentials using keytool and the REST API, and checked the Kerberos descriptors, but no luck. What else is left to check? PFA, missing-kdc-credentials.png
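One more thing worth checking is whether the credential is actually stored: a GET against the same REST resource used elsewhere in this thread to store it should show the stored alias if it exists. A hypothetical helper for building that URL (host and cluster name here are placeholders):

```shell
#!/bin/sh
# Build the credential-resource URL for a given Ambari host and cluster name.
cred_url() {
    printf 'http://%s:8080/api/v1/clusters/%s/credentials/kdc.admin.credential' "$1" "$2"
}

# Example (needs a running Ambari server, so the curl is commented out here):
# curl -u admin:admin -H "X-Requested-By:ambari" "$(cred_url ambari-host Ambari_PreDev)"
echo "$(cred_url ambari-host Ambari_PreDev)"
```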
12-11-2018
10:30 AM
Did you mean that the credentials are stored in the Ambari server's memory, and that since I got this exception I should wait for the next 90 minutes before trying again?
12-10-2018
12:03 PM
@Jay Kumar SenSharma I have tried clearing browser cookies (Chrome/Firefox) and storing the KDC credentials through both keytool and the REST API, but no luck. I completed this setup successfully on one environment, but am now facing this issue on another. PFA, missing-kdc-credentials.png
12-05-2018
03:18 PM
Datanodes are not able to communicate with the namenode after enabling Kerberos in the Ambari UI. I have checked that the clusterIDs of all datanodes and the namenode are identical. Datanode logs:
2018-12-05 16:22:07,147 WARN datanode.DataNode (BPOfferService.java:getBlockPoolId(213)) - Block pool ID needed, but service not yet registered with NN, trace:
java.lang.Exception
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:213)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.getBlockPoolId(BPOfferService.java:224)
at org.apache.hadoop.hdfs.server.datanode.DataNode.getNamenodeAddresses(DataNode.java:3095)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:193)
at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:175)
at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:117)
at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:54)
at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
at com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
at com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
at com.sun.jmx.mbeanserver.JmxMBeanServer.getAttribute(JmxMBeanServer.java:678)
at org.apache.hadoop.jmx.JMXJsonServlet.writeAttribute(JMXJsonServlet.java:338)
at org.apache.hadoop.jmx.JMXJsonServlet.listBeans(JMXJsonServlet.java:316)
at org.apache.hadoop.jmx.JMXJsonServlet.doGet(JMXJsonServlet.java:210)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:687)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:790)
at org.eclipse.jetty.servlet.ServletHolder.handle(ServletHolder.java:848)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1772)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:644)
at org.apache.hadoop.security.authentication.server.AuthenticationFilter.doFilter(AuthenticationFilter.java:592)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at org.apache.hadoop.http.HttpServer2$QuotingInputFilter.doFilter(HttpServer2.java:1604)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at org.apache.hadoop.http.NoCacheFilter.doFilter(NoCacheFilter.java:45)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1759)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:582)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:143)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:548)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:226)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1180)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:512)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
at org.eclipse.jetty.server.Server.handle(Server.java:534)
at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:320)
at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
at java.lang.Thread.run(Thread.java:748)
Namenode logs:
construction: 0
2018-12-05 15:33:57,348 INFO block.BlockTokenSecretManager (BlockTokenSecretManager.java:updateKeys(240)) - Updating block keys
2018-12-05 15:33:57,351 INFO hdfs.StateChange (BlockManagerSafeMode.java:reportStatus(602)) - STATE* Safe mode ON.
The reported blocks 0 needs additional 3979 blocks to reach the threshold 1.0000 of total blocks 3979.
The number of live datanodes 0 has reached the minimum number 0. Safe mode will be turned off automatically once the thresholds have been reached.
2018-12-05 15:33:57,386 INFO ipc.Server (Server.java:run(1314)) - IPC Server Responder: starting
2018-12-05 15:33:57,386 INFO ipc.Server (Server.java:run(1153)) - IPC Server listener on 8020: starting
2018-12-05 15:33:57,436 INFO ipc.Server (Server.java:doRead(1256)) - Socket Reader #1 for port 8020: readAndProcess from client 10.13.10.23:34368 threw exception [org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]]
org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Server$Connection.initializeAuthContext(Server.java:2136)
at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:2085)
at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:1249)
at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1105)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1076)
2018-12-05 15:33:57,466 INFO ipc.Server (Server.java:doRead(1256)) - Socket Reader #1 for port 8020: readAndProcess from client 10.13.10.23:34376 threw exception [org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]]
org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Server$Connection.initializeAuthContext(Server.java:2136)
at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:2085)
at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:1249)
at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1105)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1076)
2018-12-05 15:33:57,481 INFO namenode.NameNode (NameNode.java:startCommonServices(812)) - NameNode RPC up at: ubuntu19.mcloud.com/10.13.10.19:8020
2018-12-05 15:33:57,483 INFO namenode.FSNamesystem (FSNamesystem.java:startActiveServices(1207)) - Starting services required for active state
2018-12-05 15:33:57,483 INFO namenode.FSDirectory (FSDirectory.java:updateCountForQuota(767)) - Initializing quota with 4 thread(s)
2018-12-05 15:33:57,693 INFO namenode.FSDirectory (FSDirectory.java:updateCountForQuota(776)) - Quota initialization completed in 208 milliseconds
name space=36151
storage space=161822274807
storage types=RAM_DISK=0, SSD=0, DISK=3874952379, ARCHIVE=0, PROVIDED=0
2018-12-05 15:33:57,701 INFO blockmanagement.CacheReplicationMonitor (CacheReplicationMonitor.java:run(160)) - Starting CacheReplicationMonitor with interval 30000 milliseconds
2018-12-05 15:33:57,733 INFO ipc.Server (Server.java:authorizeConnection(2562)) - Connection from 10.13.10.22:36473 for protocol org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol is unauthorized for user dn/ubuntu22.mcloud.com@MCLOUD.COM (auth:KERBEROS)
2018-12-05 15:33:57,738 INFO ipc.Server (Server.java:authorizeConnection(2562)) - Connection from 10.13.10.24:46794 for protocol org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol is unauthorized for user dn/ubuntu24.mcloud.com@MCLOUD.COM (auth:KERBEROS)
2018-12-05 15:33:57,739 INFO ipc.Server (Server.java:authorizeConnection(2562)) - Connection from 10.13.10.21:33234 for protocol org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol is unauthorized for user dn/ubuntu21.mcloud.com@MCLOUD.COM (auth:KERBEROS)
2018-12-05 15:33:57,800 INFO ipc.Server (Server.java:authorizeConnection(2562)) - Connection from 10.13.10.20:33021 for protocol org.apache.hadoop.hdfs.server.protocol.DatanodeProtocol is unauthorized for user dn/ubuntu20.mcloud.com@MCLOUD.COM (auth:KERBEROS)
2018-12-05 15:33:57,976 INFO ipc.Server (Server.java:logException(2726)) - IPC Server handler 0 on 8020, call Call#0 Retry#0 org.apache.hadoop.hdfs.protocol.ClientProtocol.setSafeMode from 10.13.10.19:51576
org.apache.hadoop.ipc.RetriableException: NameNode still not started
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.checkNNStartup(NameNodeRpcServer.java:2210)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.setSafeMode(NameNodeRpcServer.java:1223)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.setSafeMode(ClientNamenodeProtocolServerSideTranslatorPB.java:846)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:524)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1025)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:876)
at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:822)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1730)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2682)
2018-12-05 15:33:58,127 INFO ipc.Server (Server.java:doRead(1256)) - Socket Reader #1 for port 8020: readAndProcess from client 10.13.10.23:34380 threw exception [org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]]
org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Server$Connection.initializeAuthContext(Server.java:2136)
at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:2085)
at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:1249)
at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1105)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1076)
2018-12-05 15:33:58,171 INFO ipc.Server (Server.java:doRead(1256)) - Socket Reader #1 for port 8020: readAndProcess from client 10.13.10.23:34382 threw exception [org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]]
org.apache.hadoop.security.AccessControlException: SIMPLE authentication is not enabled. Available:[TOKEN, KERBEROS]
at org.apache.hadoop.ipc.Server$Connection.initializeAuthContext(Server.java:2136)
at org.apache.hadoop.ipc.Server$Connection.readAndProcess(Server.java:2085)
at org.apache.hadoop.ipc.Server$Listener.doRead(Server.java:1249)
at org.apache.hadoop.ipc.Server$Listener$Reader.doRunLoop(Server.java:1105)
at org.apache.hadoop.ipc.Server$Listener$Reader.run(Server.java:1076)
2018-12-05 15:33:58,375 INFO fs.TrashPolicyDefault (TrashPolicyDefault.java:<init>(228)) - The configured checkpoint interval is 0 minutes. Using an interval of 360 minutes that is used for deletion instead
2018-12-05 15:33:58,375 INFO fs.TrashPolicyDefault (TrashPolicyDefault.java:<init>(235)) - Namenode trash configuration: Deletion interval = 360 minutes, Emptier interval = 0 minutes.
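A note on the repeated "is unauthorized for user dn/..." lines above: they usually mean the DataNode principal is not being mapped to the HDFS service user by hadoop.security.auth_to_local. A sketch of the kind of rule involved (realm taken from the logs above; the exact rules Ambari generates for your cluster may differ):

```
RULE:[2:$1@$0](dn@MCLOUD.COM)s/.*/hdfs/
DEFAULT
```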
11-23-2018
09:06 AM
While regenerating principals it was giving the above error, possibly because it takes the principal names from the Ambari database (Postgres).
11-23-2018
06:10 AM
Do we have to manually put the actual hostname in place of _HOST here? configuration.set("hbase.regionserver.kerberos.principal", "hbase/_HOST@FIELD.HORTONWORKS.COM");
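For reference, Hadoop-based clients normally expand _HOST at runtime to the local canonical hostname, so manual substitution in code is usually unnecessary. The expansion amounts to roughly this (a sketch, assuming hostname -f returns the canonical name):

```shell
#!/bin/sh
# What _HOST expansion amounts to: splice the canonical local hostname into
# the principal template.
expand_host() {
    template="$1"
    host="$2"
    printf '%s' "$template" | sed "s/_HOST/$host/"
}

host="$(hostname -f 2>/dev/null || echo rs1.example.com)"
expand_host 'hbase/_HOST@FIELD.HORTONWORKS.COM' "$host"
```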
11-23-2018
05:07 AM
While regenerating principals it was giving the above error, possibly because it takes the principal names from the Ambari database (Postgres).
11-22-2018
02:50 AM
Yes, I had customized the zookeeper and hbase principals in the Kerberos configuration through Ambari, but I later changed them back to the defaults. When I try to regenerate principals it gives the above error. Where is it taking these principals from, given that I have destroyed the Kerberos database? Any solution?
11-21-2018
05:41 PM
I have destroyed the Kerberos database and created a new one, but I am still getting the above error.
11-21-2018
05:41 PM
I have Kerberos and Ambari set up; I was able to enable/disable Kerberos through Ambari and regenerate principals, but now I am getting the below error in the Ambari UI:
2018-11-21 04:01:14,662 - Failed to create principal, zookeeper/local4.domain.coma@DOMAIN.COM,zookeeper/ubuntu25.domain.com@DOMAIN.COM,zookeeper/ubuntu26.domain.com@DOMAIN.COM - Failed to create service principal for zookeeper/local4.domain.com@DOMAIN.COM,zookeeper/ubuntu25.domain.com@DOMAIN.COM,zookeeper/ubuntu26.domain.com@DOMAIN.COM
STDOUT: Authenticating as principal kadmin/admin@DOMAIN.COM with existing credentials.
STDERR: add_principal: Malformed representation of principal while parsing principal
usage: add_principal [options] principal
options are:
[-randkey|-nokey] [-x db_princ_args]* [-expire expdate] [-pwexpire pwexpdate] [-maxlife maxtixlife]
[-kvno kvno] [-policy policy] [-clearpolicy]
[-pw password] [-maxrenewlife maxrenewlife]
[-e keysaltlist]
[{+|-}attribute]
attributes are:
allow_postdated allow_forwardable allow_tgs_req allow_renewable
allow_proxiable allow_dup_skey allow_tix requires_preauth
requires_hwauth needchange allow_svr password_changing_service
ok_as_delegate ok_to_auth_as_delegate no_auth_data_required
where,
[-x db_princ_args]* - any number of database specific arguments.
Look at each database documentation for supported arguments
Administration credentials NOT DESTROYED.
2018-11-21 04:01:16,073 - Failed to create principal, hbase/local4.domain.com@DOMAIN.COM,hbase/ubuntu25.domain.com@DOMAIN.COM,hbase/ubuntu26.domain.com@DOMAIN.COM - Failed to create service principal for hbase/local4.domain.com@DOMAIN.COM,hbase/ubuntu25.domain.com@DOMAIN.COM,hbase/ubuntu26.domain.com@DOMAIN.COM
STDOUT: Authenticating as principal kadmin/admin@DOMAIN.COM with existing credentials.
STDERR: add_principal: Malformed representation of principal while parsing principal
usage: add_principal [options] principal
options are:
[-randkey|-nokey] [-x db_princ_args]* [-expire expdate] [-pwexpire pwexpdate] [-maxlife maxtixlife]
[-kvno kvno] [-policy policy] [-clearpolicy]
[-pw password] [-maxrenewlife maxrenewlife]
[-e keysaltlist]
[{+|-}attribute]
attributes are:
allow_postdated allow_forwardable allow_tgs_req allow_renewable
allow_proxiable allow_dup_skey allow_tix requires_preauth
requires_hwauth needchange allow_svr password_changing_service
ok_as_delegate ok_to_auth_as_delegate no_auth_data_required
where,
[-x db_princ_args]* - any number of database specific arguments.
Look at each database documentation for supported arguments
Administration credentials NOT DESTROYED.
Can anyone check?
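"Malformed representation of principal" suggests kadmin is being handed the whole comma-joined list as a single principal; add_principal takes one principal per invocation. A hedged sketch of creating them one at a time (host list and realm copied from the error above; the kadmin call itself is commented out since it needs KDC admin access):

```shell
#!/bin/sh
# Build one addprinc command per service principal instead of a comma list.
build_addprinc() {
    printf 'addprinc -randkey %s/%s@%s' "$1" "$2" "$3"
}

for host in local4.domain.com ubuntu25.domain.com ubuntu26.domain.com; do
    cmd=$(build_addprinc zookeeper "$host" DOMAIN.COM)
    echo "$cmd"
    # kadmin -p kadmin/admin@DOMAIN.COM -q "$cmd"
done
```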
09-26-2018
09:26 AM
@Jay Kumar SenSharma any solution to this?
09-18-2018
01:42 PM
I have configured Zookeeper and Kafka for Kerberos authentication and am able to produce and consume messages on the console. I am also able to create topics from Spark Streaming, but I am not able to produce messages.
Krb5Context.wrap: token=[05 04 00 ff 00 0c 00 00 00 00 00 00 06 4a 2d e7 01 01 00 00 6b 61 66 6b 61 2f 75 62 75 6e 74 75 32 36 2e 6d 73 74 6f 72 6d 2e 63 6f 6d 40 4d 53 54 4f 52 4d 2e 43 4f 4d ae e5 84 77 2e b4 28 97 6a c2 ff 91 ]
BROADCAST
BUSINESS_LOGIC_RESPONSE
INITIAL_SYNC_NODE
JAVATONODE
LOGIN
LOGIN_RESPONSE
REGISTERED_QUEUE
SERVER_DIFF_SYNC_NODE
SPARK_BROADCAST_RESPONSE
SPARK_CONSUMER_TEST
SPARK_CONSUMER_TEST_2
SPARK_CONSUMER_TEST_OUTPUT
SPARK_PRODUCER_TEST_OUTPUT
__consumer_offsets
test
09-17-2018
10:41 AM
Below is the list of Kafka ACLs on the test topic:
Krb5Context.wrap: token=[05 04 00 ff 00 0c 00 00 00 00 00 00 27 d9 c3 5b 01 01 00 00 6b 61 66 6b 61 2f 75 62 75 6e 74 75 32 36 2e 6d 73 74 6f 72 6d 2e 63 6f 6d 40 4d 53 54 4f 52 4d 2e 43 4f 4d 9c d0 cc bf 71 74 35 93 38 71 59 a0 ]
Current ACLs for resource `Topic:test`:
user:root has Allow permission for operations: Read from hosts: kafka1.example.com
user:deepak has Allow permission for operations: Read from hosts: kafka1.example.com
user:root has Allow permission for operations: Write from hosts: kafka1.example.com
user:deepak has Allow permission for operations: Write from hosts: kafka1.example.com
09-16-2018
05:50 PM
@Jay Kumar SenSharma thanks for the response. The Kafka ACLs are set properly for the producer and consumer on the test topic, but it is still giving "UNKNOWN_TOPIC_OR_PARTITION".
09-15-2018
08:53 PM
I am getting the below error on the producer while producing a message:
Krb5Context.wrap: token=[05 04 00 ff 00 0c 00 00 00 00 00 00 18 86 c2 46 01 01 00 00 6b 61 66 6b 61 2d 63 6c 69 65 6e 74 2f 75 62 75 6e 74 75 32 36 2e 6d 73 74 6f 72 6d 2e 63 6f 6d 40 4d 53 54 4f 52 4d 2e 43 4f 4d 46 80 3d 15 92 45 c2 58 cd 12 11 76 ]
[2018-09-14 11:12:39,775] WARN Error while fetching metadata with correlation id 1 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:39,874] WARN Error while fetching metadata with correlation id 2 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:39,978] WARN Error while fetching metadata with correlation id 3 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:40,082] WARN Error while fetching metadata with correlation id 4 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:40,186] WARN Error while fetching metadata with correlation id 5 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:40,290] WARN Error while fetching metadata with correlation id 6 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:40,394] WARN Error while fetching metadata with correlation id 7 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
^C[2018-09-14 11:12:40,409] WARN [Principal=kafka-client/kafka1.example.com@EXAMPLE.COMa]: TGT renewal thread has been interrupted and will exit. (org.apache.kafka.common.security.kerberos.KerberosLogin)
09-14-2018
07:31 PM
@Jay Kumar SenSharma thanks for the response; that issue was solved. I am now getting the below error on the producer while producing a message:
Krb5Context.wrap: token=[05 04 00 ff 00 0c 00 00 00 00 00 00 18 86 c2 46 01 01 00 00 6b 61 66 6b 61 2d 63 6c 69 65 6e 74 2f 75 62 75 6e 74 75 32 36 2e 6d 73 74 6f 72 6d 2e 63 6f 6d 40 4d 53 54 4f 52 4d 2e 43 4f 4d 46 80 3d 15 92 45 c2 58 cd 12 11 76 ]
[2018-09-14 11:12:39,775] WARN Error while fetching metadata with correlation id 1 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:39,874] WARN Error while fetching metadata with correlation id 2 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:39,978] WARN Error while fetching metadata with correlation id 3 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:40,082] WARN Error while fetching metadata with correlation id 4 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:40,186] WARN Error while fetching metadata with correlation id 5 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:40,290] WARN Error while fetching metadata with correlation id 6 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
[2018-09-14 11:12:40,394] WARN Error while fetching metadata with correlation id 7 : {test=UNKNOWN_TOPIC_OR_PARTITION} (org.apache.kafka.clients.NetworkClient)
^C[2018-09-14 11:12:40,409] WARN [Principal=kafka-client/kafka1.example.com@EXAMPLE.COMa]: TGT renewal thread has been interrupted and will exit. (org.apache.kafka.common.security.kerberos.KerberosLogin)
09-11-2018
12:06 PM
@Jay Kumar SenSharma thanks for the response. I had provided ssl.truststore.password.generator but not ssl.truststore.password. Now that I have added ssl.truststore.password, Kafka has started, but it is not able to produce messages, giving the error below:
[2018-09-11 01:21:52,015] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2018-09-11 01:22:52,020] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
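For anyone else hitting the "trust store password is not specified" failure shown later in this thread, the fix described above corresponds to having both of these set together in Kafka's server.properties (a sketch; the path and password here are placeholders):

```
ssl.truststore.location=/etc/kafka/ssl/kafka.server.truststore.jks
ssl.truststore.password=changeit
```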
09-11-2018
12:06 PM
Will use keytab Commit Succeeded
a s d f
[2018-09-11 03:08:16,347] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2018-09-11 03:09:16,349] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2018-09-11 03:10:16,349] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
a
s
ddd
ffffffffffffffffffff
gggggggggggggggggggggggg
[2018-09-11 03:11:16,350] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2018-09-11 03:12:16,351] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2018-09-11 03:13:16,352] ERROR Error when sending message to topic test with key: null, value: 1 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2018-09-11 03:14:16,352] ERROR Error when sending message to topic test with key: null, value: 3 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2018-09-11 03:15:16,353] ERROR Error when sending message to topic test with key: null, value: 20 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
[2018-09-11 03:16:16,353] ERROR Error when sending message to topic test with key: null, value: 24 bytes with error: (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
org.apache.kafka.common.errors.TimeoutException: Failed to update metadata after 60000 ms.
09-11-2018
05:02 AM
[2018-09-11 00:06:58,404] INFO Successfully logged in. (org.apache.kafka.common.security.authenticator.AbstractLogin)
[2018-09-11 00:06:58,409] INFO [Principal=kafka/kafka1.example.com@EXAMPLE.COM]: TGT refresh thread started. (org.apache.kafka.common.security.kerberos.KerberosLogin)
[2018-09-11 00:06:58,409] INFO [Principal=kafka/kafka1.example.com@EXAMPLE.COM]: TGT valid starting at: Tue Sep 11 00:06:58 EDT 2018 (org.apache.kafka.common.security.kerberos.KerberosLogin)
[2018-09-11 00:06:58,409] INFO [Principal=kafka/kafka1.example.com@EXAMPLE.COM]: TGT expires: Tue Sep 11 10:06:58 EDT 2018 (org.apache.kafka.common.security.kerberos.KerberosLogin)
[2018-09-11 00:06:58,410] INFO [Principal=kafka/kafka1.example.com@EXAMPLE.COM]: TGT refresh sleeping until: Tue Sep 11 08:23:38 EDT 2018 (org.apache.kafka.common.security.kerberos.KerberosLogin)
[2018-09-11 00:06:58,411] FATAL [Kafka Server 1], Fatal error during KafkaServer startup. Prepare to shutdown (kafka.server.KafkaServer)
org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: SSL trust store is specified, but trust store password is not specified.
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:94)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:93)
at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:63)
at kafka.network.Processor.<init>(SocketServer.scala:422)
at kafka.network.SocketServer.newProcessor(SocketServer.scala:155)
at kafka.network.SocketServer.$anonfun$startup$2(SocketServer.scala:96)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:156)
at kafka.network.SocketServer.$anonfun$startup$1(SocketServer.scala:95)
at kafka.network.SocketServer.$anonfun$startup$1$adapted(SocketServer.scala:90)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.network.SocketServer.startup(SocketServer.scala:90)
at kafka.server.KafkaServer.startup(KafkaServer.scala:215)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
Caused by: org.apache.kafka.common.KafkaException: SSL trust store is specified, but trust store password is not specified.
at org.apache.kafka.common.security.ssl.SslFactory.createTruststore(SslFactory.java:195)
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:115)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:91)
... 16 more
[2018-09-11 00:06:58,416] INFO [Kafka Server 1], shutting down (kafka.server.KafkaServer)
[2018-09-11 00:06:58,421] INFO [Socket Server on Broker 1], Shutting down (kafka.network.SocketServer)
[2018-09-11 00:06:58,426] WARN (kafka.utils.CoreUtils$)
java.lang.NullPointerException
at kafka.network.SocketServer.$anonfun$shutdown$3(SocketServer.scala:129)
at kafka.network.SocketServer.$anonfun$shutdown$3$adapted(SocketServer.scala:129)
at scala.collection.IndexedSeqOptimized.foreach(IndexedSeqOptimized.scala:32)
at scala.collection.IndexedSeqOptimized.foreach$(IndexedSeqOptimized.scala:29)
at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:193)
at kafka.network.SocketServer.shutdown(SocketServer.scala:129)
at kafka.server.KafkaServer.$anonfun$shutdown$3(KafkaServer.scala:582)
at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:78)
at kafka.utils.Logging.swallowWarn(Logging.scala:94)
at kafka.utils.Logging.swallowWarn$(Logging.scala:93)
at kafka.utils.CoreUtils$.swallowWarn(CoreUtils.scala:48)
at kafka.utils.Logging.swallow(Logging.scala:96)
at kafka.utils.Logging.swallow$(Logging.scala:96)
at kafka.utils.CoreUtils$.swallow(CoreUtils.scala:48)
at kafka.server.KafkaServer.shutdown(KafkaServer.scala:582)
at kafka.server.KafkaServer.startup(KafkaServer.scala:289)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
[2018-09-11 00:06:58,433] INFO Shutting down. (kafka.log.LogManager)
[2018-09-11 00:06:58,448] INFO Shutdown complete. (kafka.log.LogManager)
[2018-09-11 00:06:58,448] INFO Terminate ZkClient event thread. (org.I0Itec.zkclient.ZkEventThread)
[2018-09-11 00:06:58,555] INFO Session: 0x165b374ac140029 closed (org.apache.zookeeper.ZooKeeper)
[2018-09-11 00:06:58,555] INFO EventThread shut down for session: 0x165b374ac140029 (org.apache.zookeeper.ClientCnxn)
[2018-09-11 00:06:58,562] INFO [Kafka Server 1], shut down completed (kafka.server.KafkaServer)
[2018-09-11 00:06:58,564] FATAL Fatal error during KafkaServerStartable startup. Prepare to shutdown (kafka.server.KafkaServerStartable)
org.apache.kafka.common.KafkaException: org.apache.kafka.common.KafkaException: SSL trust store is specified, but trust store password is not specified.
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:94)
at org.apache.kafka.common.network.ChannelBuilders.create(ChannelBuilders.java:93)
at org.apache.kafka.common.network.ChannelBuilders.serverChannelBuilder(ChannelBuilders.java:63)
at kafka.network.Processor.<init>(SocketServer.scala:422)
at kafka.network.SocketServer.newProcessor(SocketServer.scala:155)
at kafka.network.SocketServer.$anonfun$startup$2(SocketServer.scala:96)
at scala.collection.immutable.Range.foreach$mVc$sp(Range.scala:156)
at kafka.network.SocketServer.$anonfun$startup$1(SocketServer.scala:95)
at kafka.network.SocketServer.$anonfun$startup$1$adapted(SocketServer.scala:90)
at scala.collection.mutable.ResizableArray.foreach(ResizableArray.scala:59)
at scala.collection.mutable.ResizableArray.foreach$(ResizableArray.scala:52)
at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:48)
at kafka.network.SocketServer.startup(SocketServer.scala:90)
at kafka.server.KafkaServer.startup(KafkaServer.scala:215)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
Caused by: org.apache.kafka.common.KafkaException: SSL trust store is specified, but trust store password is not specified.
at org.apache.kafka.common.security.ssl.SslFactory.createTruststore(SslFactory.java:195)
at org.apache.kafka.common.security.ssl.SslFactory.configure(SslFactory.java:115)
at org.apache.kafka.common.network.SaslChannelBuilder.configure(SaslChannelBuilder.java:91)
... 16 more
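The root cause is spelled out in the final `Caused by` line: the broker's listener configuration names a trust store but no `ssl.truststore.password`. A minimal sketch of the relevant `server.properties` entries (file paths and passwords below are placeholders, not values from this thread):

```properties
# server.properties — SSL settings for an SSL/SASL_SSL listener.
# A truststore password must be supplied whenever a truststore location is set.
ssl.truststore.location=/etc/kafka/ssl/kafka.server.truststore.jks
ssl.truststore.password=changeit
ssl.keystore.location=/etc/kafka/ssl/kafka.server.keystore.jks
ssl.keystore.password=changeit
ssl.key.password=changeit
```

After adding the password, restart the broker so the listener's `SslFactory` can open the trust store.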
09-07-2018
01:13 PM
This issue has been solved.
09-07-2018
01:13 PM
I have configured ZooKeeper and the Kafka broker for Kerberos authentication. Both start, but the Kafka broker logs the error below:
[2018-09-07 04:40:33,105] ERROR Invalid ACL (kafka.utils.ZKCheckedEphemeral)
[2018-09-07 04:40:33,106] INFO Result of znode creation is: INVALIDACL (kafka.utils.ZKCheckedEphemeral)
[2018-09-07 04:40:33,107] ERROR Error while electing or becoming leader on broker 1 (kafka.server.ZookeeperLeaderElector)
org.I0Itec.zkclient.exception.ZkException: org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL
at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:68)
at kafka.utils.ZKCheckedEphemeral.create(ZkUtils.scala:1124)
at kafka.server.ZookeeperLeaderElector.elect(ZookeeperLeaderElector.scala:82)
at kafka.server.ZookeeperLeaderElector.$anonfun$startup$1(ZookeeperLeaderElector.scala:51)
at scala.runtime.java8.JFunction0$mcZ$sp.apply(JFunction0$mcZ$sp.java:12)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
at kafka.server.ZookeeperLeaderElector.startup(ZookeeperLeaderElector.scala:49)
at kafka.controller.KafkaController.$anonfun$startup$1(KafkaController.scala:681)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
at kafka.utils.CoreUtils$.inLock(CoreUtils.scala:213)
at kafka.controller.KafkaController.startup(KafkaController.scala:677)
at kafka.server.KafkaServer.startup(KafkaServer.scala:224)
at kafka.server.KafkaServerStartable.startup(KafkaServerStartable.scala:39)
at kafka.Kafka$.main(Kafka.scala:67)
at kafka.Kafka.main(Kafka.scala)
Caused by: org.apache.zookeeper.KeeperException$InvalidACLException: KeeperErrorCode = InvalidACL
at org.apache.zookeeper.KeeperException.create(KeeperException.java:121)
... 14 more
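An INVALIDACL result when creating the controller znode typically means the broker's ZooKeeper session is not SASL-authenticated (or is authenticated as a different principal) while the target znodes carry SASL ACLs. A sketch of the broker-side JAAS `Client` section that authenticates the Kafka broker to ZooKeeper (the principal and keytab path are placeholders, not values confirmed in this thread):

```
// kafka_server_jaas.conf — the Client section covers the broker's ZooKeeper connection
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka.service.keytab"
    principal="kafka/broker-host@EXAMPLE.COM";
};
```

The file is passed to the broker JVM with `-Djava.security.auth.login.config=/path/to/kafka_server_jaas.conf`; the principal must match the identity that created (or is allowed by the ACLs on) the existing znodes.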
09-06-2018
12:42 PM
@Geoffrey Shelton Okot thanks for the response. Kafka has started, but its logs contain errors. PFA kafka-zookeeper-kerberos-invalidacl.txt,