Created on 02-17-2015 01:29 AM - edited 09-16-2022 02:21 AM
Hello,
I've been tinkering all weekend with Kerberos and am still stuck on the following error during ZooKeeper startup:
    at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:87)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:135)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:79)
2015-02-17 03:17:26,942 INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig: Reading configuration from: /var/run/cloudera-scm-agent/process/2275-zookeeper-server/zoo.cfg
2015-02-17 03:17:26,952 INFO org.apache.zookeeper.server.quorum.QuorumPeerConfig: Defaulting to majority quorums
2015-02-17 03:17:26,955 INFO org.apache.zookeeper.server.DatadirCleanupManager: autopurge.snapRetainCount set to 5
2015-02-17 03:17:26,955 INFO org.apache.zookeeper.server.DatadirCleanupManager: autopurge.purgeInterval set to 24
2015-02-17 03:17:26,957 INFO org.apache.zookeeper.server.DatadirCleanupManager: Purge task started.
2015-02-17 03:17:26,965 INFO org.apache.zookeeper.server.quorum.QuorumPeerMain: Starting quorum peer
2015-02-17 03:17:26,969 INFO org.apache.zookeeper.server.DatadirCleanupManager: Purge task completed.
2015-02-17 03:17:27,037 ERROR org.apache.zookeeper.server.quorum.QuorumPeerMain: Unexpected exception, exiting abnormally
java.io.IOException: Could not configure server because SASL configuration did not allow the ZooKeeper server to authenticate itself properly: javax.security.auth.login.LoginException: mgmt4-ib.urika-xa.com
    at org.apache.zookeeper.server.ServerCnxnFactory.configureSaslLogin(ServerCnxnFactory.java:207)
    at org.apache.zookeeper.server.NIOServerCnxnFactory.configure(NIOServerCnxnFactory.java:87)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.runFromConfig(QuorumPeerMain.java:135)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
    at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:79)
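A quick sanity check here (a sketch; the 2275-zookeeper-server process directory comes from the log above, and the zookeeper.keytab file name is Cloudera Manager's usual one, so adjust both as needed) is to confirm that the keytab the server is using actually contains a principal for this host and that it can authenticate:

# List the entries (principals and kvno) in the keytab the server is using:
klist -kt /var/run/cloudera-scm-agent/process/2275-zookeeper-server/zookeeper.keytab
# Then try to authenticate as this host's ZooKeeper principal from that keytab
# (short hostname assumed, matching the principal naming further down):
kinit -kt /var/run/cloudera-scm-agent/process/2275-zookeeper-server/zookeeper.keytab zookeeper/mgmt4-ib@URIKA-XA.COM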
Everything in the wizard seems to work until it starts the cluster. Here is the principal list from kadmin (excerpt):
kadmin
yarn/urika-xa42@URIKA-XA.COM
yarn/urika-xa43@URIKA-XA.COM
yarn/urika-xa44@URIKA-XA.COM
yarn/urika-xa45@URIKA-XA.COM
yarn/urika-xa46@URIKA-XA.COM
yarn/urika-xa47@URIKA-XA.COM
yarn/urika-xa48@URIKA-XA.COM
yarn/urika-xa4@URIKA-XA.COM
yarn/urika-xa5@URIKA-XA.COM
yarn/urika-xa6@URIKA-XA.COM
yarn/urika-xa7@URIKA-XA.COM
yarn/urika-xa8@URIKA-XA.COM
yarn/urika-xa9@URIKA-XA.COM
zookeeper/mgmt1-ib@URIKA-XA.COM
zookeeper/mgmt2-ib@URIKA-XA.COM
zookeeper/mgmt3-ib@URIKA-XA.COM
kadmin:
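(Side note: the listing can be narrowed to just the ZooKeeper principals with a pattern, which makes a missing host easy to spot. The excerpt above shows zookeeper principals for mgmt1-ib through mgmt3-ib only, while the LoginException names mgmt4-ib, so it is worth confirming whether a zookeeper/mgmt4-ib principal exists at all:)

kadmin: listprincs zookeeper/*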
My /etc/krb5.conf:

[libdefaults]
default_realm = URIKA-XA.COM
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 86400
renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = rc4-hmac
default_tkt_enctypes = rc4-hmac
permitted_enctypes = rc4-hmac
udp_preference_limit = 1
[realms]
URIKA-XA.COM = {
kdc = mgmt4-ib
admin_server = mgmt4-ib
}
[root@mgmt4-ib cloudera-scm-server]#
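One way to confirm this config is being picked up, and that the KDC on mgmt4-ib answers with the restricted enctype, is a keytab-based kinit followed by klist -e, which prints the enctypes of the cached tickets (the cmf.keytab path is the one used in the kadmin test just below):

kinit -kt /etc/cloudera-scm-server/cmf.keytab cloudera-scm/admin@URIKA-XA.COM
klist -e   # the Etype fields should show arcfour-hmac, matching permitted_enctypes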
Tested kadmin with the Cloudera Manager keytab:
kadmin -k -t /etc/cloudera-scm-server/cmf.keytab -p cloudera-scm/admin@URIKA-XA.COM -r URIKA-XA.COM
Authenticating as principal cloudera-scm/admin@URIKA-XA.COM with keytab /etc/cloudera-scm-server/cmf.keytab.
kadmin:
default security realm: URIKA-XA.COM
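While in that kadmin session, getprinc is handy for inspecting a service principal's key version number (kvno) and key enctypes, e.g. for one of the ZooKeeper hosts listed earlier:

kadmin: getprinc zookeeper/mgmt1-ib@URIKA-XA.COM

The kvno and enctypes it reports should match what klist -ekt shows for the corresponding keytab on the node.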
[root@mgmt4-ib cloudera-scm-server]# cat /var/kerberos/krb5kdc/kdc.conf
[realms]
URIKA-XA.COM = {
#master_key_type = aes256-cts
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
max_life = 24h 0m 0s
max_renewable_life = 7d 0h 0m 0s
}
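Note that supported_enctypes here includes arcfour-hmac:normal, which is the single enctype the krb5.conf above permits (rc4-hmac and arcfour-hmac are the same cipher), so every service keytab needs at least one arcfour-hmac key. That can be checked per keytab with klist -ekt (path illustrative, taken from the startup log):

klist -ekt /var/run/cloudera-scm-agent/process/2275-zookeeper-server/zookeeper.keytab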
Created 02-16-2017 08:47 AM
You most likely have a mismatch between the Kerberos principal defined on the KDC and the keytab on your cluster nodes. Give this a try and it should fix you right up...
- Stop all services, including the Cloudera Management Service.
- Go to: Administration > Security > Kerberos Credentials
- Select all Principals
- Click "Regenerate Selected"
- Restart all services and hope for 'green lights' 😉
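To verify the mismatch is actually gone afterwards, one option (a sketch; the process directory number is a placeholder, and the principal is one from the original post) is to compare the kvno stored in the keytab against the kvno the KDC issues:

# On the affected node, note the kvno recorded in the regenerated keytab:
klist -kt /var/run/cloudera-scm-agent/process/NNNN-zookeeper-server/zookeeper.keytab
# With a valid TGT (kinit first), request a service ticket; the kvno
# reported here must match the keytab's:
kvno zookeeper/mgmt1-ib@URIKA-XA.COM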
Created 04-27-2022 06:13 AM
Hi @zero ,
Did you try commenting out the renew_lifetime parameter in /etc/krb5.conf? I think I had the same issue and this resolved my error. Mine was a CDP 7.1.4 cluster, but I have faced similar issues in HDP as well.
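For reference, applied to the [libdefaults] posted above, that would look like this (commenting out renew_lifetime reportedly works around a Kerberos config-parsing quirk in some Java versions):

[libdefaults]
default_realm = URIKA-XA.COM
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 86400
# renew_lifetime = 604800
forwardable = true
default_tgs_enctypes = rc4-hmac
default_tkt_enctypes = rc4-hmac
permitted_enctypes = rc4-hmac
udp_preference_limit = 1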
Thanks,
Vivek