Created 06-03-2024 11:42 PM
Hello,
When enabling Kerberos via Ambari, I get the following popup window during the client test step after client installation.
The relevant ambari-server log entries are listed below:
2024-06-04 06:27:43,380 WARN [agent-report-processor-2] ActionManager:162 - The task 76 is not in progress, ignoring update
2024-06-04 06:27:43,861 INFO [ambari-client-thread-6248] AmbariManagementControllerImpl:4086 - Received action execution request, clusterName=hadoop, request=isCommand :true, action :null, command :KERBEROS_SERVICE_CHECK, inputs :{HAS_RESOURCE_FILTERS=true}, resourceFilters: [RequestResourceFilter{serviceName='KERBEROS', componentName='null', hostNames=[]}], exclusive: false, clusterName :hadoop
2024-06-04 06:27:44,149 WARN [ambari-client-thread-6248] KDCKerberosOperationHandler:329 - Failed to kinit as the KDC administrator user, admin/admin@HADOOP.COM:
ExitCode: 1
STDOUT:
STDERR: kinit: Server not found in Kerberos database while getting initial credentials
2024-06-04 06:27:44,151 ERROR [ambari-client-thread-6248] KerberosHelperImpl:2507 - Cannot validate credentials: org.apache.ambari.server.serveraction.kerberos.KerberosAdminAuthenticationException: Invalid KDC administrator credentials.
The KDC administrator credentials must be set as a persisted or temporary credential resource. This may be done by issuing a POST (or PUT for updating) to the /api/v1/clusters/:clusterName/credentials/kdc.admin.credential API entry point with the following payload:
{
  "Credential" : {
    "principal" : "(PRINCIPAL)",
    "key" : "(PASSWORD)",
    "type" : "(persisted|temporary)"
  }
}
2024-06-04 06:27:44,152 ERROR [ambari-client-thread-6248] CreateHandler:80 - Bad request received: Invalid KDC administrator credentials.
The KDC administrator credentials must be set as a persisted or temporary credential resource. This may be done by issuing a POST (or PUT for updating) to the /api/v1/clusters/:clusterName/credentials/kdc.admin.credential API entry point with the following payload:
{
  "Credential" : {
    "principal" : "(PRINCIPAL)",
    "key" : "(PASSWORD)",
    "type" : "(persisted|temporary)"
  }
}
2024-06-04 06:27:44,578 WARN [agent-report-processor-1] ActionManager:162 - The task 75 is not in progress, ignoring update
Can anyone help me, please?
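The error message in the log actually describes the fix: the KDC admin credential must be stored via the Ambari REST API before the service check can succeed. Below is a minimal sketch of that call, assuming the Ambari login admin:admin, the cluster name hadoop from the log, and master1 as the Ambari host (all assumptions; adjust to your environment and replace the password placeholder):

```shell
# Sketch: store the KDC administrator credential as the error message suggests.
# Principal and password are placeholders taken from this thread.
PAYLOAD='{"Credential": {"principal": "admin/admin@HADOOP.COM", "key": "PASSWORD", "type": "persisted"}}'

# Requires a reachable Ambari server; "|| true" keeps the sketch harmless
# when nothing is listening on the assumed host/port.
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  --connect-timeout 5 -d "$PAYLOAD" \
  "http://master1.hadoop.com:8080/api/v1/clusters/hadoop/credentials/kdc.admin.credential" || true
```

With a persisted credential stored, Ambari should no longer fail credential validation during the Kerberos service check.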
Created 06-04-2024 11:57 PM
Hi @rizalt , have you tried logging in with "kinit admin/admin@HADOOP.COM" from one of your cluster nodes or the Ambari server, to see whether krb5.conf is fine and the KDC can find this principal with the given password?
Created 06-05-2024 12:23 AM
@Majed in my cluster (master1, slave1 & slave2) kinit logs in fine without errors, as listed below:
root@master1:~# kinit admin/admin@HADOOP.COM
Password for admin/admin@HADOOP.COM:
root@master1:~# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: admin/admin@HADOOP.COM
Valid starting Expires Service principal
06/05/2024 07:17:16 06/05/2024 17:17:16 krbtgt/HADOOP.COM@HADOOP.COM
renew until 06/05/2024 07:17:16
root@master1:~#
root@slave1:~# kinit admin/admin@HADOOP.COM
Password for admin/admin@HADOOP.COM:
root@slave1:~# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: admin/admin@HADOOP.COM
Valid starting Expires Service principal
06/05/2024 07:19:26 06/05/2024 17:19:26 krbtgt/HADOOP.COM@HADOOP.COM
renew until 06/05/2024 07:19:26
root@slave1:~#
root@slave2:~# kinit admin/admin@HADOOP.COM
Password for admin/admin@HADOOP.COM:
root@slave2:~# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: admin/admin@HADOOP.COM
Valid starting Expires Service principal
06/05/2024 07:20:19 06/05/2024 17:20:19 krbtgt/HADOOP.COM@HADOOP.COM
renew until 06/05/2024 07:20:19
root@slave2:~#
Created 06-05-2024 12:51 AM
There are a couple of things to validate.
Step 1: Check the pre-requisites.
Step 2: Check your /etc/hosts file and ensure your KDC host carries the hadoop.com domain, so that hostnames match your KDC credentials in the HADOOP.COM realm.
Step 3: Once that matches, edit the Kerberos configuration file (/etc/krb5.conf) on all nodes to point to your KDC. You can scramble the sensitive info and share it:
[libdefaults]
default_realm = HADOOP.COM
dns_lookup_realm = false
dns_lookup_kdc = false
[realms]
HADOOP.COM = {
kdc = kdc.hadoop.com
admin_server = admin.hadoop.com
}
[domain_realm]
.hadoop.com = HADOOP.COM
hadoop.com = HADOOP.COM
Step 4: Locate your kadm5.acl file and ensure it grants your admin principal the necessary privileges.
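For reference, a typical kadm5.acl for this realm contains a single wildcard rule giving every */admin principal full privileges (this exact rule is an assumption; adjust it to your own admin principals):

```
*/admin@HADOOP.COM    *
```

After editing kadm5.acl, the admin server must be restarted for the change to take effect, which the next step covers.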
Step 5: Restart the KDC and admin servers as root or with sudo
# systemctl restart krb5kdc
# systemctl restart kadmin
Step 6: Check Kerberos Ticket: Ensure that the Kerberos ticket is obtained correctly.
If your setup is correct you will see an output like below
Ticket cache: FILE:/tmp/krb5cc_1000
Default principal: hdfs/hostname@HADOOP.COM
Valid starting Expires Service principal
06/05/2024 09:50:21 06/06/2024 09:50:21 krbtgt/HADOOP.COM@HADOOP.COM
renew until 06/05/2024 09:50:21
06/05/2024 09:50:22 06/06/2024 09:50:21 HTTP/hostname@HADOOP.COM
renew until 06/05/2024 09:50:21
Hope that helps
Created on 06-05-2024 01:06 AM - edited 06-05-2024 01:07 AM
@Shelton
/etc/hosts
127.0.0.1 localhost
192.168.122.10 master1.hadoop.com
192.168.122.11 slave1.hadoop.com
192.168.122.12 slave2.hadoop.com
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
/etc/krb5.conf
[libdefaults]
renew_lifetime = 7d
forwardable = true
default_realm = HADOOP.COM
ticket_lifetime = 24h
dns_lookup_realm = false
dns_lookup_kdc = false
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
[logging]
default = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
kdc = FILE:/var/log/krb5kdc.log
[realms]
HADOOP.COM = {
admin_server = master1.hadoop.com
kdc = master1.hadoop.com
}
kadm5.acl
Even creating a ticket shows an error:
root@master1:~# systemctl restart krb5-kdc
root@master1:~# systemctl restart krb5-admin-server
root@master1:~# kinit -kt /etc/security/keytabs/hdfs.keytab hdfs/master1.hadoop.com@HADOOP.COM
kinit: Client 'hdfs/master1.hadoop.com@HADOOP.COM' not found in Kerberos database while getting initial credentials
Created 06-05-2024 04:33 AM
Hi @rizalt , you may want to verify whether the principal exists in the KDC database:
kadmin: listprincs hdfs*
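The same check can be scripted non-interactively; a guarded sketch (kadmin will prompt for the admin/admin password, and the guard keeps this a no-op on hosts without the Kerberos client tools):

```shell
# Sketch: list all hdfs principals known to the KDC, as suggested above.
KADMIN_QUERY='listprincs hdfs*'
if command -v kadmin >/dev/null 2>&1; then
  # stdin from /dev/null so the password prompt cannot hang a script;
  # "|| true" so an authentication failure does not abort the caller
  kadmin -p admin/admin -q "$KADMIN_QUERY" </dev/null || true
else
  echo "kadmin not installed on this host"
fi
```

On the KDC host itself, kadmin.local can run the same query without a password.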
Created on 06-05-2024 05:38 PM - edited 06-05-2024 05:39 PM
@Majeti , my issue is that when Ambari tests the Kerberos client, it always shows a dialog box like this.
My previous settings were like this.
I have the principal admin/admin@HADOOP.COM and the password is correct:
root@master1:~# kadmin -p admin/admin
Authenticating as principal admin/admin with password.
Password for admin/admin@HADOOP.COM:
kadmin: listprincs
HTTP/master1.hadoop.com@HADOOP.COM
K/M@HADOOP.COM
admin/admin@HADOOP.COM
admin/master1.hadoop.com@HADOOP.COM
hdfs/master1.hadoop.com@HADOOP.COM
kadmin/admin@HADOOP.COM
kadmin/changepw@HADOOP.COM
krbtgt/HADOOP.COM@HADOOP.COM
Any suggestions for this issue?
Created 06-06-2024 12:20 AM
Hi @rizalt , I am not sure if you are hitting this known issue: https://docs.cloudera.com/runtime/7.1.2/release-notes/topics/rt-known-issues-ambari.html . You can try the workaround mentioned there for now.
Created 06-06-2024 12:12 PM
@rizalt
Did you add the entry to krb5.conf that I suggested?
[domain_realm]
.hadoop.com = HADOOP.COM
hadoop.com = HADOOP.COM
In the Kerberos setup UI you should also include:
HADOOP.COM, .HADOOP.COM
Check a solution I offered
Error while enabling Kerberos on ambari
Hope that helps
Created 06-06-2024 05:41 AM
@rizalt
Make a backup of your krb5.conf and modify it like below
# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/
[logging]
default = FILE:/var/log/krb5libs.log
kdc = FILE:/var/log/krb5kdc.log
admin_server = FILE:/var/log/kadmind.log
[libdefaults]
dns_lookup_realm = false
dns_lookup_kdc = false
ticket_lifetime = 24h
renew_lifetime = 7d
forwardable = true
default_ccache_name = /tmp/krb5cc_%{uid}
#default_tgs_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
#default_tkt_enctypes = aes des3-cbc-sha1 rc4 des-cbc-md5
[realms]
HADOOP.COM = {
admin_server = master1.hadoop.com
kdc = master1.hadoop.com
}
[domain_realm]
.hadoop.com = HADOOP.COM
hadoop.com = HADOOP.COM
Then restart the KDC and retry
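One caveat on that final restart: the systemd unit names differ by distribution, and both variants appear in this thread (krb5kdc/kadmin in the earlier suggestion, krb5-kdc/krb5-admin-server on the poster's Debian-style hosts). A guarded sketch that tries both:

```shell
# Restart the KDC and admin server; try RHEL-style unit names first, then
# Debian/Ubuntu-style. Guarded so the sketch is harmless on other machines.
RHEL_UNITS='krb5kdc kadmin'
DEBIAN_UNITS='krb5-kdc krb5-admin-server'
if command -v systemctl >/dev/null 2>&1; then
  systemctl restart $RHEL_UNITS 2>/dev/null || \
    systemctl restart $DEBIAN_UNITS 2>/dev/null || true
fi
```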