Member since 09-29-2015 · 362 Posts · 242 Kudos Received · 63 Solutions
06-01-2017
09:12 AM
I assumed that this was in the documentation, but a quick search revealed that it is not. After upgrading either Ambari or HDP (or both), you should regenerate the missing keytab files and restart the services:

1. Log into Ambari using an Ambari Administrator account.
2. Go to the Kerberos Administrator page (Admin -> Kerberos).
3. Click on the Regenerate Keytabs button.
4. On the first page of the dialog that appears, check "Only regenerate keytabs for missing hosts and components".
5. Continue to the next page.
6. Check "Automatically restart components after keytab regeneration".
7. Complete the dialog.

As of Ambari 2.5.x and below, Ambari does not have a way to automatically create new Kerberos identities or keytab files during either the Ambari or stack upgrade processes, so the user is expected to do this manually using the steps above.
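For scripted environments, the same operation can be triggered through Ambari's REST API with the regenerate_keytabs directive (which accepts "missing" or "all"). This is a sketch only: the host, cluster name, and credentials below are placeholders, and the curl command is printed for review rather than executed.

```shell
#!/bin/sh
# Sketch: AMBARI_HOST, CLUSTER, and the admin credentials are placeholders.
AMBARI_HOST="ambari.example.com"
CLUSTER="c1"
# "regenerate_keytabs=missing" mirrors the UI checkbox "Only regenerate
# keytabs for missing hosts and components"; use "all" to regenerate everything.
URL="http://${AMBARI_HOST}:8080/api/v1/clusters/${CLUSTER}?regenerate_keytabs=missing"
# Print the command so it can be reviewed; remove the echo to actually run it.
echo curl -u admin -H "X-Requested-By: ambari" -X PUT \
  -d '{"Cluster":{"security_type":"KERBEROS"}}' "$URL"
```

Note that regenerating keytabs this way still requires KDC administrator credentials to be available to Ambari (stored or supplied in session attributes).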
05-22-2017
09:20 AM
I think that I was able to get it running. I started the server and did not see the error message you are getting. One thing I noticed is that the server may be looking for a file literally named "livy.server.kerberos.keytab" (the property name rather than its value), or else it cannot find that property at all. Is it possible a different configuration file is being picked up?
05-19-2017
09:31 PM
@David Tam
I think I found the issue after walking through the steps in the doc you provided - https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.0/bk_command-line-installation/content/configure_livy.html. The documentation appears to be incorrect: there is an inconsistency in the Livy Kerberos identity. Step 8a contains

kadmin.local -q "addprinc -randkey livy@EXAMPLE.COM"
kadmin.local -q "xst -k /etc/security/keytabs/livy.headless.keytab livy@EXAMPLE.COM"

Then step 8c contains

livy.server.launch.kerberos.keytab /etc/security/keytabs/livy.headless.keytab
livy.server.launch.kerberos.principal livy/_HOST@EXAMPLE.COM

The disconnect is the principal name: livy@EXAMPLE.COM vs livy/_HOST@EXAMPLE.COM. One or the other needs to be used. Technically, Livy Server is not an interactive user, so a service principal should be created for it, and I would choose the livy/_HOST@EXAMPLE.COM representation of the Kerberos identity. However, it might be easier to choose the user identity and go with livy@EXAMPLE.COM. In any case, the data in the configuration needs to be consistent.

If you choose to go with the user Kerberos identity, then the following values need to be set in the livy.conf file:

livy.server.launch.kerberos.keytab /etc/security/keytabs/livy.headless.keytab
livy.server.launch.kerberos.principal livy@EXAMPLE.COM

I assume that the user principal has already been created in the KDC and the keytab file was created using it, so after restarting the Livy Server, all should work.

If you choose to go the service principal route, then the following values need to be set in the livy.conf file:

livy.server.launch.kerberos.keytab /etc/security/keytabs/livy.service.keytab
livy.server.launch.kerberos.principal livy/_HOST@EXAMPLE.COM

However, I am not sure whether Livy automatically translates _HOST to the relevant host's name, so it might be safer (but less portable) to explicitly put the host name there. You can get the correct hostname by issuing the following command:

hostname -f

For example:

[root@c6403 ~]# hostname -f
c6403.ambari.apache.org

Using this value, manually replace _HOST:

livy.server.launch.kerberos.keytab /etc/security/keytabs/livy.service.keytab
livy.server.launch.kerberos.principal livy/c6403.ambari.apache.org@EXAMPLE.COM

Then you need to create the relevant Kerberos principal and keytab file. If possible, do this from the Livy server host using kadmin, rather than kadmin.local:

kadmin -p <kdc admin principal> -q "addprinc -randkey livy/`hostname -f`@EXAMPLE.COM"
kadmin -p <kdc admin principal> -q "xst -k /etc/security/keytabs/livy.service.keytab livy/`hostname -f`@EXAMPLE.COM"

For example:

[root@c6403 ~]# kadmin -p admin/admin -q "addprinc -randkey livy/`hostname -f`@EXAMPLE.COM"
Authenticating as principal admin/admin with password.
Password for admin/admin@EXAMPLE.COM:
WARNING: no policy specified for livy/c6403.ambari.apache.org@EXAMPLE.COM; defaulting to no policy
Principal "livy/c6403.ambari.apache.org@EXAMPLE.COM" created.
[root@c6403 ~]# kadmin -p admin/admin -q "xst -k /etc/security/keytabs/livy.headless.keytab livy/`hostname -f`@EXAMPLE.COM"
Authenticating as principal admin/admin with password.
Password for admin/admin@EXAMPLE.COM:
Entry for principal livy/c6403.ambari.apache.org@EXAMPLE.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:/etc/security/keytabs/livy.headless.keytab.
Entry for principal livy/c6403.ambari.apache.org@EXAMPLE.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:/etc/security/keytabs/livy.headless.keytab.
Entry for principal livy/c6403.ambari.apache.org@EXAMPLE.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:/etc/security/keytabs/livy.headless.keytab.
Entry for principal livy/c6403.ambari.apache.org@EXAMPLE.COM with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:/etc/security/keytabs/livy.headless.keytab.
Entry for principal livy/c6403.ambari.apache.org@EXAMPLE.COM with kvno 2, encryption type des-hmac-sha1 added to keytab WRFILE:/etc/security/keytabs/livy.headless.keytab.
Entry for principal livy/c6403.ambari.apache.org@EXAMPLE.COM with kvno 2, encryption type des-cbc-md5 added to keytab WRFILE:/etc/security/keytabs/livy.headless.keytab.

If you need to use kadmin.local on the KDC server host, you can do the following, then manually copy the resulting keytab file into place on the Livy server host:

kadmin.local -q "addprinc -randkey livy/`hostname -f`@EXAMPLE.COM"
kadmin.local -q "xst -k /etc/security/keytabs/livy.service.keytab livy/`hostname -f`@EXAMPLE.COM"

Meanwhile I will see if the documentation can be fixed.
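To illustrate the manual _HOST replacement, here is a small shell sketch. The principal and realm are just the example values from this thread; the sketch expands _HOST to the local FQDN the same way you would by hand before pasting the value into livy.conf.

```shell
#!/bin/sh
# Expand the _HOST placeholder in a principal to this machine's FQDN.
# "livy/_HOST@EXAMPLE.COM" is the example value from the thread, not a real identity.
principal="livy/_HOST@EXAMPLE.COM"
host="$(hostname -f)"
resolved=$(printf '%s' "$principal" | sed "s/_HOST/${host}/")
echo "$resolved"
```

On the example host above, this would print livy/c6403.ambari.apache.org@EXAMPLE.COM.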
05-19-2017
04:00 PM
If you klist that keytab file, does it show entries for livy/<current hostname>@LBG.COM?

klist -kte /etc/security/keytabs/livy.headless.keytab

Also, if you look at the running Livy server process, does the relevant user or group have read access to /etc/security/keytabs/livy.headless.keytab?
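The existence/readability part of that check can be scripted. This is just a sketch: check_keytab is an illustrative helper, and the keytab path is the one from this thread. Run it as the user that launches the Livy server.

```shell
#!/bin/sh
# Illustrative helper: report whether a keytab file exists and is readable
# by the current user.
check_keytab() {
  f="$1"
  if [ ! -e "$f" ]; then
    echo "missing: $f"
  elif [ -r "$f" ]; then
    echo "readable: $f"
  else
    echo "not readable: $f"
  fi
}
check_keytab /etc/security/keytabs/livy.headless.keytab
```

A "not readable" result usually means the keytab's owner, group, or mode needs adjusting for the Livy service user.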
05-19-2017
03:47 PM
@David Tam Can you verify that /etc/security/keytabs/livy.headless.keytab exists on the host where Livy is installed? Also make sure that the file is readable by the user running the Livy Server. If the file does not exist, make sure you run through Step 8 in the doc you referred to.
05-11-2017
07:58 PM
When executing the keytool command, if the specified keystore file does not exist, it will be created. However, the parent directory needs to exist. [root@c6401 ~]# ls -l /etc/security/clientKeys
ls: cannot access /etc/security/clientKeys: No such file or directory
[root@c6401 ~]# mkdir -p /etc/security/clientKeys
[root@c6401 ~]# /usr/jdk64/jdk1.8.0_77/bin/keytool -genkey -keystore /etc/security/clientKeys/keystore.jks -alias nwk8
Enter keystore password:
Re-enter new password:
What is your first and last name?
[Unknown]: nwk8.example.com
What is the name of your organizational unit?
[Unknown]:
What is the name of your organization?
[Unknown]:
What is the name of your City or Locality?
[Unknown]:
What is the name of your State or Province?
[Unknown]:
What is the two-letter country code for this unit?
[Unknown]:
Is CN=nwk8.example.com, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct?
[no]: yes
Enter key password for <nwk8>
(RETURN if same as keystore password):
[root@c6401 ~]# ls -l /etc/security/clientKeys
total 4
-rw-r--r-- 1 root root 1311 May 11 19:52 keystore.jks
You can optionally use OpenSSL to generate the keys and certificates, but you may need to import them into a Java keystore for them to be usable by Hadoop. It is unclear to me whether ssl.server.truststore.type can be set to anything other than JKS.
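For the OpenSSL route, a sketch of generating a self-signed key and certificate and bundling them as PKCS12 so that keytool can import them into a JKS keystore. The file names, subject, and password below are placeholder values.

```shell
#!/bin/sh
# Sketch: self-signed key + certificate with OpenSSL, bundled as PKCS12.
# All file names, the subject, and the password are placeholders.
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=nwk8.example.com" \
  -keyout key.pem -out cert.pem
openssl pkcs12 -export -in cert.pem -inkey key.pem \
  -name nwk8 -passout pass:changeit -out keystore.p12
# To convert into a JKS keystore for Hadoop (requires a JDK's keytool):
#   keytool -importkeystore -srckeystore keystore.p12 -srcstoretype PKCS12 \
#     -srcstorepass changeit -destkeystore keystore.jks -deststorepass changeit
```

Recent JDKs can also read PKCS12 keystores directly, which may make the keytool conversion step unnecessary, but I have not verified that Hadoop's SSL settings accept a PKCS12 store.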
05-10-2017
01:26 PM
My only guess is that maxiqtesting1.lti.com or port 88 (UDP or TCP) is not reachable for some reason. Make sure there are no firewalls in the way and that the Ambari server host can resolve that DNS name. If you enable debug logging, you might be able to get more information, since the cause will be reported as well. To turn on the debug log for this, add the following to /etc/ambari-server/conf/log4j.properties:

log4j.logger.org.apache.ambari.server.KdcServerConnectionVerification=DEBUG
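To rule out basic TCP reachability from the Ambari server host, a bash sketch that probes the KDC port (the host name below is the one from your error; note this only tests TCP, not UDP):

```shell
#!/bin/bash
# Probe a TCP port using bash's /dev/tcp pseudo-device, with a 3-second timeout.
# The KDC host below is the one from this thread; adjust as needed.
check_port() {
  if timeout 3 bash -c ">/dev/tcp/$1/$2" 2>/dev/null; then
    echo "$1:$2 reachable"
  else
    echo "$1:$2 NOT reachable"
  fi
}
check_port maxiqtesting1.lti.com 88
```

A "NOT reachable" result covers both DNS resolution failures and blocked or closed ports, so follow up with nslookup/dig to tell the two apart.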
05-10-2017
09:07 AM
@PATHAN SHEBAZ RUSTUM Take a look at the Ambari server log (/var/log/ambari-server/ambari-server.log) and see if there are any interesting messages in there related to this. I assume the KDC is an MIT KDC.
03-27-2017
02:46 PM
@Elvis Zhang I have seen this a few times, but assume it was related to load on the Ambari server host... maybe something related to open files or pipes. Does this issue happen every time you attempt to regenerate keytab files? What version of Ambari are you running? A retry loop was added in Ambari 2.5 to help with issues like this, so when that version comes out, you should upgrade to see if it helps.
03-24-2017
03:26 PM
@Eric Hanson I don't have an official opinion on this. It really depends on the available resources. If the cluster is really large, then it may be beneficial to put the KDC on its own VM; but for a small cluster (<15 hosts), that may be overkill, and the least utilized host may be sufficient for the KDC. That said, the workload could be spread out by placing one or more slave KDCs around the cluster. There is also the option of separating the kadmin and krb5kdc processes onto different hosts - though this is more for security concerns than for performance or resource concerns.

One thing to keep in mind: for Ambari server versions 2.5.0 and below, it appears that the cluster performs an abnormal number of kinits. This is currently being looked into; so far, it is unclear whether this is a bug, expected behavior, or something in between. The effect of this issue on a small cluster is minimal and not noticeable over a short period of time. On a large cluster (say 900 nodes), the Kerberos log files tend to get large quickly. Performance of the KDC on such a cluster, even when the KDC shares a host with Hadoop services, does not appear to be affected; the main issue is merely log file size. However, if an issue is found and fixed, fewer kinits couldn't hurt. 🙂
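Spreading load across slave KDCs is mostly a matter of listing them in krb5.conf on the cluster hosts; MIT clients try the kdc entries in order, failing over to the next one. A sketch with placeholder host names (kadmind stays on the master, so admin_server points there):

```
[realms]
  EXAMPLE.COM = {
    kdc = kdc-master.example.com
    kdc = kdc-slave1.example.com
    kdc = kdc-slave2.example.com
    admin_server = kdc-master.example.com
  }
```

The slaves need the principal database propagated to them (e.g. via kprop/kpropd), since only the master accepts administrative changes.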