Member since: 06-21-2017
Posts: 48
Kudos Received: 1
Solutions: 0
02-27-2019
09:53 AM
Hi all, after setting up a fresh kerberized HDP 3.1 cluster with Hive LLAP, Spark2 and Livy, we're having trouble connecting to Hive's database through Livy. PySpark from the shell works without problems, but something breaks when going through Livy.

1. Livy settings are the Ambari defaults, with the jars and pyfiles for the HWC connector additionally specified, plus spark.sql.hive.hiveserver2.jdbc.url and spark.security.credentials.hiveserver2.enabled=true. These are enough for the pyspark shell to work without problems.
2. The connection is made through the latest HWC connector described here, since apparently it is the only one that works with Hive 3 and Spark2.

Problem:
1. When spark.master is set to yarn client mode (see for example the comment here), the connector appends the principal "hive/_HOST@DOMAIN" and the connection fails with a GSS error -- "Failed to find any Kerberos tgt" (although the ticket is there and Livy has access to HiveServer2).
2. When spark.master is set to yarn cluster mode, ";auth=delegationToken" is appended to the connection string, and the error then says a "PLAIN" connection was made where a Kerberized one was expected.

Notes: I've tried various settings -- ZooKeeper JDBC URLs vs. connecting directly through port 10500, hive.doAs=true vs. false, various principals -- but nothing works.
Note 2: everything works fine when connecting both through beeline (to Hive on port 10500) and through the pyspark shell.
Note 3: HWC connection snippet (from the examples):

from pyspark_llap import HiveWarehouseSession
hive = HiveWarehouseSession.session(spark).build()
hive.showDatabases().show(100)

Any ideas? It feels like some setting on Livy is missing. The "failed to find any Kerberos tgt" part is especially odd -- where is it looking for the ticket, and why doesn't it see the one from kinit? @Geoffrey Shelton Okot @Hyukjin Kwon @Eric Wohlstadter
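For completeness, here is a sketch of the session request we send to Livy's REST API. The host names, realm, and jar/pyfile paths below are placeholders, not the real cluster values:

```python
import json

# Hypothetical POST /sessions payload mirroring the Ambari-side settings
# described above; all host names and file paths are illustrative only.
payload = {
    "kind": "pyspark",
    "conf": {
        "spark.jars": "/usr/hdp/current/hive_warehouse_connector/hive-warehouse-connector-assembly.jar",
        "spark.submit.pyFiles": "/usr/hdp/current/hive_warehouse_connector/pyspark_hwc.zip",
        "spark.sql.hive.hiveserver2.jdbc.url": "jdbc:hive2://hs2-host.example.com:10500/default",
        "spark.security.credentials.hiveserver2.enabled": "true",
    },
}
body = json.dumps(payload)
```

This body gets POSTed to the Livy server (e.g. http://livy-host:8999/sessions) with Content-Type: application/json.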
01-17-2019
09:20 AM
@Geoffrey Shelton Okot Important question (should I post it as a new question? It does kind of follow up from your latest comment, so I'll post it here): how should "default_tkt_enctypes", "default_tgs_enctypes" and "permitted_enctypes" ideally look for a normal HDP cluster (not a test sandbox), so that they work 100% of the time and also provide a high level of security?

1. When I tried the suggested defaults of "des3-cbc-sha1 des3-hmac-sha1 des3-cbc-sha1-kd", I got errors that the security level was too low. I then added "aes256-cts-hmac-sha1-96", but it seems more than one decent enctype is required for proper encryption?
2. The default Kerberos settings suggested by Ambari also list "des3-cbc-sha1 des3-hmac-sha1 des3-cbc-sha1-kd", but comment them out by default, so I guess it ends up using some built-in defaults, which doesn't seem stable (what if the defaults change over time or with a new version of Kerberos?).
3. Now I've added all possible configs, "aes256-cts-hmac-sha1-96 aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal", but when using ``xst -k`` from ``kadmin``, it exports only around 2-3 entries with different encryption types into the keytab, not all 8+. That suggests only some of the types actually matter.
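For anyone comparing notes, here is a minimal sketch of what the [libdefaults] enctype settings might look like on a modern cluster, assuming every KDC, keytab and client supports AES. Note that the ``:normal`` suffixes above are enctype:salt specifiers for the KDC's supported_enctypes (kdc.conf/kadmin), not valid members of krb5.conf enctype lists:

```
[libdefaults]
  # AES-only sketch; assumes all principals have AES keys
  default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
  default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
  permitted_enctypes   = aes256-cts-hmac-sha1-96 aes128-cts-hmac-sha1-96
```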
01-17-2019
08:58 AM
1 Kudo
@Geoffrey Shelton Okot Thanks, I think I solved it. You know what the problem was? Ambari wasn't creating/re-creating the keytabs and principals for HTTP/_HOST@DOMAIN.COM -- I had to do that by hand, and with the correct encryption types. Thank you for your help! It's just interesting: did you have to create the HTTP/_HOST principal yourself, or did Ambari create it automatically for you? If it did, I wonder why it didn't on my machine. By the way, I'm using OpenLDAP as the LDAP/Kerberos database.
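For anyone hitting the same thing, a sketch of the manual fix on an MIT KDC (host1.example.com, EXAMPLE.COM and the keytab path are placeholders for your own values):

```
kadmin.local -q "addprinc -randkey HTTP/host1.example.com@EXAMPLE.COM"
kadmin.local -q "xst -k /etc/security/keytabs/spnego.service.keytab HTTP/host1.example.com@EXAMPLE.COM"
klist -kt /etc/security/keytabs/spnego.service.keytab    # verify the entries and their enctypes
```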
01-16-2019
03:40 PM
Hello all, after a fresh Kerberization of an Ambari 2.7.3 / HDP 3 cluster, the HDFS NameNode isn't able to start because the hdfs user can't talk to WebHDFS. The following error is returned:

GSSException: Failure unspecified at GSS-API level (Mechanism level: Checksum failed)

It is not only from Ambari: I can reproduce this error with a simple curl call as the hdfs user:

su - hdfs
curl --negotiate -u : http://datanode:50070/webhdfs/v1/tmp?op=GETFILESTATUS

which returns

</head>
<body><h2>HTTP ERROR 403</h2>
<p>Problem accessing /webhdfs/v1/tmp. Reason:
<pre> GSSException: Failure unspecified at GSS-API level (Mechanism level: Checksum failed)</pre></p>
</body>
</html>

Overall the permissions for this user should be intact, since I'm able to run hdfs operations from the shell and kinit without problems. What could be the problem? I've tried recreating the keytabs several times and fiddling with the ACL settings in the config, but nothing works. What principal is WebHDFS expecting? The results are the same when I try accessing it with the HTTP/host@EXAMPLE.COM principal.

NB: there's nothing fancy in the HDFS settings -- mainly stock/default config.
NB2: I've added all the possible encryption types to krb5.conf that I could find, but none of these helped:

default_tkt_enctypes = aes256-cts-hmac-sha1-96 aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
default_tgs_enctypes = aes256-cts-hmac-sha1-96 aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
permitted_enctypes = aes256-cts-hmac-sha1-96 aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
@Geoffrey Shelton Okot
@Geoffrey Shelton Okot
01-16-2019
12:09 PM
This is to the point! I got the same error when I tried changing the default Ambari-suggested enctypes to my own custom ones. The custom ones work fine with an MIT KDC, though, but apparently not with Ambari.
01-15-2019
03:54 PM
Thank you, I had this exact issue with the same errors, and nothing in the comment discussion helped. However, several ambari-server restarts and dumb retries of the "Kerberos wizard" with similar settings magically resolved it. I'm not sure at all what the problem was.
01-15-2019
02:02 PM
Thanks, I've noticed that too after posting. While -S kadmin/admin worked, -S kadmin/FQDN didn't, so reconfiguring this part on the KDC solved the problem. It's just interesting that I didn't bump into this with Ambari on HDP 2.6. About the future release of Ambari -- any ETA yet? 🙂
01-15-2019
12:21 PM
@Geoffrey Shelton Okot @huzaira bashir Did you manage to solve this yet? What was the problem?
01-15-2019
11:16 AM
Hello all, I'm trying to Kerberize an Ambari 2.7.3 cluster. However, during the setup I get the following error:

Caused by: org.apache.ambari.server.serveraction.kerberos.KerberosOperationException: Unexpected error condition executing the kadmin command. STDERR: kadmin: Matching credential not found (filename: /tmp/ambari_krb_142308985016794830cc) while initializing kadmin interface
at org.apache.ambari.server.serveraction.kerberos.MITKerberosOperationHandler.invokeKAdmin(MITKerberosOperationHandler.java:323)
at org.apache.ambari.server.serveraction.kerberos.MITKerberosOperationHandler.principalExists(MITKerberosOperationHandler.java:123)
at org.apache.ambari.server.serveraction.kerberos.KerberosOperationHandler.testAdministratorCredentials(KerberosOperationHandler.java:314)
at org.apache.ambari.server.controller.KerberosHelperImpl.validateKDCCredentials(KerberosHelperImpl.java:2133)

All of the authentication settings are okay, because I am able to kinit and use the kadmin interface from the shell. It seems the problem is that Ambari does the following:

kinit -p admin/admin@EXAMPLE.COM
kadmin -c /tmp/ambari_krb_...

while it should be doing:

kinit -S kadmin/admin@EXAMPLE.COM admin/admin@EXAMPLE.COM
kadmin -c /tmp/ambari_krb...

I've replicated the two sequences and confirmed my guess: the second one works from the shell. Further, if I replace the temporary credentials Ambari generates with my own, it works. How can I fix this behaviour? This looks like a bug in Ambari's code -- which part should I edit to fix it?
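For reference, here is a sketch of the exact sequences I compared from the shell, assuming an MIT KDC (the realm and cache path are placeholders, not the values from my cluster):

```
# What Ambari effectively does -- only a plain TGT ends up in the temporary cache:
kinit -c /tmp/ambari_krb_test admin/admin@EXAMPLE.COM
kadmin -c /tmp/ambari_krb_test -q "listprincs"    # fails: Matching credential not found

# What works -- obtain a service ticket for kadmin explicitly with -S:
kinit -c /tmp/ambari_krb_test -S kadmin/admin@EXAMPLE.COM admin/admin@EXAMPLE.COM
kadmin -c /tmp/ambari_krb_test -q "listprincs"
```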