Member since 02-28-2022
171 Posts
14 Kudos Received
17 Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
| | 425 | 07-07-2025 06:35 AM |
| | 432 | 06-17-2025 09:42 AM |
| | 1619 | 04-02-2025 07:43 AM |
| | 630 | 10-18-2024 12:29 PM |
| | 10911 | 09-05-2024 09:06 AM |
08-22-2022 12:43 PM

hi @JQUIROS, if we create another keytab with the SPN below, it works with no problems: "livy-http/hostname@DOMAIN.LOCAL". the problem only happens when we use the HTTP service principal.
08-22-2022 12:36 PM

hi @JQUIROS, should the "ktutil" command be run on the cluster host or on the AD host?
08-22-2022 11:44 AM

hello cloudera community,

we are trying to create a keytab for the principal "HTTP/hostname@DOMAIN.LOCAL" with the command:

ktpass -princ HTTP/hostname@DOMAIN.LOCAL -mapuser livy-http -crypto ALL -ptype KRB5_NT_PRINCIPAL -pass password2022 -target domain.local -out c:\temp\livy-http.keytab

but when we try to validate a ticket with this keytab, it returns the error:

Exception: krb_error 24 Pre-authentication information was invalid (24)
KrbException: Pre-authentication information was invalid (24)
        at sun.security.krb5.KrbAsRep.<init>(Unknown Source)
        at sun.security.krb5.KrbAsReqBuilder.send(Unknown Source)
        at sun.security.krb5.KrbAsReqBuilder.action(Unknown Source)
        at sun.security.krb5.internal.tools.Kinit.<init>(Unknown Source)
        at sun.security.krb5.internal.tools.Kinit.main(Unknown Source)
Caused by: KrbException: Identifier doesn't match expected value (906)
        at sun.security.krb5.internal.KDCRep.init(Unknown Source)
        at sun.security.krb5.internal.ASRep.init(Unknown Source)
        at sun.security.krb5.internal.ASRep.<init>(Unknown Source)
        ... 5 more

the user "livy-http" is already created in AD with the SPN "HTTP/hostname@DOMAIN.LOCAL" attached to it. what are we doing wrong?
Labels:
- Apache Hadoop
- Cloudera Manager
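As a side note on the "Pre-authentication information was invalid (24)" post above: a frequent cause with AD-generated keytabs is that the principal in the keytab does not exactly match the one being requested (Kerberos principals are case-sensitive, so `http/...` is not `HTTP/...`). A minimal sketch, assuming the output format of `klist -kt` (the helper and sample text below are illustrative, not from the thread), that extracts the principals stored in a keytab so they can be compared against the expected SPN:

```python
# Illustrative helper: parse principal names out of `klist -kt <keytab>`
# output and compare them, case-sensitively, with the SPN you intend to use.

def principals_from_klist(output: str) -> set:
    """Collect principals from keytab listing lines of the form
    'KVNO  Timestamp  Principal'."""
    principals = set()
    for line in output.splitlines():
        parts = line.split()
        # real keytab entry lines start with a numeric KVNO
        if parts and parts[0].isdigit():
            principals.add(parts[-1])
    return principals

# sample output (assumed shape, values hypothetical)
sample = """Keytab name: FILE:livy-http.keytab
KVNO Timestamp         Principal
---- ----------------- ------------------------------
   3 08/22/22 11:30:00 HTTP/hostname@DOMAIN.LOCAL
"""

print(principals_from_klist(sample))
# -> {'HTTP/hostname@DOMAIN.LOCAL'}
```

If the set printed here differs from the SPN passed to kinit in case or realm, error 24 is expected.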
08-19-2022 08:00 AM

hi @Asfahan, checking the file with the same name as the tablet_id in the "consensus-meta" directory shows that the file is 11K on all tablet servers. as you can see in the screenshot below, the tablet_id has different sizes across the 3 tablet servers.
08-19-2022 06:26 AM

hi @Asfahan
1 - this parameter is defined in the kudu settings in cloudera: default_num_replicas = 3
2 - below is the result of the "fsck" command you asked for: the command did not return the tablet_id size on the TS hosts
08-19-2022 05:51 AM

hi @Deepan_N, running the command below directly in python3:

r0.headers["www-authenticate"]

returns the following error:

Python 3.6.8 (default, Nov 16 2020, 16:55:22)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> r0.headers["www-authenticate"]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'r0' is not defined
>>>

below is the screenshot of the commands executed in bash:
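For context on the NameError above: `r0` only exists inside the Python session where the `requests.post(...)` call was made, so it cannot be read from a freshly started interpreter. Once a response object is available in the same session, the server's challenge can be inspected as sketched below (a plain dict stands in for `r0.headers`, and the header value is assumed for illustration):

```python
# Illustrative check: does the server's WWW-Authenticate challenge offer
# SPNEGO ("Negotiate")? A dict stands in for a real response's headers.

def offers_negotiate(hdrs: dict) -> bool:
    # HTTP header names are case-insensitive, so normalise before lookup
    lowered = {k.lower(): v for k, v in hdrs.items()}
    return "negotiate" in lowered.get("www-authenticate", "").lower()

headers = {"WWW-Authenticate": "Negotiate"}  # assumed challenge value
print(offers_negotiate(headers))  # -> True
```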
08-18-2022 11:19 AM

hello cloudera community,

we identified that there are tablet_ids with different volumes among the cluster's tablet servers: the same tablet_id is 700MB on one tablet server and 190MB on another. how can we equalize this volume across all tablet servers? we believe that with this size difference between tablet_id replicas, some hosts are using a lot of memory, practically 90% of what was configured.

1 - we are using cloudera express 5.16.x
2 - a rebalance has already been performed on kudu recently
Labels:
- Apache Kudu
- Cloudera Manager
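To make the size difference described above concrete, the skew of a tablet's replicas can be expressed as the ratio between its largest and smallest on-disk replica. The sketch below uses assumed numbers taken from the description in the post (700MB vs 190MB), not real cluster data:

```python
# Illustrative sketch: measure how unevenly one tablet_id's replicas are
# sized across tablet servers. The sizes are assumed example values.

replica_sizes_mb = {
    "tablet-abc123": {"ts1": 700, "ts2": 190, "ts3": 650},
}

def skew(sizes: dict) -> float:
    """Ratio of the largest replica to the smallest for one tablet."""
    values = list(sizes.values())
    return max(values) / min(values)

for tablet_id, sizes in replica_sizes_mb.items():
    print(tablet_id, round(skew(sizes), 2))  # -> tablet-abc123 3.68
```

A high ratio between replicas of the same tablet_id often reflects per-server compaction or maintenance state rather than missing data, which is why a rebalance alone may not equalize it.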
08-17-2022 10:52 AM

hi @Deepan_N, the "kinit" command was successfully executed, and the "klist" command returns the validity of the ticket, with more than 10h left to use it. both modes have been tested:

curl:

curl -v -u : --negotiate -X POST --data '{"className": "org.apache.spark.examples.SparkPi", "jars": ["/tmp/spark-examples-1.6.0-cdh5.16.1-hadoop2.6.0-cdh5.16.1.jar"], "name": "livy-test", "file": "hdfs:///tmp/spark-examples-1.6.0-cdh5.16.1-hadoop2.6.0-cdh5.16.1.jar", "args": [10]}' -H "Content-Type: application/json" -H "X-Requested-By: User" http://localhost:8998/batches

python:

import json, pprint, requests, textwrap
from requests_kerberos import HTTPKerberosAuth
host = 'http://localhost:8998'
headers = {'Requested-By': 'livy', 'Content-Type': 'application/json', 'X-Requested-By': 'livy'}
auth = HTTPKerberosAuth()
data = {'className': 'org.apache.spark.examples.SparkPi', 'jars': ["/tmp/spark-examples-1.6.0-cdh5.16.1-hadoop2.6.0-cdh5.16.1.jar"], 'name': 'livy-test1', 'file': 'hdfs:///tmp/spark-examples-1.6.0-cdh5.16.1-hadoop2.6.0-cdh5.16.1.jar', 'args': ["10"]}
r0 = requests.post(host + '/batches', data=json.dumps(data), headers=headers, auth=auth)
r0.json()

but unfortunately both ways return the error:

<html> <head> <meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/> <title>Error 401 </title> </head> <body> <h2>HTTP ERROR: 401</h2> <p>Problem accessing /batches. Reason: <pre> Authentication required</pre></p> <hr /><a href="http://eclipse.org/jetty">Powered by Jetty:// 9.3.24.v20180605</a><hr/> </body> </html>

PS¹: the cluster has kerberos enabled
PS²: the user that validated the ticket with "kinit" is the livy user created in AD (active directory)
08-16-2022 09:25 AM

hello cloudera community,
we have installed livy directly on a cloudera cluster host and we are now trying to connect to livy using kerberos, but we are getting the following error:
<html> <head> <meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/> <title>Error 401 </title> </head> <body> <h2>HTTP ERROR: 401</h2> <p>Problem accessing /sessions. Reason: <pre> Authentication required</pre></p> <hr /><a href="http://eclipse.org/jetty">Powered by Jetty:// 9.3.24.v20180605</a><hr/> </body> </html>
how can we make this connection work so we can retest livy?
we are testing the connection with a python3 script as follows:
import json, pprint, requests, textwrap
from requests_kerberos import HTTPKerberosAuth
host='http://localhost:8998'
data = {'kind': 'spark'}
headers = {'Requested-By': 'livy','Content-Type': 'application/json'}
auth=HTTPKerberosAuth()
r0 = requests.post(host + '/sessions', data=json.dumps(data), headers=headers, auth=auth)
r0.json()
Labels:
- Apache Hadoop
- Cloudera Manager
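One detail worth checking in the two Livy posts above (an observation about how SPNEGO works in general, not something stated in the thread): the client requests a service ticket for `HTTP/<url-hostname>@REALM`, so calling Livy via `http://localhost:8998` implies the principal `HTTP/localhost@REALM`, which normally does not exist in the keytab; using the host's FQDN in the URL makes the implied principal match. A tiny helper showing which SPN a given URL implies:

```python
# Illustrative helper: derive the service principal that a SPNEGO client
# would request for a given URL. Hostnames and realm are example values.

from urllib.parse import urlparse

def implied_spn(url: str, realm: str) -> str:
    host = urlparse(url).hostname
    return f"HTTP/{host}@{realm}"

print(implied_spn("http://localhost:8998", "DOMAIN.LOCAL"))
# -> HTTP/localhost@DOMAIN.LOCAL
print(implied_spn("http://hostname.domain.local:8998", "DOMAIN.LOCAL"))
# -> HTTP/hostname.domain.local@DOMAIN.LOCAL
```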
08-10-2022 05:43 AM

good morning cloudera community,

we are using apache ambari version 2.6.2.2 with HDP 2.6.5. we need to configure the parameter below in HDFS, but unfortunately we are not finding this option in the HDFS configuration. how can we configure this parameter in HDFS?

parameter: dfs.client.block.write.replace-datanode-on-failure.policy=ALWAYS

PS: would it be done via the "custom hdfs-site" option, by clicking "add property" and adding the parameter in the "properties" box?
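On the PS above: in Ambari, properties that have no dedicated field in the HDFS configuration sections can generally be added through "Custom hdfs-site" with "Add Property", which writes them into hdfs-site.xml on the managed hosts. The resulting fragment would look like this (a sketch of the expected hdfs-site.xml entry, assuming that path is taken):

```
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>ALWAYS</value>
</property>
```

A restart of the affected HDFS services is typically required for the new property to take effect.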
Labels: