Member since
02-28-2022
93
Posts
0
Kudos Received
2
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
| 163 | 10-14-2022 07:06 AM
| 758 | 09-30-2022 06:32 AM
10-18-2022
05:35 AM
hi @ask_bill_brooks, when I run the command to list the interpreters available for Zeppelin, the list shows that Python is supported. I run the command:
install-interpreter.sh --list
and it returns the following:
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/jars/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/jars/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
alluxio         Alluxio interpreter
angular         HTML and AngularJS view rendering
beam            Beam interpreter
bigquery        BigQuery interpreter
cassandra       Cassandra interpreter built with Scala 2.11
elasticsearch   Elasticsearch interpreter
file            HDFS file interpreter
flink           Flink interpreter built with Scala 2.11
hbase           Hbase interpreter
ignite          Ignite interpreter built with Scala 2.11
jdbc            Jdbc interpreter
kylin           Kylin interpreter
lens            Lens interpreter
livy            Livy interpreter
md              Markdown support
pig             Pig interpreter
python          Python interpreter
scio            Scio interpreter
shell           Shell command
With this, I think it should be possible to use the Python interpreter. The problem is that we are not able to make the Python interpreter work.
... View more
10-17-2022
01:02 PM
hello cloudera community,
we are having a problem installing the Python interpreter on Zeppelin.
when running the command:
install-interpreter.sh --name python
it prints the following information:
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/jars/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/jars/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
Install python(org.apache.zeppelin:zeppelin-python:0.8.0) in /opt/cloudera/parcels/CDH/lib/zeppelin/interpreter/python ...
it stays at this message for almost 2 minutes and then returns the error:
org.sonatype.aether.RepositoryException: Unable to fetch dependencies for org.apache.zeppelin:zeppelin-python:0.8.0
	at org.apache.zeppelin.dep.DependencyResolver.getArtifactsWithDep(DependencyResolver.java:179)
	at org.apache.zeppelin.dep.DependencyResolver.loadFromMvn(DependencyResolver.java:128)
	at org.apache.zeppelin.dep.DependencyResolver.load(DependencyResolver.java:76)
	at org.apache.zeppelin.dep.DependencyResolver.load(DependencyResolver.java:93)
	at org.apache.zeppelin.dep.DependencyResolver.load(DependencyResolver.java:85)
	at org.apache.zeppelin.interpreter.install.InstallInterpreter.install(InstallInterpreter.java:170)
	at org.apache.zeppelin.interpreter.install.InstallInterpreter.install(InstallInterpreter.java:134)
	at org.apache.zeppelin.interpreter.install.InstallInterpreter.install(InstallInterpreter.java:126)
	at org.apache.zeppelin.interpreter.install.InstallInterpreter.main(InstallInterpreter.java:278)
Caused by: java.lang.NullPointerException
	at org.sonatype.aether.impl.internal.DefaultRepositorySystem.resolveDependencies(DefaultRepositorySystem.java:352)
	at org.apache.zeppelin.dep.DependencyResolver.getArtifactsWithDep(DependencyResolver.java:176)
	... 8 more
can you help us solve this problem?
PS: Zeppelin was installed by Cloudera Manager version 7.6.x, CDP version 7.1.x
... View more
Labels:
10-17-2022
09:14 AM
hello cloudera community,
I'm having trouble using Python in Zeppelin; when I run a simple script it returns the error below:
org.apache.thrift.transport.TTransportException: Socket is closed by peer.
	at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:130)
	at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
	at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:455)
	at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:354)
	at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:243)
	at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
	at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_createInterpreter(RemoteInterpreterService.java:182)
	at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.createInterpreter(RemoteInterpreterService.java:165)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreter$2.call(RemoteInterpreter.java:169)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreter$2.call(RemoteInterpreter.java:165)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:135)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.internal_create(RemoteInterpreter.java:165)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.open(RemoteInterpreter.java:132)
	at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:299)
	at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:408)
	at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
	at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:315)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
could you help us solve this problem?
PS¹: the simple script:
%python
number1 = 2
number2 = 2
total = number1 + number2
print(total)
PS²: Zeppelin was installed by Cloudera Manager version 7.6.x, CDP version 7.1.x
... View more
Labels:
- Apache Zeppelin
10-14-2022
07:20 AM
hello cloudera community, we are having a problem in Impala after enabling Kerberos on the CDP cluster.
Only the Impala StateStore role starts healthy; the other roles are in bad status.
Checking the log of the Impala Catalog Server role, the following appears:
-------------
11:07:50.584 AM INFO cc:170 SASL message (Kerberos (internal)): GSSAPI client step 1
11:07:50.587 AM INFO cc:78 Couldn't open transport for hostname:24000 (No more data to read.)
11:07:50.587 AM INFO cc:94 Unable to connect to hostname:24000
11:07:50.587 AM INFO cc:274 statestore registration unsuccessful: Couldn't open transport for hostname:24000 (No more data to read.)
11:07:50.587 AM FATAL cc:87 Couldn't open transport for hostname:24000 (No more data to read.) . Impalad exiting. Wrote minidump to /var/log/impala-minidumps/catalogd/7ae4848b-cd34-4d4c-96cfeaa3-bd4a584f.dmp
-------------
how can we solve this problem? this problem is urgent!
... View more
Labels:
- Apache Impala
- Cloudera Manager
10-14-2022
07:06 AM
I managed to solve it. The canary timeouts were changed:
ZooKeeper Canary Connection Timeout = 30s
ZooKeeper Canary Session Timeout = 1m
ZooKeeper Canary Operation Timeout = 30s
With that, the error no longer appears and the status is 100% healthy.
... View more
10-14-2022
06:23 AM
hello cloudera community, we are having a problem with ZooKeeper on CDP after enabling Kerberos on the cluster.
The ZooKeeper instances are healthy (status green), but the overall ZooKeeper status shows the message:
"Canary test of client connection to ZooKeeper and execution of basic operations succeeded though a session could not be established with one or more servers"
how can we solve this problem?
... View more
Labels:
- Apache Zookeeper
- Cloudera Manager
09-30-2022
06:32 AM
hello cloudera community, we solved the problem by pointing the hive-site.xml file into the Spark and Spark2 configuration; with that, Spark jobs through Livy in Jupyter Notebook ran successfully.
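For reference, the fix amounts to making the Hive client configuration visible to Spark. A minimal sketch of that copy step in Python — the conf paths here are assumptions, adjust them to your actual CDH layout:

```python
import shutil
from pathlib import Path

def point_spark_at_hive(hive_conf="/etc/hive/conf",
                        spark_confs=("/etc/spark/conf", "/etc/spark2/conf")):
    """Copy hive-site.xml from the Hive conf dir into each Spark conf dir,
    so SparkSQL (and hence Livy jobs) can locate the Hive metastore."""
    src = Path(hive_conf) / "hive-site.xml"
    copied = []
    for conf in spark_confs:
        dst = Path(conf) / "hive-site.xml"
        shutil.copy(src, dst)
        copied.append(dst)
    return copied
```

The same effect can be had with a symlink instead of a copy; a copy avoids surprises if the Hive conf dir is later regenerated.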
... View more
09-29-2022
11:55 AM
hello cloudera community,
we are having problems using Livy with a Spark job to read Hive from Jupyter Notebook.
when we run a simple query, for example:
spark.sql("show databases").show()
it returns the error below:
"org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.RuntimeException: Unable to instantiate org.apache.hadoop.hive.ql.metadata.SessionHiveMetaStoreClient"
Could you help us with this setup?
ps: we are using CDH 5.16.x
... View more
Labels:
- Apache Hadoop
- Apache Hive
- Apache Spark
09-21-2022
01:00 PM
hello cloudera community, we are having a problem accessing the Hive CLI with a certain user, and the spark-shell too.
When executing the query "show databases" in the Hive CLI, it returns the error:
"FAILED: SemanticException org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.transport.TTransportException: java.net.SocketException: Connection reset"
When executing the "show databases" query in spark-shell, it returns the errors:
"org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.hadoop.hive.ql.metadata.HiveException: org.apache.thrift.transport.TTransportException;"
"WARN metastore.RetryingMetaStoreClient: MetaStoreClient lost connection. Attempting to reconnect. org.apache.thrift.transport.TTransportException"
When we use beeline and run the query "show databases", it works without problems.
Could you help us with this problem?
We are using Cloudera Manager 5.16.1 and CDH 5.16.1; the cluster has Kerberos enabled and Sentry manages the permissions to the cluster databases.
... View more
09-15-2022
06:54 AM
hi @araujo
the userPrincipalName of the livy user is: livy/hostname_livy_server@DOMAIN.LOCAL
the userPrincipalName of the livy-http user is: livy-http@DOMAIN.LOCAL
running the command "kinit livy":
running the command "kinit livy-http":
running the "kinit" command with the keytab created for the livy user:
running the "kinit" command with the keytab created for the livy-http user:
we've been facing this problem for months; we haven't found the solution yet.
... View more
09-12-2022
11:58 AM
hi @Scharan, does the same apply to Sentry?
... View more
09-12-2022
10:55 AM
hi @Scharan, so does Ranger only enforce the permissions for beeline or for connections that go through the HiveServer2 address?
... View more
09-12-2022
09:11 AM
we are using Ambari 2.6.2.2 with HDP 2.6.5.
The permissions are being managed by Ranger, but checking the Ranger settings, it is only managing the /apps/hive/warehouse directory on HDFS.
The problem we are having is in the /area/data/pll directory on HDFS; this directory has an ACL with rwx permission for the user.
Using beeline to create an external table returns the error:
Error: Error while compiling statement: FAILED: HiveAccessControlException Permission denied: user [service_user] does not have [ALL] privilege on [hdfs://service1/area/data/pll] (state=42000,code=40000)
But using the Hive CLI does not return an error; the table is created successfully.
What could be causing this problem?
... View more
08-30-2022
05:41 AM
hi @araujo
yes, we know the passwords, because we created these two users from scratch.
Before creating the keytabs for the two users, we managed to kinit both users without problems ("kinit user").
After creating the keytabs, kinit only works with the keytab, and only for the livy user; when we try to run kinit with the livy-http user's keytab, it displays the error "kinit: Preauthentication failed while getting initial credentials".
The userPrincipalName of each user is:
livy: livy/hostname_livy_server@DOMAIN.LOCAL
livy-http: HTTP/hostname_livy_server@DOMAIN.LOCAL
... View more
08-30-2022
05:32 AM
hi @araujo
I already know and have used the command "SHOW ROLE GRANT GROUP <group name>"; I thought there was a command that did the opposite. Well, I will check in MySQL then and report back if it worked.
... View more
08-29-2022
01:46 PM
hello cloudera community, how can we find out which groups are tied to a Sentry role from the command line?
... View more
08-29-2022
01:38 PM
hello cloudera community, could you help us with a problem regarding the /var/log directory?
At certain times (we couldn't identify the exact moment), the permissions of the /var/log directory change to 750, and with that the CDH 6.3.x cluster services fail because they can't write their logs.
Have you ever had a problem like this? Did you manage to solve it?
We had to leave a crontab job running every 5 minutes to always change it back to 755.
ps: SELinux is disabled
ps: the firewall is disabled
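The crontab workaround described above can be sketched as a small Python check (the path and modes are the ones from this post; it could be run from cron instead of a raw chmod so it only touches the directory when something actually changed it):

```python
import os
import stat

def ensure_mode(path="/var/log", wanted=0o755):
    """Reset `path` to `wanted` permissions if something changed them.

    Mirrors the 5-minute cron job: services need a traversable
    /var/log (755), and some process keeps flipping it to 750.
    """
    current = stat.S_IMODE(os.stat(path).st_mode)
    if current != wanted:
        os.chmod(path, wanted)
    return stat.S_IMODE(os.stat(path).st_mode)
```

This only papers over the symptom, of course; auditd watching /var/log for attribute changes would identify which process is flipping the mode.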
... View more
- Tags:
- cdh-6.3
- Permissions
Labels:
- Apache Hadoop
- Cloudera Manager
08-29-2022
06:33 AM
hi @araujo
the AD has two users: livy and livy-http.
The livy user has the SPN livy/hostname@DOMAIN.LOCAL and it works without problems in kinit.
The livy-http user has the SPN HTTP/hostname@DOMAIN.LOCAL but it shows the error described above.
... View more
08-26-2022
05:29 AM
hi @araujo
to use it in the Livy service, as described in the processes in the links below:
https://danielfrg.com/blog/2018/08/spark-livy/
https://enterprise-docs.anaconda.com/en/latest/admin/advanced/config-livy-server.html
... View more
08-23-2022
09:21 AM
hi @JQUIROS
the command to create the entry was:
add_entry -password -p HTTP/hostname@DOMAIN.LOCAL -k 1 -e rc4-hmac
then we exported the keytab with the command:
wkt http.keytab
and then to validate the ticket, the command:
KRB5_TRACE=/dev/stdout kinit -kt http.keytab HTTP/hostname@DOMAIN.LOCAL
presented the error:
Getting initial credentials for HTTP/hostname@DOMAIN.LOCAL
Looked up etypes in keytab: rc4-hmac
Sending unauthenticated request
Sending request (237 bytes) to DOMAIN.LOCAL
Sending initial UDP request to dgram 172.22.22.22:88
Received answer (229 bytes) from dgram 172.22.22.22:88
Response was from master KDC
Received error from KDC: -1765328359/Additional pre-authentication required
Preauthenticating using KDC method data
Processing preauth types: PA-PK-AS-REQ (16), PA-PK-AS-REP_OLD (15), PA-ETYPE-INFO2 (19), PA-ENC-TIMESTAMP (2)
Selected etype info: etype rc4-hmac, salt "", params ""
Retrieving HTTP/hostname@DOMAIN.LOCAL from FILE:http.keytab (vno 0, enctype rc4-hmac) with result: 0/Success
AS key obtained for encrypted timestamp: rc4-hmac/20C1
Encrypted timestamp (for 1661441475.76781): plain 301AA011199992303232BED, encrypted 3625254347B405C2739999992C5C50F451C0A477AE3AD421DF
Preauth module encrypted_timestamp (2) (real) returned: 0/Success
Produced preauth for next request: PA-ENC-TIMESTAMP (2)
Sending request (313 bytes) to DOMAIN.LOCAL
Sending initial UDP request to dgram 172.22.22.22:88
Received answer (196 bytes) from dgram 172.22.22.22:88
Response was from master KDC
Received error from KDC: -1765328360/Preauthentication failed
Preauthenticating using KDC method data
Processing preauth types: PA-ETYPE-INFO2 (19)
Selected etype info: etype rc4-hmac, salt "", params ""
kinit: Preauthentication failed while getting initial credentials
... View more
08-23-2022
08:13 AM
hi @JQUIROS
we were able to export the keytab with the command:
write_kt http.keytab
but when validating the ticket with the command:
kinit -kt http.keytab HTTP/hostname@DOMAIN.LOCAL
we got the same error:
kinit: Preauthentication failed while getting initial credentials
... View more
08-23-2022
08:06 AM
hi @JQUIROS
using the ktutil command it was possible to create the principal: HTTP/hostname@DOMAIN.LOCAL
how do we export the keytab now?
... View more
08-23-2022
07:47 AM
Hello cloudera community, we have the following problem: we are using Power BI with the Hortonworks ODBC driver to connect to Hive in an Ambari 2.6.2.2 cluster, HDP 2.6.5.
The connection is made successfully, but when running a query on a table that has 23 thousand rows, it returns the error below:
OLE DB or ODBC error: [DataSource.Error] ERROR [HY000] [Hortonworks][Hardy] (35) Error from server: error code: '0' error message: 'Invalid OperationHandle: OperationHandle [opType=EXECUTE_STATEMENT, getHandleIdentifier()=0345789f-6c9a-4990-adf5-f823232338]'.
If we run the query with a limit of at most 10,000 rows in the select, the result is OK.
What could be causing this problem?
PS: there are queries in Power BI using the same ODBC driver on other tables that have more than 200 thousand rows and the result is OK too.
... View more
08-22-2022
12:51 PM
hi @JQUIROS
we need to create the HTTP SPN keytab to use in the Livy service, as described in the link below:
https://enterprise-docs.anaconda.com/en/latest/admin/advanced/config-livy-server.html
in the link above, kadmin was used, but we don't have kadmin; we use AD.
... View more
08-22-2022
12:43 PM
hi @JQUIROS
if we create another keytab with the SPN below:
"livy-http/hostname@DOMAIN.LOCAL"
it works, no problems. the problem is only when using HTTP.
... View more
08-22-2022
12:36 PM
hi @JQUIROS, should the "ktutil" command be run on the cluster host or on the AD host?
... View more
08-22-2022
11:44 AM
hello cloudera community, we are trying to create a keytab with the principal:
"HTTP/hostname@DOMAIN.LOCAL"
with the command:
ktpass -princ HTTP/hostname@DOMAIN.LOCAL -mapuser livy-http -crypto ALL -ptype KRB5_NT_PRINCIPAL -pass password2022 -target domain.local -out c:\temp\livy-http.keytab
but when we try to validate the ticket with this keytab, it returns the error:
Exception: krb_error 24 Pre-authentication information was invalid (24) Pre-authentication information was invalid
KrbException: Pre-authentication information was invalid (24)
	at sun.security.krb5.KrbAsRep.<init>(Unknown Source)
	at sun.security.krb5.KrbAsReqBuilder.send(Unknown Source)
	at sun.security.krb5.KrbAsReqBuilder.action(Unknown Source)
	at sun.security.krb5.internal.tools.Kinit.<init>(Unknown Source)
	at sun.security.krb5.internal.tools.Kinit.main(Unknown Source)
Caused by: KrbException: Identifier doesn't match expected value (906)
	at sun.security.krb5.internal.KDCRep.init(Unknown Source)
	at sun.security.krb5.internal.ASRep.init(Unknown Source)
	at sun.security.krb5.internal.ASRep.<init>(Unknown Source)
	... 5 more
the user "livy-http" is already created in AD with the SPN "HTTP/hostname@DOMAIN.LOCAL" attached to it.
what are we doing wrong?
... View more
Labels:
- Apache Hadoop
- Cloudera Manager
08-19-2022
08:00 AM
hi @Datano
checking the file with the same name as the tablet_id in the "consensus-meta" directory shows that the file is 11K on all tablet servers.
As you can see in the screenshot below, the tablet_id has different sizes across the 3 TS.
... View more
08-19-2022
06:26 AM
hi @Datano
1 - this parameter was defined in the Kudu settings in Cloudera: default_num_replicas = 3
2 - below is the result of the "fsck" command you asked for: the command did not return the tablet_id size on the TS hosts
... View more
08-19-2022
05:51 AM
hi @Deepan_N
running the command below directly in python3:
r0.headers["www-authenticate"]
returns the following error:
Python 3.6.8 (default, Nov 16 2020, 16:55:22)
[GCC 4.8.5 20150623 (Red Hat 4.8.5-44)] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> r0.headers["www-authenticate"]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'r0' is not defined
>>>
below is the screenshot of the commands executed in bash:
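The NameError just means r0 was never assigned in that interpreter session; the response object has to be created before its headers can be read. A minimal standard-library sketch, assuming a hypothetical SPNEGO-protected endpoint URL:

```python
from urllib.request import urlopen
from urllib.error import HTTPError, URLError

def auth_header(url, timeout=5):
    """Fetch `url` and return its WWW-Authenticate header, or None.

    A 401 response is the interesting case here: it carries the
    SPNEGO challenge (e.g. "Negotiate") in WWW-Authenticate.
    """
    try:
        with urlopen(url, timeout=timeout) as r0:
            return r0.headers.get("WWW-Authenticate")
    except HTTPError as err:
        # 4xx/5xx responses still expose their headers
        return err.headers.get("WWW-Authenticate")
    except (URLError, OSError):
        return None  # host unreachable / connection refused

# Example (hypothetical host and port):
# auth_header("http://hostname:8998/sessions")
```

With `requests` the shape is the same: assign `r0 = requests.get(url)` first, then read `r0.headers["www-authenticate"]`.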
... View more