Member since: 02-28-2022
Posts: 144
Kudos Received: 13
Solutions: 14
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 100 | 10-18-2024 12:29 PM
 | 1957 | 09-05-2024 09:06 AM
 | 2053 | 08-08-2024 05:10 AM
 | 1762 | 05-16-2024 05:33 AM
 | 557 | 04-22-2024 10:24 AM
06-21-2023
01:08 PM
I managed to fix the problem: I removed the "Navigator Metadata Server" service from the cluster and added it back again.
05-30-2023
11:14 AM
Hello Cloudera community, is it possible to know the date and time that a Kudu table was last accessed/read? I know there is a "transient_lastDdlTime" property on a Kudu table, but that field is updated only when the table itself is modified.
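For context, a hedged example of where that property is usually visible (the table name below is a placeholder, not taken from this post): transient_lastDdlTime appears among the table parameters when the table is described from the Impala shell.

# Show table metadata; transient_lastDdlTime is listed under "Table Parameters".
# "default.my_kudu_table" is a placeholder name.
impala-shell -q "DESCRIBE FORMATTED default.my_kudu_table"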
Labels:
- Apache Kudu
- Cloudera Manager
05-30-2023
11:09 AM
OK! @Juanes 😉 thanks for the clarification.
01-20-2023
08:06 AM
Invalid OperationHandle: OperationHandle

This exception occurs when there are multiple HiveServer2 (HS2) instances and clients reach them through ZooKeeper/Knox with failover configured. When a query (irrespective of the number of rows) takes too long and HS2 cannot respond within the defined timeout, ZK/Knox fails over to the next available HS2 instance. Because that instance is unaware of the query/operation handle, it throws the Invalid OperationHandle exception.

To solve this problem:
- Check whether the query can be optimized to run faster, for example by adding a filter or by splitting the data into multiple tables and querying them separately.
- Check whether HS2 is utilized beyond its capacity, for example 200 connections at a given point in time against a 24 GB heap for HS2/HMS.
- Check whether the HMS backend database is unable to keep up with requests from HMS.
- Check that the YARN queue has enough capacity to serve the query; otherwise the query will sit in a waiting state.
- Check that HDFS is healthy and the NameNode can respond to requests without delays.
- Consider that Ranger sometimes needs to check a large number of files/directories in HDFS before the query gets executed.
- If a load balancer is used, enable sticky sessions so that a one-to-one relationship is maintained for open connections, avoiding failover to another HS2 instance.

The above explanation holds for any version of Hive.
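As a hedged illustration (the ZooKeeper hostnames and the znode namespace below are placeholders, not taken from this post), this is the usual shape of a Beeline connection string that uses ZooKeeper service discovery, so the client always resolves a live HS2 instance:

# Connect through ZooKeeper service discovery instead of a single HS2 host.
# zk1/zk2/zk3 and the "hiveserver2" namespace are illustrative defaults.
beeline -u "jdbc:hive2://zk1:2181,zk2:2181,zk3:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2"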
01-12-2023
01:22 PM
There can be several reasons for this. Run the host inspector to get a better understanding of the issue: CM -> Hosts -> All Hosts -> Inspect Hosts
10-18-2022
05:35 AM
Hi @ask_bill_brooks, when I run the command to list the interpreters available for Zeppelin, the list shows that Python is supported. I ran:

install-interpreter.sh --list

and it returns the following:

OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/jars/slf4j-log4j12-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/cloudera/parcels/CDH-7.1.7-1.cdh7.1.7.p1000.24102687/jars/slf4j-simple-1.7.30.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
alluxio         Alluxio interpreter
angular         HTML and AngularJS view rendering
beam            Beam interpreter
bigquery        BigQuery interpreter
cassandra       Cassandra interpreter built with Scala 2.11
elasticsearch   Elasticsearch interpreter
file            HDFS file interpreter
flink           Flink interpreter built with Scala 2.11
hbase           Hbase interpreter
ignite          Ignite interpreter built with Scala 2.11
jdbc            Jdbc interpreter
kylin           Kylin interpreter
lens            Lens interpreter
livy            Livy interpreter
md              Markdown support
pig             Pig interpreter
python          Python interpreter
scio            Scio interpreter
shell           Shell command

Given this, I think it is possible to use the Python interpreter. The problem is that we are not able to make the Python interpreter work.
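For reference, a hedged sketch of how the Python interpreter would normally be installed by name (the parcel path below is an assumed default, not confirmed in this post); the Zeppelin service needs a restart afterwards:

# Install the python interpreter binding by name, then restart Zeppelin.
# The install directory is assumed to be the default parcel location.
cd /opt/cloudera/parcels/CDH/lib/zeppelin
./bin/install-interpreter.sh --name python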
10-17-2022
09:14 AM
Hello Cloudera community,
I'm having trouble using Python in Zeppelin; when I run a simple script it returns the error below:
org.apache.thrift.transport.TTransportException: Socket is closed by peer.
    at org.apache.thrift.transport.TIOStreamTransport.read(TIOStreamTransport.java:130)
    at org.apache.thrift.transport.TTransport.readAll(TTransport.java:86)
    at org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:455)
    at org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:354)
    at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:243)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:77)
    at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.recv_createInterpreter(RemoteInterpreterService.java:182)
    at org.apache.zeppelin.interpreter.thrift.RemoteInterpreterService$Client.createInterpreter(RemoteInterpreterService.java:165)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter$2.call(RemoteInterpreter.java:169)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter$2.call(RemoteInterpreter.java:165)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreterProcess.callRemoteFunction(RemoteInterpreterProcess.java:135)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.internal_create(RemoteInterpreter.java:165)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.open(RemoteInterpreter.java:132)
    at org.apache.zeppelin.interpreter.remote.RemoteInterpreter.getFormType(RemoteInterpreter.java:299)
    at org.apache.zeppelin.notebook.Paragraph.jobRun(Paragraph.java:408)
    at org.apache.zeppelin.scheduler.Job.run(Job.java:188)
    at org.apache.zeppelin.scheduler.RemoteScheduler$JobRunner.run(RemoteScheduler.java:315)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$201(ScheduledThreadPoolExecutor.java:180)
    at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:293)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Could you help to solve this problem?
PS¹: the simple script:
%python
number1 = 2
number2 = 2
total = number1 + number2
print(total)
PS²: Zeppelin was installed by Cloudera Manager 7.6.x, CDP version 7.1.x.
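A hedged first debugging step (the log directory and file pattern below are assumptions based on a typical CDP layout, not taken from this post): the Python interpreter process usually dies on startup before the Thrift socket closes, and the underlying cause lands in its own interpreter log.

# Inspect the newest python interpreter log for the real startup failure
# (missing python binary, permissions, etc.); path and pattern are assumed defaults.
ls -t /var/log/zeppelin/zeppelin-interpreter-python-*.log | head -1 | xargs tail -n 100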
Labels:
- Apache Zeppelin
10-14-2022
07:06 AM
I managed to solve it. The canary timeouts were changed:
- ZooKeeper Canary Connection Timeout = 30s
- ZooKeeper Canary Session Timeout = 1m
- ZooKeeper Canary Operation Timeout = 30s
With that, the error no longer appears and the status is 100% healthy.
09-30-2022
06:32 AM
Hello Cloudera community, I solved the problem by pointing Spark and Spark2 at the hive-site.xml file, so the Spark jobs submitted through Livy from the Jupyter notebook ran successfully.
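A hedged sketch of what pointing Spark at hive-site.xml typically looks like on a CDH/CDP node (the source and destination paths below are common defaults, not confirmed in the post): the Hive client configuration is copied, or symlinked, into the Spark configuration directory so Spark jobs can reach the Hive Metastore.

# Make the Hive client config visible to Spark; paths are typical defaults and
# may differ per cluster. A symlink works equally well.
cp /etc/hive/conf/hive-site.xml /etc/spark/conf/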
09-22-2022
04:16 AM
@yagoaparecidoti It looks like this particular user does not have permission to connect to HMS. You can add this user, or put "*", under this configuration: CM -> Hive -> Configuration -> search for "hive_proxy_user_groups_list". Then restart Hive and run "show databases". Another possibility is that the client cannot locate hive-site.xml on the node, which is required to connect to the Hive Metastore.
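As a hedged verification step (the host name below is a placeholder; substitute the actual HS2 endpoint or a ZooKeeper discovery URL), the "show databases" check can be run from Beeline after the restart:

# Quick connectivity/permission check against HS2 after restarting Hive.
# "hs2-host" is illustrative only.
beeline -u "jdbc:hive2://hs2-host:10000/default" -e "SHOW DATABASES;"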