Member since: 08-07-2017
Posts: 144
Kudos Received: 3
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
| 2263 | 03-05-2019 12:48 AM
| 9405 | 11-06-2017 07:28 PM
03-05-2019
12:48 AM
1 Kudo
@cjervis, I forgot to run FLUSH PRIVILEGES after granting the privileges. I did that and it worked. Thanks,
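For anyone hitting the same Sentry metastore setup issue, the fix described above corresponds to a MySQL sequence like the one below. The database name, user name, and password are placeholders for illustration, not the actual values used:

```sql
-- Placeholder names; substitute your own database, user, and password.
CREATE DATABASE sentry;
CREATE USER 'sentry'@'%' IDENTIFIED BY 'sentry_password';
GRANT ALL PRIVILEGES ON sentry.* TO 'sentry'@'%';
-- Reload the grant tables so the new privileges take effect immediately.
FLUSH PRIVILEGES;
```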
02-27-2019
08:36 PM
Hello All, the issue is resolved. Thanks,
02-26-2019
11:19 PM
Hello All,
I am trying to add the Sentry service to our Cloudera CDH 5.12 cluster.
As part of that, I created a database and a user and granted the user all privileges on the Sentry database.
But I am getting the error below:
Logon denied for user/password. Able to find the database server and database, but logon request was rejected.
Please suggest.
Thanks,
Labels:
- Apache Sentry
- Cloudera Manager
02-06-2019
02:21 AM
Hello,
We are running the Spark Thrift service to run Hive queries. We are getting the error below for the Spark Thrift service, and it stops afterwards.
19/02/05 23:26:45 ERROR yarn.ApplicationMaster: Uncaught exception:
org.apache.spark.rpc.RpcTimeoutException: Cannot receive any reply in 120 seconds. This timeout is controlled by spark.rpc.askTimeout
at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Failure.recover(Try.scala:185)
at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at org.spark-project.guava.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:133)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
at scala.concurrent.Promise$class.complete(Promise.scala:55)
at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
Can you please help me understand why this happens? Thanks,
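For what it's worth, a common first step for this RpcTimeoutException is raising the timeout the message names, along with the related network timeout. The values below are illustrative, not tuned recommendations:

```properties
# spark-defaults.conf -- illustrative values, assuming the default 120s is too short
spark.rpc.askTimeout    600s
spark.network.timeout   600s
```

If the timeouts keep being hit even at larger values, the driver or ApplicationMaster is likely overloaded or garbage-collecting heavily, and the timeout is a symptom rather than the cause.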
12-27-2018
05:00 AM
Hi Bhuv, I am connecting from Beeline, and I am logging in as a normal user, not as root. Thanks, Priya
12-23-2018
08:02 PM
Hi Bhuv, I added the parameter below to the configuration file:
<property>
  <name>hive.server2.authentication</name>
  <value>NOSASL</value>
</property>
and now I am getting errors like the following in the log:
ERROR server.TThreadPoolServer: Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Missing version in readMessageBegin, old client?
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:228)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27)
at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Please suggest.
Thanks, Priya
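The "Missing version in readMessageBegin, old client?" error typically means a client is still opening a SASL-framed connection while the server, after the NOSASL change, expects a raw Thrift binary protocol. With Beeline, the JDBC URL has to request noSasl explicitly; host and port below are placeholders:

```
beeline -u "jdbc:hive2://<hiveserver2-host>:10000/default;auth=noSasl"
```

Note that every client connecting to this server (not just Beeline) must make the equivalent change, or the server log will keep filling with this error.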
12-13-2018
03:23 AM
Hi, thanks for the reply. Now I am getting errors like the following:
ERROR curator.ConnectionState: Connection timed out for connection string (zookeeper servers:2181) and timeout (15000) / elapsed (564307)
org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.curator.ConnectionState.checkTimeouts(ConnectionState.java:198)
at org.apache.curator.ConnectionState.getZooKeeper(ConnectionState.java:88)
at org.apache.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:115)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:793)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:779)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$400(CuratorFrameworkImpl.java:58)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:265)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
followed by:
ERROR imps.CuratorFrameworkImpl: Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:695)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:496)
at org.apache.curator.framework.imps.BackgroundSyncImpl$1.processResult(BackgroundSyncImpl.java:50)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:609)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
and then:
ERROR cluster.YarnClientSchedulerBackend: Yarn application has already exited with state FAILED!
Please suggest.
Thanks, Priya
12-10-2018
01:59 AM
Hello Bhuv, thanks for your reply. I do have the parameter below in hive-site.xml:
<property>
  <name>hive.server2.use.SSL</name>
  <value>false</value>
</property>
Do I need to add the hive.server2.authentication parameter as well? In the log, I also see the error below:
ERROR server.TransportRequestHandler: Error sending result RpcResponse{requestId=7118238470485701424, body=NioManagedBuffer{buf=java.nio.HeapByteBuffer[pos=0 lim=47 cap=47]}} to server, closing connection
Can you please suggest?
Thanks,
12-06-2018
05:13 AM
Hello,
We are running a CDH 5.9.2 cluster. Kerberos is not enabled in our cluster, and SSL/TLS is not enabled for Hive. Users connect to Hive through Power BI. We are using the Spark Thrift service to run Hive queries. Our Spark Thrift service stopped; in the log I see the error below:
ERROR server.TThreadPoolServer: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:328)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 4 more
Please help.
Thanks,
Priya
Labels:
- Apache Hive
- Apache Spark
- Cloudera Manager
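A "No data or no sasl data in the stream" error generally indicates a client opening a plain (non-SASL) connection to a server that expects SASL, so the server's authentication mode and the clients' connection settings have to agree. A hive-site.xml fragment like the one below is the relevant knob; the value shown is for illustration only (NONE still uses SASL with the PLAIN mechanism, while NOSASL disables SASL entirely):

```xml
<!-- hive-site.xml: must match how clients, e.g. the Power BI ODBC driver, connect -->
<property>
  <name>hive.server2.authentication</name>
  <value>NONE</value>
</property>
```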
10-29-2018
03:23 AM
Hello All, we are running a Spark application, and it fails frequently. In the log I see the message below:
exitCode: 11, (reason: Max number of executor failures (24) reached)
And the executors are failing with the error below:
Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_e14_15320282824
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:601)
at org.apache.hadoop.util.Shell.run(Shell.java:504)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:786)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Is there any limit on the number of executor failures for a Spark application? I have specified the number of executors as 12, but I don't see such a parameter in Cloudera Manager. Please suggest. As per my understanding, the executors are failing due to insufficient memory, and once the failure count reaches the maximum limit, the application is killed. We would need to increase executor memory in this case. Kindly help. Thanks, Priya
Labels:
- Cloudera Manager
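On YARN, the executor-failure limit comes from Spark itself, not Cloudera Manager: spark.yarn.max.executor.failures defaults to twice the number of requested executors, which matches the 24 in the log for 12 executors. If the underlying cause really is memory pressure, it can be raised alongside the executor memory with settings like these (values illustrative, not recommendations):

```properties
# spark-defaults.conf -- illustrative values
spark.yarn.max.executor.failures   48
spark.executor.memory              4g
```

Raising the failure limit only buys time, though; the container exit code and the executor logs are the place to confirm whether memory is actually the cause before tuning.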