Member since: 08-07-2017
Posts: 144
Kudos Received: 3
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1262 | 03-05-2019 12:48 AM
 | 7758 | 11-06-2017 07:28 PM
05-08-2019
12:02 AM
@cjervis, I have a small doubt. What is the certification name: CCAH, CCA131, or CCA500? Thanks,
03-05-2019
12:48 AM
1 Kudo
@cjervis, I forgot to run FLUSH PRIVILEGES after granting the privileges. I did that and it worked. Thanks,
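For anyone who lands here with the same logon error, a minimal sketch of the sequence that worked, assuming a MySQL backend (database, user, and password names below are placeholders, not the actual values used in this cluster):

```sql
-- Create the Sentry database and service account (names are illustrative)
CREATE DATABASE sentry;
CREATE USER 'sentry_user'@'%' IDENTIFIED BY 'sentry_password';
GRANT ALL PRIVILEGES ON sentry.* TO 'sentry_user'@'%';

-- This was the missing step: reload the grant tables so the new
-- privileges take effect for incoming logons.
FLUSH PRIVILEGES;
```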
02-27-2019
08:36 PM
Hello All, the issue is resolved. Thanks,
02-26-2019
11:19 PM
Hello All,
I am trying to add the Sentry service to our Cloudera CDH 5.12 cluster.
As part of that, I created a database and a user, and granted the user all privileges on the Sentry database.
But I am getting the error below:
Logon denied for user/password. Able to find the database server and database, but logon request was rejected.
Please suggest.
Thanks,
Labels:
- Apache Sentry
- Cloudera Manager
02-13-2019
08:18 PM
@Bimalc, thanks for clarifying my doubt. I have one more question: when we use Flume, will physical memory usage increase on the Flume agent host? Thanks,
02-13-2019
03:55 AM
Hello All,
As far as I know, Flume does not require significant additional physical memory while running, beyond what is used during its installation and configuration.
Flume is just used as a channel for data ingestion; Kafka stores the data in the cluster, whereas Flume doesn't.
Please confirm.
Thanks,
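One caveat to the statement above: a Flume agent's footprint depends on its channel type. A memory channel buffers in-flight events in the agent's JVM heap, so its capacity does drive physical memory use on the agent host, while a file channel spools to disk instead. A minimal illustrative agent definition (agent and component names are made up):

```
# Hypothetical agent "a1": netcat source -> memory channel -> logger sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1

a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
a1.sources.r1.channels = c1

# The memory channel holds up to "capacity" events in heap; raising it
# raises the agent's physical memory ceiling accordingly.
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 1000

a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1
```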
Labels:
- Apache Flume
- Cloudera Manager
02-11-2019
04:35 AM
Hello All,
I have a Hadoop cluster set up with MIT Kerberos enabled, and the cluster has basic services like HDFS, YARN, ZooKeeper, Oozie, and Hive.
I tried to add services like HBase, Flume, and Sqoop, and they were added successfully.
My question: after enabling Kerberos, is there really no difference when adding a service through Cloudera Manager?
Please suggest.
Thanks,
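For what it's worth, when Cloudera Manager manages the Kerberos credentials, it generates the principals and keytabs for newly added services automatically, which is why the add-service flow looks unchanged. A quick sanity check on the new service's host (keytab paths, principal, and realm below are placeholders):

```
# List the entries in the keytab CM deployed for the new role
# (the process directory name varies per role instance)
klist -kt /var/run/cloudera-scm-agent/process/<nnn>-hbase-MASTER/hbase.keytab

# Confirm the principal can actually authenticate against the KDC
kinit -kt /path/to/hbase.keytab hbase/host01.example.com@EXAMPLE.COM
klist
```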
Labels:
- Cloudera Manager
- Kerberos
02-06-2019
02:21 AM
Hello,
We are running the Spark Thrift service to run Hive queries. We are getting the error below from the Spark Thrift service, and it stops after that.
19/02/05 23:26:45 ERROR yarn.ApplicationMaster: Uncaught exception:
org.apache.spark.rpc.RpcTimeoutException: Cannot receive any reply in 120 seconds. This timeout is controlled by spark.rpc.askTimeout
at org.apache.spark.rpc.RpcTimeout.org$apache$spark$rpc$RpcTimeout$$createRpcTimeoutException(RpcTimeout.scala:48)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:63)
at org.apache.spark.rpc.RpcTimeout$$anonfun$addMessageIfTimeout$1.applyOrElse(RpcTimeout.scala:59)
at scala.runtime.AbstractPartialFunction.apply(AbstractPartialFunction.scala:33)
at scala.util.Failure$$anonfun$recover$1.apply(Try.scala:185)
at scala.util.Try$.apply(Try.scala:161)
at scala.util.Failure.recover(Try.scala:185)
at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
at scala.concurrent.Future$$anonfun$recover$1.apply(Future.scala:324)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at org.spark-project.guava.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:293)
at scala.concurrent.impl.ExecutionContextImpl$$anon$1.execute(ExecutionContextImpl.scala:133)
at scala.concurrent.impl.CallbackRunnable.executeWithValue(Promise.scala:40)
at scala.concurrent.impl.Promise$DefaultPromise.tryComplete(Promise.scala:248)
at scala.concurrent.Promise$class.complete(Promise.scala:55)
at scala.concurrent.impl.Promise$DefaultPromise.complete(Promise.scala:153)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.Future$$anonfun$map$1.apply(Future.scala:235)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
Can you please help me understand why this happens?
Thanks,
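Not a root-cause fix, but the timeout named in the message is configurable; if the application master is merely slow to reply under load, raising it can keep the service up while you investigate. A sketch (values are illustrative, not a recommendation):

```
# spark-defaults.conf, or pass as --conf flags when starting the Thrift server.
# Values below are examples only.
spark.rpc.askTimeout 600s
spark.network.timeout 600s
```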
12-27-2018
09:38 PM
Hi satz, thanks for the inputs. The current Xmx heap size is 3 GB. I will increase it to 5 GB and observe whether we still face the same error. Thanks, Priya
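For reference, one way the 5 GB heap could be applied, assuming the Thrift server is started with the stock script: the Spark Thrift Server runs inside the driver JVM, so driver memory sets its Xmx.

```
# Illustrative invocation; flags pass through to spark-submit
$SPARK_HOME/sbin/start-thriftserver.sh \
  --master yarn \
  --driver-memory 5g
```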
12-27-2018
09:17 PM
Hello satz, thanks for the inputs. The current heap size is 3 GB. I will increase it to 5 GB and observe for errors. Thanks, Priya
12-27-2018
05:03 AM
Hello All,
We are using the Spark Thrift service to run Hive queries. For one of the queries we are getting an OOM error: GC overhead limit exceeded.
The heap size of the ResourceManager and NodeManager is 1 GB each.
Please suggest.
Thanks,
Priya
Labels:
- Apache Hive
- Apache Spark
- Cloudera Manager
12-27-2018
05:00 AM
Hi Bhuv, I am connecting from beeline, and I am logging in as a normal user, not as the root user. Thanks, Priya
12-23-2018
08:02 PM
Hi Bhuv, I added the parameter below to the configuration file:

<property>
  <name>hive.server2.authentication</name>
  <value>NOSASL</value>
</property>

Now I am getting errors like the following in the log:

ERROR server.TThreadPoolServer: Thrift error occurred during processing of message.
org.apache.thrift.protocol.TProtocolException: Missing version in readMessageBegin, old client?
at org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:228)
at org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:27)
at org.apache.hive.service.auth.TSetIpAddressProcessor.process(TSetIpAddressProcessor.java:56)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:285)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Please suggest. Thanks, Priya
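In case it helps: "Missing version in readMessageBegin, old client?" is typically what a NOSASL server logs when a client still opens a SASL connection, so the client URL has to opt out of SASL as well. An illustrative beeline connection (host and port are placeholders):

```
beeline -u "jdbc:hive2://thrift-host:10000/default;auth=noSasl"
```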
12-16-2018
11:06 PM
Hi Satz, thanks for the inputs. Currently I don't see any OutOfMemory or GC overhead messages. I see messages like the one I mentioned above, and in addition messages like the one below:

ERROR server.TransportRequestHandler: Error sending result RpcResponse{requestId=6993342906751026461, body=NioManagedBuffer{buf=java.nio.HeapByteBuffer[pos=0 lim=81 cap=81]}} to datanodes; closing connection

Thanks, Priya
12-13-2018
04:25 AM
Hi, we are also having the same issue. Can you please advise? Thanks,
12-13-2018
03:27 AM
Hi satz, thanks for the inputs. Now we are getting the errors below for the Spark Thrift service.

ERROR curator.ConnectionState: Connection timed out for connection string (zookeeper servers:2181) and timeout (15000) / elapsed (654234)
org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.curator.ConnectionState.checkTimeouts(ConnectionState.java:198)
at org.apache.curator.ConnectionState.getZooKeeper(ConnectionState.java:88)
at org.apache.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:115)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:793)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:779)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$400(CuratorFrameworkImpl.java:58)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:265)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

and

ERROR imps.CuratorFrameworkImpl: Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:695)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:496)
at org.apache.curator.framework.imps.BackgroundSyncImpl$1.processResult(BackgroundSyncImpl.java:50)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:609)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)

and

ERROR cluster.YarnClientSchedulerBackend: Yarn application has already exited with state FAILED!

Please suggest. Thanks, Priya
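Since the Curator errors point at ZooKeeper connectivity rather than Spark itself, a first check could be probing each ZooKeeper server in the connection string directly (hostname below is a placeholder):

```
# ZooKeeper four-letter-word probes; a healthy server answers "imok"
echo ruok | nc zk-host-1 2181

# "stat" reports mode (leader/follower), latency, and connection counts
echo stat | nc zk-host-1 2181
```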
12-13-2018
03:23 AM
Hi, thanks for the reply. Now I am getting errors like the ones below.

ERROR curator.ConnectionState: Connection timed out for connection string (zookeeper servers:2181) and timeout (15000) / elapsed (564307)
org.apache.curator.CuratorConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.curator.ConnectionState.checkTimeouts(ConnectionState.java:198)
at org.apache.curator.ConnectionState.getZooKeeper(ConnectionState.java:88)
at org.apache.curator.CuratorZookeeperClient.getZooKeeper(CuratorZookeeperClient.java:115)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.performBackgroundOperation(CuratorFrameworkImpl.java:793)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.backgroundOperationsLoop(CuratorFrameworkImpl.java:779)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.access$400(CuratorFrameworkImpl.java:58)
at org.apache.curator.framework.imps.CuratorFrameworkImpl$4.call(CuratorFrameworkImpl.java:265)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

and

ERROR imps.CuratorFrameworkImpl: Background operation retry gave up
org.apache.zookeeper.KeeperException$ConnectionLossException: KeeperErrorCode = ConnectionLoss
at org.apache.zookeeper.KeeperException.create(KeeperException.java:99)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.checkBackgroundRetry(CuratorFrameworkImpl.java:695)
at org.apache.curator.framework.imps.CuratorFrameworkImpl.processBackgroundOperation(CuratorFrameworkImpl.java:496)
at org.apache.curator.framework.imps.BackgroundSyncImpl$1.processResult(BackgroundSyncImpl.java:50)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:609)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)

and then

ERROR cluster.YarnClientSchedulerBackend: Yarn application has already exited with state FAILED!

Please suggest. Thanks, Priya
12-10-2018
01:59 AM
Hello Bhuv, thanks for your reply. I do have the parameter below in hive-site.xml:

<property>
  <name>hive.server2.use.SSL</name>
  <value>false</value>
</property>

Do I need to add the hive.server2.authentication parameter as well? In the log I also see the error below:

ERROR server.TransportRequestHandler: Error sending result RpcResponse{requestId=7118238470485701424, body=NioManagedBuffer{buf=java.nio.HeapByteBuffer[pos=0 lim=47 cap=47]}} to server, closing connection

Can you please suggest? Thanks,
12-06-2018
05:13 AM
Hello,
We are running a CDH 5.9.2 cluster. Kerberos is not enabled in our cluster, and SSL/TLS is not enabled for Hive. Users connect to Hive through Power BI. We use the Spark Thrift service to run Hive queries. Our Spark Thrift service stopped; in the log I see the error below:
ERROR server.TThreadPoolServer: Error occurred during processing of message.
java.lang.RuntimeException: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:219)
at org.apache.thrift.server.TThreadPoolServer$WorkerProcess.run(TThreadPoolServer.java:269)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.thrift.transport.TSaslTransportException: No data or no sasl data in the stream
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:328)
at org.apache.thrift.transport.TSaslServerTransport.open(TSaslServerTransport.java:41)
at org.apache.thrift.transport.TSaslServerTransport$Factory.getTransport(TSaslServerTransport.java:216)
... 4 more
Please help.
Thanks,
Priya
Labels:
- Apache Hive
- Apache Spark
- Cloudera Manager
11-28-2018
12:43 AM
Hello, we have a CDH 5.9.2 and CM 5.9.2 Cloudera Hadoop cluster. We want to upgrade CDH and CM, but our Cloudera Enterprise license has expired. We don't have internet connectivity on our servers, so we need to download everything first. Can we still do the upgrade? We are planning to upgrade CM from 5.9.2 to 5.12.2 first, and then CDH from 5.9.2 to 5.12.2. Please suggest. Thanks, Priya
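For the offline part, the usual pattern is to stage the parcel in Cloudera Manager's local repository by hand. A sketch, with the parcel file name being illustrative (pick the one matching your OS from archive.cloudera.com):

```
# On a host with internet access, fetch the parcel and its checksum
wget https://archive.cloudera.com/cdh5/parcels/5.12.2/CDH-5.12.2-1.cdh5.12.2.p0.4-el7.parcel
wget https://archive.cloudera.com/cdh5/parcels/5.12.2/CDH-5.12.2-1.cdh5.12.2.p0.4-el7.parcel.sha1

# CM looks for the checksum under a .sha extension
mv CDH-5.12.2-1.cdh5.12.2.p0.4-el7.parcel.sha1 CDH-5.12.2-1.cdh5.12.2.p0.4-el7.parcel.sha

# Copy both to the CM server's local parcel repo; the parcel then appears
# under Hosts > Parcels for distribution and activation
cp CDH-5.12.2-1.cdh5.12.2.p0.4-el7.parcel* /opt/cloudera/parcel-repo/
```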
Labels:
- Cloudera Manager
11-23-2018
02:14 AM
Hello All,
We are running a Spark application, and its memory usage increases continuously after every run. I think it is because of a memory leak. Can you please provide pointers for troubleshooting?
Thanks,
Priya
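A generic starting point for a suspected leak, independent of this particular application, is to snapshot the JVM heap between runs and compare the dominant object types (the PID and file name below are placeholders):

```
# Identify the suspect driver/executor JVM
jps -lm

# Top object types currently live on the heap
jmap -histo:live <pid> | head -30

# Full dump for offline comparison in a heap analyzer such as Eclipse MAT
jmap -dump:live,format=b,file=run1.hprof <pid>
```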
Labels:
- Apache Spark
- Cloudera Manager
11-23-2018
02:11 AM
Hi ramanhopes, I had already done that and it resolved the issue. Thanks for the inputs.
11-13-2018
10:12 PM
Hello All,
The HBase REST server is going down due to an OOM error; until now it was working fine. I am new to HBase. As per my understanding, the .hprof file is generated when the server throws the OOM error, and possible causes may be compaction of HFiles or heavy writes.
Can anyone please help me find the reason?
Thanks,
Priya
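If the .hprof confirms plain heap exhaustion rather than a leak, a common first mitigation is giving the REST server more heap. A sketch via hbase-env.sh (the 4g value is illustrative; on a CM-managed cluster the same thing is set through the REST Server's Java heap configuration):

```
# hbase-env.sh -- applies only to the REST server process, not other daemons
export HBASE_REST_OPTS="$HBASE_REST_OPTS -Xmx4g -XX:+HeapDumpOnOutOfMemoryError"
```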
Labels:
- Apache HBase
- Cloudera Manager
10-29-2018
03:23 AM
Hello All, we are running a Spark application and it is failing frequently. In the log I see the message below:

exitCode: 11, (reason: Max number of executor failures (24) reached)

And the executors are failing with the error below:

Exit status: 1. Diagnostics: Exception from container-launch.
Container id: container_e14_15320282824
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
at org.apache.hadoop.util.Shell.runCommand(Shell.java:601)
at org.apache.hadoop.util.Shell.run(Shell.java:504)
at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:786)
at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:213)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

Is there any limit on the number of executor failures for a Spark application? I have specified the number of executors as 12, but I don't see such a parameter in Cloudera Manager. As per my understanding, executors are failing due to insufficient memory, and once the failure count reaches the maximum limit, the application is killed; we would need to increase executor memory in this case. Please suggest. Thanks, Priya
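There is indeed such a limit on YARN: spark.yarn.max.executor.failures, which defaults to twice the requested executor count (minimum 3); that is consistent with 12 executors yielding the cap of 24 seen in the log. A sketch of adjusting both it and executor memory (values are illustrative):

```
# spark-defaults.conf, or --conf flags on spark-submit; values are examples
spark.executor.memory 4g
spark.yarn.max.executor.failures 48
```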
Labels:
- Cloudera Manager
10-12-2018
04:16 AM
Hello All, I observed in our cluster environment that one job sometimes takes 100 GB of memory and sometimes only 2 GB (even though memory is available). Please suggest. Thanks, Priya
Labels:
- Cloudera Manager
09-27-2018
03:52 AM
@Harsh J, thanks for the valuable inputs. So after setting the checkpoint value to 1 hour, files will be deleted after at most 1 day + 1 hour. Thanks, Priya
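For reference, these Cloudera Manager settings map to two HDFS properties (values in minutes). A core-site.xml sketch mirroring the thread, a 1-day trash interval with an hourly checkpoint:

```
<!-- How long a trash checkpoint is retained before deletion: 1 day -->
<property>
  <name>fs.trash.interval</name>
  <value>1440</value>
</property>

<!-- How often a new checkpoint is rolled: 1 hour -->
<property>
  <name>fs.trash.checkpoint.interval</name>
  <value>60</value>
</property>
```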
09-26-2018
09:24 PM
@Harsh J, we are using Cloudera Manager version 5.9.2. I don't see the Filesystem Trash Checkpoint Interval parameter in Cloudera Manager. The value of Filesystem Trash Interval is 1 day. Please suggest. Thanks, Priya
09-26-2018
09:10 PM
@Harsh J, we are using Cloudera Manager version 5.9.2. The value of "Filesystem Trash Checkpoint Interval" is 1 day.
09-26-2018
01:44 AM
Hi,
We have enabled trash in our Hadoop cluster, and the trash interval is 1 day.
But we still see hundreds of thousands of files in trash.
Can you please suggest ?
Thanks,
Priya
Labels:
- Cloudera Manager
- HDFS