Member since: 06-27-2017
Posts: 4
Kudos Received: 1
Solutions: 1

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 21045 | 11-02-2017 04:55 AM
09-19-2018 10:57 AM
1 Kudo
Our Hive version is 1.1.0-cdh5.14.2, but uncorrelated subqueries in the WHERE clause are still not working, even though https://issues.apache.org/jira/browse/HIVE-784 indicates they should be supported. Can you please help? Thanks!
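For context, this is the kind of statement involved; the table and column names here are made up purely for illustration:

```sql
-- An uncorrelated IN subquery in the WHERE clause (the construct tracked by HIVE-784).
-- The inner query does not reference any column of the outer query.
SELECT o.order_id, o.amount
FROM orders o
WHERE o.customer_id IN (SELECT c.customer_id
                        FROM customers c
                        WHERE c.active = true);
```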
Labels:
- Apache Hive
11-20-2017 03:35 AM
Yes. Just in the HQL file; nothing in any XML file.
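For anyone following along, a minimal sketch of what that looks like: the engine and memory settings sit at the top of the .hql script itself rather than in hive-site.xml or any other XML file. File name, query, and values below are only illustrative:

```sql
-- query.hql (hypothetical file name); run with: hive -f query.hql
set hive.execution.engine=spark;
set spark.executor.memory=4g;

-- placeholder query; the session settings above apply to it
SELECT COUNT(*) FROM some_table;
```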
11-02-2017 04:55 AM
The error was a configuration issue. We need to either lower the executor memory (spark.executor.memory) and the executor memory overhead (spark.yarn.executor.memoryOverhead), or increase the maximum memory allocation (yarn.scheduler.maximum-allocation-mb and yarn.nodemanager.resource.memory-mb). A useful reference is http://blog.cloudera.com/blog/2015/03/how-to-tune-your-apache-spark-jobs-part-2/

We tried the various combinations, and the following properties gave the best result in our cluster:
set hive.execution.engine=spark;
set spark.executor.memory=4g;
set yarn.nodemanager.resource.memory-mb=12288;
set yarn.scheduler.maximum-allocation-mb=2048;
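For what it's worth, the general sizing rule behind these knobs (as I understand it from the blog post above) is that an executor's YARN container request is roughly spark.executor.memory plus spark.yarn.executor.memoryOverhead, and that request has to fit under yarn.scheduler.maximum-allocation-mb, which itself cannot exceed yarn.nodemanager.resource.memory-mb. A rough worked example with hypothetical numbers, not the values from our cluster above:

```sql
-- Hypothetical sizing sketch, for illustration only:
--   spark.executor.memory              = 4096 MB
--   spark.yarn.executor.memoryOverhead ~ max(384 MB, 10% of 4096 MB) ~ 410 MB (default if unset)
--   container request                  ~ 4096 + 410 = 4506 MB
-- For YARN to grant the container:
--   container request <= yarn.scheduler.maximum-allocation-mb <= yarn.nodemanager.resource.memory-mb
set spark.executor.memory=4g;
set spark.yarn.executor.memoryOverhead=512;
set yarn.scheduler.maximum-allocation-mb=6144;
set yarn.nodemanager.resource.memory-mb=12288;
```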
10-23-2017 05:19 AM
Hi All, we are getting the following error while executing Hive queries with the Spark engine:

Failed to execute spark task, with exception 'org.apache.hadoop.hive.ql.metadata.HiveException(Failed to create spark client.)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask

The following properties are set to use Spark as the execution engine instead of MapReduce:
set hive.execution.engine=spark;
set spark.executor.memory=2g;

I also tried changing the following properties:
set yarn.scheduler.maximum-allocation-mb=2048;
set yarn.nodemanager.resource.memory-mb=2048;
set spark.executor.cores=4;
set spark.executor.memory=4g;
set spark.yarn.executor.memoryOverhead=750;
set hive.spark.client.server.connect.timeout=900000ms;
Labels:
- Apache Hive
- Apache Spark