05-17-2016 02:42 AM
When you say it is not working, what issue does it exhibit?

For Hive on Spark you only need to set the execution engine within Hive from MapReduce to Spark. You do, however, need to consider the Spark executor memory settings in the Spark service, and these must correlate with the YARN container memory settings. Generally I set the following YARN container settings to the same value, and greater than the Spark executor memory plus overhead:

yarn.nodemanager.resource.memory-mb
yarn.scheduler.maximum-allocation-mb

Also check the YARN logs for an error similar to the following:

15/09/17 11:15:09 INFO yarn.Client: Verifying our application has not requested more than the maximum memory capability of the cluster (2211 MB per container)
Exception in thread "main" java.lang.IllegalArgumentException: Required executor memory (2048+384 MB) is above the max threshold (2211 MB) of this cluster!

Regards
Shailesh
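As a minimal sketch of how these pieces fit together (assuming the standard property names for Hive on Spark and YARN in that era of CDH; the 4g/512 MB/5120 MB values are illustrative, not recommendations):

In the Hive session (beeline or Hive CLI), switch the execution engine from MapReduce to Spark:
    SET hive.execution.engine=spark;

Spark executor sizing (spark-defaults.conf or the Spark service configuration; example values):
    spark.executor.memory=4g
    spark.yarn.executor.memoryOverhead=512

YARN container limits (yarn-site.xml), both set to the same value and larger than executor memory + overhead (4096 + 512 = 4608 MB here):
    yarn.nodemanager.resource.memory-mb=5120
    yarn.scheduler.maximum-allocation-mb=5120

With these example numbers the requested 4608 MB fits inside the 5120 MB container limit, which is exactly the check the "Required executor memory ... is above the max threshold" exception in the log excerpt above is enforcing.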