
spark-submit in yarn-cluster mode gets OOM in the driver with HiveContext


I am submitting Spark SQL Scala code in yarn-cluster mode and getting an OOM exception in the driver.

Command used:

spark-submit --class Test.App --verbose --master yarn-cluster \
  --num-executors 2 --driver-memory 5000m --executor-memory 5000m \
  --executor-cores 2 --driver-cores 2 \
  --conf spark.yarn.driver.memoryOverhead=1024 \
  --conf spark.driver.maxResultSize=5g \
  --driver-java-options "-XX:MaxPermSize=1000m" \
  --conf spark.yarn.jar=hdfs://hdfspath/oozie/spark-assembly- \
  --jars hdfs://hdfspath/oozie/datanucleus-api-jdo-3.2.6.jar,hdfs://hdfs//oozie/datanucleus-core-3.2.10.jar,hdfs://hdfspath/datanucleus-rdbms-3.2.9.jar,hdfs://hdfs/oozie/mysql-connector-java.jar,hdfs://hdfspath/share/lib/hive/tez-api-,hdfs://hdfspath/share/lib/hive/tez-dag- \
  --conf spark.driver.extraJavaOptions="-XX:MaxPermSize=1120m",hive.metastore.uris=thrift://testip:9083,hive.server2.thrift.http.port=10001,hive.server2.thrift.port=10000 \
  --driver-java-options "-Djavax.jdo.option.ConnectionURL=jdbc:mysql://testip/hive?createDatabaseIfNotExist=true -Dhive.metastore.uris=thrift://testip:9083 -Dhive.server2.thrift.port=10000 -Dhive.metastore.warehouse.dir=/apps/hive/warehouse" \
  --files hdfs://hdfspath/oozie/hive-tez-site.xml \
  --driver-class-path hive-tez-site.xml \
  hdfs://hdfspath/oozie/Test.jar 2016-04-11

Error details:

16/04/12 18:25:19 INFO hive.HiveContext: default warehouse location is /apps/hive/warehouse
16/04/12 18:25:19 INFO hive.HiveContext: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
16/04/12 18:25:19 INFO client.ClientWrapper: Inspected Hadoop version: 2.2.0
16/04/12 18:25:19 INFO client.ClientWrapper: Loaded org.apache.hadoop.hive.shims.Hadoop23Shims for Hadoop version 2.2.0
16/04/12 18:25:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/04/12 18:25:20 INFO hive.metastore: Trying to connect to metastore with URI thrift://
16/04/12 18:25:20 INFO hive.metastore: Connected to metastore.
Exception in thread "Driver" 
Exception: java.lang.OutOfMemoryError thrown from the UncaughtExceptionHandler in thread "Driver"
16/04/12 18:25:24 INFO spark.SparkContext: Invoking stop() from shutdown hook
16/04/12 18:25:24 INFO history.YarnHistoryService: Application end event: SparkListenerApplicationEnd(1460499924685)

The same code works fine when submitted with spark-submit in yarn-client mode.

I get this exception only when using HiveContext.

Thanks in advance.



Hello Nelson

Instead of setting the Hive info through individual properties, could you try adding the hive-site.xml (--files /etc/hive/conf/hive-site.xml) just to make sure everything is consistent? Without it, Spark could launch an embedded metastore, causing the out-of-memory condition.
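A minimal sketch of what that would look like on the submit side (the class name and paths here just mirror the original post and a standard HDP layout; adjust for your cluster):

```shell
# Ship the client hive-site.xml with the job so the driver running on the
# cluster sees the real metastore config instead of starting an embedded one.
spark-submit --class Test.App \
  --master yarn-cluster \
  --files /etc/hive/conf/hive-site.xml \
  --driver-class-path hive-site.xml \
  hdfs://hdfspath/oozie/Test.jar
```

Note that --files uploads the file into each container's working directory, which is why the --driver-class-path entry is the bare file name rather than the full client-side path.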

Could you also share a bit about the app: what type of data (ORC, CSV, etc.) and the size of the table?

Let's see if this helps.


Thanks for your response.

I tried with the parameters below and am still getting the same OOM.

--files /etc/hive/conf/hive-site.xml --files /etc/tez/conf/tez-site.xml --driver-class-path hive-site.xml,tez-site.xml

--conf spark.driver.extraJavaOptions="-XX:MaxPermSize=1120m" --driver-java-options "-Djavax.jdo.option.ConnectionURL=jdbc:mysql://testip/hive?createDatabaseIfNotExist=true -Dhive.metastore.uris=thrift://testip:9083 "

Table details:

File Format : Text File / External Table

Size : 10 MB


Hello Nelson, I don't think you need the Hive configuration explicitly set anymore, i.e. this part:

"-Djavax.jdo.option.ConnectionURL=jdbc:mysql://testip/hive?createDatabaseIfNotExist=true -Dhive.metastore.uris=thrift://testip:9083 "


Spark allocates memory based on option parameters, which can be passed in multiple ways:

1) via the command-line (as you do)

2) via programmatic instructions

3) via the "spark-defaults.conf" file in the "conf" directory under your $SPARK_HOME
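As a sketch, the same driver settings could equally live in spark-defaults.conf (the values below are simply copied from the command line in the question; the keys are the standard Spark 1.x property names):

```
# $SPARK_HOME/conf/spark-defaults.conf -- equivalent to the command-line flags
spark.driver.memory              5000m
spark.driver.cores               2
spark.yarn.driver.memoryOverhead 1024
spark.driver.maxResultSize       5g
spark.driver.extraJavaOptions    -XX:MaxPermSize=1000m
```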

Second, there are separate config params for the driver and the executors. This is important, because the main difference between "yarn-client" and "yarn-cluster" mode is where the Driver lives (either on the client, or on cluster within the AppMaster). Therefore, we should look at your driver config parameters.

It looks like these are your driver-related options from the command-line:

--driver-memory 5000m 
--driver-cores 2 
--conf spark.yarn.driver.memoryOverhead=1024 
--conf spark.driver.maxResultSize=5g 
--driver-java-options "-XX:MaxPermSize=1000m"

It is possible that the AppMaster is running on a node that does not have enough memory to support your option requests, e.g. that the sum of driver-memory (5G) and PermSize (1G), plus overhead (1G) does not fit on the node. I would try lowering the --driver-memory by 1G steps until you no longer get the OOM error.
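As a rough back-of-the-envelope check of that sum (this ignores rounding to yarn.scheduler.minimum-allocation-mb, so it is approximate):

```python
# Approximate driver-side memory the AppMaster's node must accommodate
# in yarn-cluster mode, using the values from the command line above.
driver_memory_mb = 5000  # --driver-memory 5000m (JVM heap)
yarn_overhead_mb = 1024  # spark.yarn.driver.memoryOverhead
max_perm_size_mb = 1000  # -XX:MaxPermSize=1000m (PermGen, outside the heap)

total_mb = driver_memory_mb + yarn_overhead_mb + max_perm_size_mb
print(total_mb)  # 7024 -- roughly 7 GB must be free on the node
```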


Thanks for your response.

Even after lowering all the memory settings to 1 GB, I still got the same OOM error.

The issue happens only with HiveContext; the code works fine with SparkContext and SQLContext.

Command:

--driver-memory 1g --executor-memory 1g --executor-cores 2 --driver-cores 2 --conf spark.yarn.driver.memoryOverhead=200 --driver-java-options "-XX:MaxPermSize=128m"


Thank you all.

Resolved the issue by changing the --files entry from /etc/hive/conf/hive-site.xml to /usr/hdp/current/spark-client/conf/hive-site.xml:

--files /usr/hdp/current/spark-client/conf/hive-site.xml,/etc/tez/conf/tez-site.xml

Working command:

spark-submit --class Test.App --verbose --master yarn-cluster \
  --num-executors 2 --driver-memory 4g --executor-memory 4g \
  --executor-cores 2 --driver-cores 2 \
  --conf spark.yarn.jar=hdfs://hdfspath/oozie/spark-assembly- \
  --files /usr/hdp/current/spark-client/conf/hive-site.xml,/etc/tez/conf/tez-site.xml \
  --jars hdfs://hdfspath/oozie/datanucleus-api-jdo-3.2.6.jar,hdfs://hdfs//oozie/datanucleus-core-3.2.10.jar,hdfs://hdfspath/datanucleus-rdbms-3.2.9.jar,hdfs://hdfs/oozie/mysql-connector-java.jar,hdfs://hdfspath/share/lib/hive/tez-api-,hdfs://hdfspath/share/lib/hive/tez-dag- \
  hdfs://hdfspath/oozie/Test.jar 2016-04-11