
Spark Job Failing "Could not find or load main class org.apache.spark.deploy.yarn.ApplicationMaster"


Hi,

When I run the sample Spark job in client mode it executes fine, but when I run the same job in cluster mode it fails. May I know the reason?

Client mode:

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn-client --num-executors 1 --driver-memory 512m --executor-memory 512m --executor-cores 1 lib/spark-examples*.jar 10

Cluster mode:

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --num-executors 3 --driver-memory 4g --executor-memory 2g --executor-cores 1 lib/spark-examples*.jar 10

Error message:

yarn logs -applicationId <applicationnumber> output:

Container: container_1466521315275_0219_02_000001 on hostname.domain.com_45454
==========================================================================================
LogType:stderr
Log Upload Time:Fri Jun 24 14:11:39 -0500 2016
LogLength:88
Log Contents:
Error: Could not find or load main class org.apache.spark.deploy.yarn.ApplicationMaster
End of LogType:stderr
LogType:stdout
Log Upload Time:Fri Jun 24 14:11:39 -0500 2016
LogLength:0
Log Contents:
End of LogType:stdout

spark-defaults.conf file:

spark.driver.extraJavaOptions -Dhdp.version=2.3.2.0-2950
spark.history.kerberos.enabled true
spark.history.kerberos.keytab /etc/security/keytabs/spark.headless.keytab
spark.history.kerberos.principal spark-hdp@DOMAIN.COM
spark.history.provider org.apache.spark.deploy.yarn.history.YarnHistoryProvider
spark.history.ui.port 18080
spark.yarn.am.extraJavaOptions -Dhdp.version=2.3.2.0-2950
spark.yarn.containerLauncherMaxThreads 25
spark.yarn.driver.memoryOverhead 384
spark.yarn.executor.memoryOverhead 384
spark.yarn.historyServer.address sparkhistory.domain.com:18080
spark.yarn.max.executor.failures 3
spark.yarn.preserve.staging.files false
spark.yarn.queue default
spark.yarn.scheduler.heartbeat.interval-ms 5000
spark.yarn.services org.apache.spark.deploy.yarn.history.YarnHistoryService
spark.yarn.submit.file.replication 3

Any help is highly appreciated and thanks in advance.

1 ACCEPTED SOLUTION

Rising Star

@SBandaru

If you are using Spark with HDP, then you have to do the following:

  1. Add these entries in your $SPARK_HOME/conf/spark-defaults.conf

    spark.driver.extraJavaOptions -Dhdp.version=2.2.0.0-2041 (your installed HDP version)

    spark.yarn.am.extraJavaOptions -Dhdp.version=2.2.0.0-2041 (your installed HDP version)

  2. Create a java-opts file in $SPARK_HOME/conf and add the installed HDP version to that file, like:

    -Dhdp.version=2.2.0.0-2041 (your installed HDP version)

To find the installed HDP version, run the command hdp-select status hadoop-client on the cluster.
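As a concrete sketch of those steps on the cluster from this question (HDP build 2.3.2.0-2950, taken from the posted spark-defaults.conf; the /usr/hdp/current/spark-client path is an assumed install location):

# 1. Confirm the installed HDP version
hdp-select status hadoop-client        # prints something like: hadoop-client - 2.3.2.0-2950

# 2. Point the driver and AM JVM options at that version
echo "spark.driver.extraJavaOptions -Dhdp.version=2.3.2.0-2950" >> /usr/hdp/current/spark-client/conf/spark-defaults.conf
echo "spark.yarn.am.extraJavaOptions -Dhdp.version=2.3.2.0-2950" >> /usr/hdp/current/spark-client/conf/spark-defaults.conf

# 3. Create the java-opts file with the same version
echo "-Dhdp.version=2.3.2.0-2950" > /usr/hdp/current/spark-client/conf/java-opts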


14 REPLIES


I was able to run your example on the Hortonworks 2.4 Sandbox (a slightly newer version than your 2.3.2). However, it appears you have drastically increased the memory requirements between your two examples. You only allocate 512m to the driver and executor in "yarn-client" mode, but allocate 4g and 2g in the second example, and by requesting 3 executors you will need over 10 GB of RAM. Here is the command I actually ran to replicate the "cluster" deploy mode:

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --num-executors 1 --driver-memory 1024m --executor-memory 1024m --executor-cores 1 lib/spark-examples*.jar 10

... and here is the result in the Yarn application logs:

Log Type: stdout
Log Upload Time: Fri Jun 24 21:19:42 +0000 2016
Log Length: 23
Pi is roughly 3.142752

Therefore, it is possible your job was never submitted to the run queue because it requested too many resources. Please check in the ResourceManager UI that it was not stuck in the 'ACCEPTED' state.
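If it helps, the same check can be done from the command line with the standard YARN CLI (run on any node with the yarn client installed):

yarn application -list -appStates ACCEPTED

Any application listed there has been accepted but is still waiting for the scheduler to grant it containers.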


@Paul Hargis

Thanks for the quick response, and I appreciate you validating this on your machine. I'm not running in a sandbox; I'm getting this error on a cluster with 256 GB of RAM. Even the command below gives me the same error message:

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --num-executors 1 --driver-memory 1024m --executor-memory 1024m --executor-cores 1 lib/spark-examples*.jar 10


@Sri Bandaru

Okay, so now I'm wondering if you should include the Spark assembly jar; that is where the referenced class lives. Can you try adding this to your command line (assuming your current directory is the spark-client directory, or $SPARK_HOME for your installation):

--jars lib/spark-assembly-1.6.0.2.4.0.0-169-hadoop2.7.1.2.4.0.0-169.jar

Note: If running on HDP, you can use the soft-link to this file named "spark-hdp-assembly.jar"
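Putting that together, a cluster-mode submit with the assembly jar attached would look roughly like this (the assembly file name below matches the HDP 2.4 sandbox build mentioned earlier; swap in your own version, or the spark-hdp-assembly.jar soft-link, as appropriate):

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --num-executors 1 --driver-memory 1024m --executor-memory 1024m --executor-cores 1 --jars lib/spark-assembly-1.6.0.2.4.0.0-169-hadoop2.7.1.2.4.0.0-169.jar lib/spark-examples*.jar 10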


@Paul Hargis

Thanks Paul, no luck.

I'm using Spark 1.4 (HDP 2.3).

Rising Star

You can put the Spark assembly jar in a global location in HDFS (somewhere under hdfs:///).

Then set the spark.yarn.jar value in spark-defaults.conf to that assembly jar's HDFS path.
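A rough sketch of that approach (the HDFS directory here is just an illustrative choice, and the jar name is a placeholder for whatever assembly ships with your spark-client):

hdfs dfs -mkdir -p /apps/spark
hdfs dfs -put /usr/hdp/current/spark-client/lib/spark-assembly-*.jar /apps/spark/

and then in spark-defaults.conf:

spark.yarn.jar hdfs:///apps/spark/spark-assembly-<your version>.jar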

Super Guru

@Sri Bandaru

Since you are not running in a sandbox, what does --master yarn resolve to?

New Contributor

Sorry if this is obvious, but is Spark installed on all the cluster nodes? If that isn't the issue, try adding the Spark jars to the spark.executor.extraClassPath property in spark-defaults.conf (see the sketch below).
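For illustration, a minimal version of that setting in spark-defaults.conf might be (the lib path assumes the usual HDP spark-client layout; point it at wherever your Spark jars actually live on each node):

spark.executor.extraClassPath /usr/hdp/current/spark-client/lib/*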


@Adam Davidson

Thanks for the response. Yes, Spark is installed on all the machines.

./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --executor-memory 2g --num-executors 1 --driver-memory 1024m --executor-memory 1024m --files /usr/hdp/current/spark-client/conf/hive-site.xml --jars /usr/hdp/current/spark-client/lib/datanucleus-api-jdo-3.2.6.jar,/usr/hdp/current/spark-client/lib/datanucleus-rdbms-3.2.9.jar,/usr/hdp/current/spark-client/lib/datanucleus-core-3.2.10.jar lib/spark-examples*.jar 10

Even when I run the above command, it throws the same error.

Rising Star

Can you check your spark-env.sh file?

Make sure you set HADOOP_CONF_DIR and JAVA_HOME in spark-env.sh too.
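For reference, a minimal spark-env.sh with those two variables might look like this (both paths are assumptions for a typical HDP node; substitute the values from your own hosts):

export HADOOP_CONF_DIR=/etc/hadoop/conf
export JAVA_HOME=/usr/jdk64/jdk1.8.0_60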