Spark max number of executors to 1

Contributor

Hi everybody, I'm submitting jobs to a YARN cluster via SparkLauncher.
I'm on HDP 3.1.4.0.
Now, I'd like to have only 1 executor for each job I run (since I often find 2 executors per job), with the resources that I decide (provided, of course, those resources are available on a machine).
So I tried to add:

.setConf("spark.executor.instances", "1")
.setConf("spark.executor.cores", "3")

But even if I set spark.executor.instances to 1, I get 2 executors. Do you know why? (I read somewhere that the number of executors = spark.executor.instances * spark.executor.cores; I don't know if that's true, but it seems to match what I see.)
Is there a way to achieve my goal of having MIN and MAX 1 executor for each job?
Could it be achieved with dynamicAllocation? I'd prefer not to enable that, since it's not designed for this and does a lot of things I don't need. Thanks in advance!
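
For context, here is a minimal SparkLauncher sketch of this kind of setup; the application jar path, main class, and memory value are placeholders, and spark.dynamicAllocation.enabled=false is added on the assumption that dynamic allocation might otherwise be active on the cluster:

import org.apache.spark.launcher.SparkAppHandle;
import org.apache.spark.launcher.SparkLauncher;

public class SingleExecutorLaunch {
    public static void main(String[] args) throws Exception {
        SparkAppHandle handle = new SparkLauncher()
            .setAppResource("/path/to/my-job.jar")   // placeholder jar
            .setMainClass("com.example.MyJob")        // placeholder main class
            .setMaster("yarn")
            .setDeployMode("cluster")
            // Pin the application to exactly one executor with the chosen resources.
            .setConf("spark.executor.instances", "1")
            .setConf("spark.executor.cores", "3")
            .setConf("spark.executor.memory", "4g")
            // Assumption: explicitly prevent dynamic allocation from adding executors.
            .setConf("spark.dynamicAllocation.enabled", "false")
            .startApplication();

        // Wait for the application to reach a final state.
        while (!handle.getState().isFinal()) {
            Thread.sleep(1000);
        }
    }
}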

2 REPLIES

Contributor

Hello @loridigia,

I tried running the sample job below, and I see only one executor container and one driver container.

# cd /usr/hdp/current/spark2-client
# su spark

$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --num-executors 1 --driver-memory 512m --executor-memory 512m --executor-cores 2 examples/jars/spark-examples*.jar 10000

 

Even spark-shell limits the executors to one when we pass --num-executors 1:

$ spark-shell --num-executors 1

 

- What is the spark-submit command you are trying to run?
- Are you seeing the same issue with the above sample job?

Super Collaborator

Hi @loridigia 

 

If dynamic allocation is not enabled for the cluster/application and you set --conf spark.executor.instances=1, then Spark will launch only 1 executor. Apart from that executor, you will also see the AM/driver in the Executors tab of the Spark UI.
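
As a quick way to confirm this from inside the job, here is a small sketch using the Spark status tracker (assuming the application code has access to its SparkContext); the driver itself is typically reported in this list too, which is why the Executors tab can look like it contains 2 entries even with a single executor:

import org.apache.spark.SparkConf;
import org.apache.spark.SparkExecutorInfo;
import org.apache.spark.api.java.JavaSparkContext;

public class ExecutorCountCheck {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf().setAppName("executor-count-check");
        try (JavaSparkContext jsc = new JavaSparkContext(conf)) {
            // The status tracker reports the driver as well as the executors,
            // so a job pinned to one executor will typically show two entries here.
            SparkExecutorInfo[] infos = jsc.sc().statusTracker().getExecutorInfos();
            for (SparkExecutorInfo info : infos) {
                System.out.println("host=" + info.host() + " port=" + info.port());
            }
            System.out.println("entries reported: " + infos.length);
        }
    }
}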