
Spark dynamic allocation not working

Hey there,

I'm currently trying to get Spark dynamic allocation to work, but so far without success. I configured the following properties as described in the documentation:

spark.dynamicAllocation.enabled = true
spark.dynamicAllocation.initialExecutors = 3
spark.dynamicAllocation.minExecutors = 3
spark.dynamicAllocation.maxExecutors = 30
yarn.nodemanager.aux-services.spark2_shuffle.classpath = {{stack_root}}/${hdp.version}/spark2/aux/*
yarn.nodemanager.aux-services = mapreduce_shuffle,spark2_shuffle,{{timeline_collector}}
yarn.nodemanager.aux-services.spark2_shuffle.class = org.apache.spark.network.yarn.YarnShuffleService

It doesn't matter whether I use Spark from a Zeppelin notebook, the spark-shell, or spark-submit: the job stays at 3 executors and never scales up, even when it runs for a very long time.
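For illustration, even a deliberately wide job like the following sketch (run from the spark-shell; the numbers are arbitrary) stays at 3 executors:

// A job with far more tasks than 3 executors can run at once, which
// should create a scheduler backlog and trigger new executor requests.
val data = sc.parallelize(1 to 100000000, 500)  // 500 partitions >> available cores
data.map(_ * 2).count()
// While this runs, the Executors tab of the Spark UI still shows only 3 executors.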

Is there a way to test whether dynamic allocation is actually active? Right now it seems to me that it is not.

Best regards,

Markus

1 REPLY

Re: Spark dynamic allocation not working

@Markus Wilhelm

Are you able to go beyond 3 executors if you specify a higher minimum? Just checking: do you have enough NodeManagers available to go up to 30? If you already have enough NodeManagers in your cluster, does increasing yarn.nodemanager.resource.memory-mb help?
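To your question about testing: one quick way is to read the effective settings back from the running session and watch the registered executor count while a backlogged job (like the sketch above) is running. A minimal spark-shell sketch, assuming Spark 2.x on YARN; note that dynamic allocation also requires the external shuffle service to be enabled on the Spark side:

// Confirm what the running application actually picked up:
spark.conf.get("spark.dynamicAllocation.enabled")  // should return "true"
spark.conf.get("spark.shuffle.service.enabled")    // must also be "true" for dynamic allocation on YARN

// While a backlogged job is running, check how many executors are registered:
sc.statusTracker.getExecutorInfos.length           // the count includes the driver

You can also watch the application's allocated container count on the YARN ResourceManager UI while the job runs.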