
My Spark job is utilising all vcores configured for the YARN queue: new jobs failing

New Contributor

My YARN queue has total vcores configured as 150 and memory of 850 GB. My Spark job is utilising all 150 available vcores, but only about one third of the total memory is currently used; however, when new jobs are started they fail with the message "unable to allocate YARN resources".

How can I reduce the vcore allocation for my Spark job?

1 REPLY

Cloudera Employee

@Vivekaushik 
Vcores can be controlled per job via the Spark parameters --executor-cores and --driver-cores. You can set them in your application code as in Ex-1 below, pass them on the spark-submit command line as in Ex-2, or set them in the Spark service safety valve for spark-defaults.conf (Ex-3).

Ex-1:  spark.conf.set("spark.executor.cores","4")  or  .config("spark.executor.cores","4")
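To put Ex-1 in context, here is a minimal sketch of applying these settings while building the session in application code (the app name and values are illustrative):

import org.apache.spark.sql.SparkSession

// Cap the vcores requested per executor and for the driver (example values)
val spark = SparkSession.builder()
  .appName("vcore-capped-job")
  .config("spark.executor.cores", "4")   // vcores per executor
  .config("spark.driver.cores", "2")     // vcores for the driver
  .getOrCreate()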

Ex-2: spark-submit --executor-cores 4 --driver-cores 2 --num-executors 5 --queue xyz

The above spark-submit creates a YARN application (app_id) that occupies a total of 5*4 + 2 = 22 vcores (5 executors with 4 cores each, plus 2 driver cores).
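For the third option, the same limits can be set as cluster-wide defaults in the Spark service safety valve for spark-defaults.conf; a sketch with the same illustrative values, which would apply to every job that does not override them:

Ex-3: spark-defaults.conf safety valve entries
spark.executor.cores=4
spark.driver.cores=2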


Note: 
1. Parameters set in code take first preference and override both those passed via spark-submit and the defaults from the Spark service safety valves.
2. Parameters passed via spark-submit take second preference and override the Spark safety valves.
3. If not defined anywhere else, the values from the Spark safety valves (spark-defaults.conf) are used as the defaults.
4. Check whether dynamic allocation is enabled for the job that is consuming all the vcores in the queue while using comparatively little memory, since that is what blocks resources for new jobs. Dynamic allocation depends on "--executor-cores": executors are added on demand, each taking the configured number of vcores, so with the settings below the job can still claim a maximum of 5*4 + 2 = 22 vcores at peak usage (see the combined example after this note).
"--conf spark.dynamicAllocation.enabled=true"
"--conf spark.dynamicAllocation.maxExecutors=5"

Hope this clarifies your query. If you found this response helpful, please take a moment to log in and click on KUDOS & "Accept as Solution" below this post.