03-14-2024 02:35 PM
@Vivekaushik Vcores can be controlled for each job via the Spark parameters --executor-cores and --driver-cores, either by hardcoding them in your custom code (Ex-1), by passing them to spark-submit (Ex-2), or by setting them in the Spark service safety valve for spark-defaults.conf.

Ex-1: spark.conf.set("spark.executor.cores","4") or .config("spark.executor.cores","4")

Ex-2: spark-submit --executor-cores 4 --driver-cores 2 --num-executors 5 --queue xyz

The job above creates a YARN application and occupies a total of 5*4 + 2 = 22 vcores (5 executors with 4 cores each, plus 2 driver cores).

Note:
1. Parameters set in code take first preference and override those passed to spark-submit as well as the defaults from the Spark service safety valves.
2. Parameters passed via spark-submit take second preference and override the Spark safety valves.
3. If not defined anywhere, the values from the Spark safety valves are used as the defaults.
4. Check whether dynamic allocation is enabled for the job; it can consume all vcores in a queue (vcores rather than memory becoming the limit) and block resources. Dynamic allocation depends on --executor-cores and allocates executors on demand, so with the settings below the job can still claim up to 22 vcores at peak usage:
--conf spark.dynamicAllocation.enabled=true
--conf spark.dynamicAllocation.maxExecutors=5

A minimal end-to-end sketch of setting these values in code is included below.

Hope this clarifies your query. If you found this response helpful, please take a moment to log in and click on KUDOS and "Accept as Solution" below this post.
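For completeness, here is a minimal PySpark sketch of Ex-1, i.e. requesting the core counts programmatically on the SparkSession builder before the session is created. The application name "vcore-demo", the queue "xyz", and the specific numbers are illustrative assumptions taken from the examples above, not fixed recommendations.

from pyspark.sql import SparkSession

# Request 5 executors with 4 cores each plus a 2-core driver
# (5*4 + 2 = 22 vcores), mirroring the spark-submit example (Ex-2).
spark = (
    SparkSession.builder
    .appName("vcore-demo")                            # hypothetical app name
    .config("spark.executor.cores", "4")              # vcores per executor
    .config("spark.driver.cores", "2")                # vcores for the driver
    .config("spark.executor.instances", "5")          # executor count when allocation is static
    .config("spark.yarn.queue", "xyz")                # YARN queue from Ex-2
    # Optional: with dynamic allocation, executors scale on demand but are
    # capped so the job cannot grow past 5 executors (22 vcores total).
    .config("spark.dynamicAllocation.enabled", "true")
    .config("spark.dynamicAllocation.maxExecutors", "5")
    .getOrCreate()
)

print(spark.conf.get("spark.executor.cores"))         # verify the value that was applied

Keep in mind that values hardcoded this way override whatever is passed to spark-submit, as described in note 1, and that depending on the Spark version dynamic allocation also requires the external shuffle service (spark.shuffle.service.enabled=true) or shuffle tracking to be enabled on YARN.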