My YARN queue is configured with 150 total vcores and 850 GB of memory. My Spark job is using all 150 vcores but only about one third of the total memory. However, when new jobs are submitted they fail with the message "unable to allocate yarn resources".
How can I reduce the vcore allocation for my Spark job?
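For context, total vcore usage is roughly the number of executors times the cores per executor (plus one for the driver/ApplicationMaster), so both knobs matter. A minimal sketch of a `spark-submit` invocation that caps vcore usage, assuming YARN mode (the queue name, file name, and sizes are illustrative, not from my actual job):

```shell
# Total vcores used ≈ num-executors * executor-cores (+1 for the AM/driver).
# Here: 20 * 3 = 60 vcores, leaving roughly 90 of the 150 free for other jobs.
spark-submit \
  --master yarn \
  --queue my_queue \
  --num-executors 20 \
  --executor-cores 3 \
  --executor-memory 8g \
  my_job.py
```

If dynamic allocation is enabled, `spark.dynamicAllocation.maxExecutors` caps executor count instead of `--num-executors`.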