Hi @mravipati,
can you please check whether Dynamic Resource Allocation is enabled:
spark.dynamicAllocation.enabled=true
With this enabled, Spark will request as many executors as it can, depending on resource availability in the cluster, and that may be what is causing the problem.
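If you are not sure whether it is on, you can check from a running session; a minimal sketch in spark-shell (using the standard spark.conf API, with "false" as the fallback when the property is unset):

// Check whether dynamic allocation is enabled; returns "false" if the property is unset
spark.conf.get("spark.dynamicAllocation.enabled", "false")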
On a related note, this behaviour can be controlled by setting
spark.dynamicAllocation.maxExecutors (by default there is no max limit, so it is worth setting an explicit cap).
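For example, here is a minimal Scala sketch of capping dynamic allocation at session creation. The cap of 10 executors is purely illustrative, not a recommendation, and spark.shuffle.service.enabled is included because dynamic allocation on YARN requires the external shuffle service:

import org.apache.spark.sql.SparkSession

// Sketch: enable dynamic allocation but cap it at 10 executors.
// The value 10 is illustrative only; size it to your cluster.
val spark = SparkSession.builder()
  .appName("CappedDynamicAllocation")
  .config("spark.dynamicAllocation.enabled", "true")
  .config("spark.shuffle.service.enabled", "true") // required for dynamic allocation on YARN
  .config("spark.dynamicAllocation.maxExecutors", "10")
  .getOrCreate()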
Please note that the driver is also allocated a container, so you need to manage the memory allocations for both the executors and the driver.
For instance, if your YARN minimum container size is 2 GB and you request about 2 GB per executor, YARN will actually allocate 4 GB per executor: spark.yarn.executor.memoryOverhead is added on top of the requested 2 GB, and YARN then rounds the container up to the next multiple of the minimum container size.
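As a worked example, assuming the usual default overhead of max(384 MB, 10% of executor memory):

  2048 MB  (spark.executor.memory)
+  384 MB  (overhead = max(384 MB, 0.10 * 2048 MB = 204.8 MB))
= 2432 MB  requested per container
rounded up to the next multiple of the 2048 MB minimum container size = 4096 MB (4 GB)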
The following KB article explains in more detail why Spark ends up taking more resources than you requested.