Hello @loridigia
I don't think there is a direct way to achieve this, but there is a workaround.
We can start the Spark job with Dynamic Allocation enabled, set the minimum executors to "0", the initial executors to "1", and the executor idle timeout to "5s".
With these settings, the job starts with one executor; once that executor has been idle for more than 5 seconds, its container is released.
At that point the Spark application is left with only the Driver / ApplicationMaster container running.
CONFIGS:
--conf spark.dynamicAllocation.enabled=true
--conf spark.shuffle.service.enabled=true
--conf spark.dynamicAllocation.executorIdleTimeout=5s
--conf spark.dynamicAllocation.initialExecutors=1
--conf spark.dynamicAllocation.maxExecutors=1
--conf spark.dynamicAllocation.minExecutors=0
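For reference, a complete spark-submit invocation with these settings might look like the sketch below. The application class and jar names are placeholders; substitute your own, and adjust --master / --deploy-mode for your cluster.
# Example only: com.example.MyApp and my-app.jar are hypothetical
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.shuffle.service.enabled=true \
  --conf spark.dynamicAllocation.executorIdleTimeout=5s \
  --conf spark.dynamicAllocation.initialExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=1 \
  --conf spark.dynamicAllocation.minExecutors=0 \
  --class com.example.MyApp \
  my-app.jar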
NOTE:
We can add these configs to spark-defaults.conf so that they apply to all jobs submitted after the change (it will not affect jobs that are already running).
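In spark-defaults.conf the same settings are written as plain key/value pairs (no --conf prefix), for example:
spark.dynamicAllocation.enabled              true
spark.shuffle.service.enabled                true
spark.dynamicAllocation.executorIdleTimeout  5s
spark.dynamicAllocation.initialExecutors     1
spark.dynamicAllocation.maxExecutors         1
spark.dynamicAllocation.minExecutors         0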
Please be careful: making these cluster-wide defaults can affect your other Spark jobs, so review the configurations of your existing workloads first.
If this resolves your issue, please mark the answer as the accepted solution!