Hi everybody, I'm submitting jobs to a YARN cluster via SparkLauncher.
I'm on HDP 3.1.4.0.
I'd like to have exactly one executor for each job I run (I often find two executors per job), with the resources I choose (assuming, of course, that those resources are available on a machine).
So I tried adding:
.setConf("spark.executor.instances", "1")
.setConf("spark.executor.cores", "3")
But even with spark.executor.instances set to 1, I get 2 executors. Do you know why? (I read somewhere that the number of executors = spark.executor.instances * spark.executor.cores. I don't know whether that's true, but it matches what I'm seeing.)
Is there a way to guarantee exactly one executor per job, i.e. 1 as both the minimum and the maximum?
Could this be achieved with dynamicAllocation? I'd prefer not to enable it, since it's not designed for this and does a lot of things I don't need.
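In case a cluster-wide dynamic allocation default is the culprit, this is the kind of thing I'd try. It's just a sketch: pinToOneExecutor is a hypothetical helper name, and I'm guessing that with spark.dynamicAllocation.enabled set to false, spark.executor.instances would act as a fixed count:

import org.apache.spark.launcher.SparkLauncher;

public final class LauncherConfs {
    // Hypothetical helper: explicitly disable dynamic allocation so that
    // spark.executor.instances is (I assume) honored as a fixed executor count.
    public static SparkLauncher pinToOneExecutor(SparkLauncher launcher) {
        return launcher
                .setConf("spark.dynamicAllocation.enabled", "false")
                .setConf("spark.executor.instances", "1");
    }
}

Thanks in advance!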