It doesn't matter whether I run Spark from a Zeppelin notebook, from spark-shell, or via spark-submit: the job stays at 3 executors and never scales up, even when it takes extremely long.
Is there a way I can test whether dynamic allocation is actually activated? Right now it seems to me that it is not.
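One quick way to check, assuming a standard Spark-on-YARN setup, is to read the configuration back from the running session. A minimal sketch using the standard Spark property names (the second argument to get is just the fallback when the key is unset):

```scala
// Run in spark-shell or a Zeppelin %spark paragraph.
sc.getConf.get("spark.dynamicAllocation.enabled", "false")  // must be "true"
sc.getConf.get("spark.shuffle.service.enabled", "false")    // required on YARN
sc.getConf.get("spark.dynamicAllocation.minExecutors", "0")
sc.getConf.get("spark.dynamicAllocation.maxExecutors", "unset")

// Rough view of how many executors are currently registered
// (the returned map includes the driver, so subtract one).
sc.getExecutorMemoryStatus.size - 1
```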
So you are able to go beyond 3 executors if you specify a higher minimum? Just checking: do you have enough NodeManagers available to go up to 30? If you already have enough NodeManagers in your cluster, does increasing yarn.nodemanager.resource.memory-mb help?
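If the settings themselves turn out to be the issue, here is a minimal sketch of an explicitly configured session; the values are illustrative only, and the same keys can equally go into spark-defaults.conf or be passed as --conf flags to spark-submit:

```scala
import org.apache.spark.sql.SparkSession

// Illustrative values; tune to your cluster. Note that the external
// shuffle service must also be running on each NodeManager (configured
// via the yarn-site.xml aux-services) for dynamic allocation on YARN.
val spark = SparkSession.builder()
  .appName("dynamic-allocation-test")
  .config("spark.dynamicAllocation.enabled", "true")
  .config("spark.shuffle.service.enabled", "true")
  .config("spark.dynamicAllocation.minExecutors", "3")
  .config("spark.dynamicAllocation.maxExecutors", "30")
  .getOrCreate()
```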