And it doesn't work: it keeps saying that the maximum is 3072 MB. I tried on another cluster, and after changing yarn.nodemanager.resource.memory-mb and yarn.scheduler.maximum-allocation-mb to a value lower than spark.driver.memory I got the same error as above. So at this point I suspect that YARN (on my cluster) is not picking up the new parameter values. I updated them from Ambari and restarted YARN several times, but nothing changed.
Ideally, increasing yarn.scheduler.maximum-allocation-mb should solve this. But from your comments I understand that the changes are not being reflected in the YARN service. To confirm this, you can check the value that the running service actually has in effect.
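One way to verify the value the running ResourceManager is actually using (a sketch; <resourcemanager-host> is a placeholder, and 8088 is only the default RM web UI port) is to query the daemon's /conf servlet, which dumps the live configuration as XML:

```shell
# Query the ResourceManager's configuration servlet and filter for the
# per-container memory cap; the value shown is what the scheduler enforces,
# regardless of what the files on disk say.
curl -s http://<resourcemanager-host>:8088/conf | grep -A1 'yarn.scheduler.maximum-allocation-mb'
```

If this still prints 3072 after the Ambari change and restart, the new value never reached the ResourceManager.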
It's reported as "INFO", but then it doesn't submit the app on YARN; it remains stuck.
INFO: 22/01/27 12:38:44 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (3072 MB per container),
INFO: Exception in thread "main" java.lang.IllegalArgumentException: Required AM memory (5214+521 MB) is above the max threshold (3072 MB) of this cluster! Please increase the value of 'yarn.scheduler.maximum-allocation-mb'.
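Until the configuration change actually takes effect, a possible workaround (a sketch; my-app.jar is a placeholder for the real application jar) is to request a driver/AM memory small enough that it fits under the 3072 MB cap, since the threshold check adds an overhead of max(10% of driver memory, 384 MB) on top of spark.driver.memory:

```shell
# Sketch: 2048 MB driver memory + 384 MB minimum overhead = 2432 MB,
# which is below the cluster's 3072 MB per-container cap.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.driver.memory=2g \
  my-app.jar
```

This only sidesteps the check; if the job genuinely needs the ~5 GB originally requested, the yarn.scheduler.maximum-allocation-mb change still has to land.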