Created 01-26-2022 04:32 AM
Hello everyone, I have a problem. I set these values from Ambari:
yarn.nodemanager.resource.memory-mb = 7820
yarn.scheduler.minimum-allocation-mb = 1024
yarn.scheduler.maximum-allocation-mb = 7820
I restarted YARN but the same error keeps coming. Why can't I have an AM memory larger than 3072 MB? Where does this 3072 MB come from? Should I edit some other setting? I can't find any.
Thanks
Created 01-26-2022 09:37 PM
AM memory comes from the property yarn.app.mapreduce.am.resource.mb.
You can set the AM memory by tuning the value of that property.
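If the job really is MapReduce, this is the corresponding mapred-site.xml entry (a sketch; the 2048 MB value is just an illustrative size, pick what your jobs need):

```xml
<!-- mapred-site.xml: size of the MapReduce ApplicationMaster container -->
<property>
  <name>yarn.app.mapreduce.am.resource.mb</name>
  <value>2048</value>
</property>
```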
Make sure to mark the answer as the accepted solution if it resolves your issue!
Created 01-27-2022 01:18 AM
I'm not actually using MapReduce; I'm using Spark, so I'm submitting via spark-submit.
In fact, yarn.app.mapreduce.am.resource.mb is at 1 GB, but the error says 3072.
Is it possible that modifying the YARN values from Ambari doesn't actually update them, even after a restart?
Created 01-27-2022 01:59 AM
Can you try passing this Spark config on your spark-shell / spark-submit?
spark.yarn.am.memory=1g
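For example (hypothetical jar and class names; this only applies in client mode, since in cluster mode the driver runs inside the AM and --driver-memory sizes it instead):

```shell
# Client mode: the AM is a separate, small container; size it with spark.yarn.am.memory.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --conf spark.yarn.am.memory=1g \
  --class com.example.MyApp \
  myapp.jar
```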
Make sure to mark the answer as the accepted solution if it resolves your issue!
Created 01-27-2022 02:10 AM
Hi, since I'm in cluster mode I set
spark.driver.memory = 6G
and it doesn't work; it keeps saying that the maximum is 3072 MB.
I tried on another cluster: after changing yarn.nodemanager.resource.memory-mb and yarn.scheduler.maximum-allocation-mb to a value lower than spark.driver.memory, I got the same error as above. So at this point I guess that YARN (on my cluster) isn't picking up the updated parameter values.
I updated them from Ambari and restarted YARN many times, but nothing changed.
Created 01-27-2022 02:22 AM
Ideally, increasing yarn.scheduler.maximum-allocation-mb should solve this. But from your comments I understand that the changes are not being reflected in the YARN service. To confirm, you can check via:
http://active_rm_hostname:8088/conf
Under that URL, search for yarn.scheduler.maximum-allocation-mb and check the value.
Make sure the client configs are deployed via Ambari, and check the status of the YARN service in Ambari.
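The same check can be done non-interactively from a shell; a sketch, assuming the ResourceManager web UI is on the default port 8088 (replace active_rm_hostname with your actual host):

```shell
# Query the running ResourceManager's live configuration and filter for the memory limits.
# The /conf endpoint returns the XML configuration the RM actually loaded at startup.
curl -s "http://active_rm_hostname:8088/conf" \
  | grep -A1 -E "yarn.scheduler.maximum-allocation-mb|yarn.nodemanager.resource.memory-mb"
```

If the values shown here differ from what Ambari displays, the restart did not pick up the new configs.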
Created 01-27-2022 02:35 AM
Hi, thanks for the tip. I checked, but everything seems OK:
Created 01-27-2022 03:57 AM
Yes, the config changes are being reflected in the services as expected. Can you post the complete error?
Created on 01-27-2022 04:41 AM - edited 01-27-2022 04:41 AM
It's reported as "INFO", but then it doesn't submit the app to YARN; it remains stuck.
INFO: 22/01/27 12:38:44 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (3072 MB per container),
INFO: Exception in thread "main" java.lang.IllegalArgumentException: Required AM memory (5214+521 MB) is above the max threshold (3072 MB) of this cluster! Please increase the value of 'yarn.scheduler.maximum-allocation-mb'.
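The figures in that message can be reproduced: Spark asks YARN for the driver/AM memory plus an overhead of max(384 MB, 10% of the requested memory), and refuses to submit when the sum exceeds yarn.scheduler.maximum-allocation-mb. A sketch of that check, plugging in the 5214 MB and 3072 MB values from the error above:

```shell
# Reproduce Spark's AM-memory sanity check using the numbers from the error message.
driver_mb=5214        # requested AM/driver memory (from the error)
max_alloc_mb=3072     # yarn.scheduler.maximum-allocation-mb (from the error)

# Overhead is max(384 MB, 10% of the requested memory); integer division matches 521.
overhead=$(( driver_mb / 10 ))
if [ "$overhead" -lt 384 ]; then overhead=384; fi

total=$(( driver_mb + overhead ))
echo "Required AM memory: ${driver_mb}+${overhead} MB (total ${total} MB)"
if [ "$total" -gt "$max_alloc_mb" ]; then
  echo "above the max threshold (${max_alloc_mb} MB)"
fi
```

Note that 5214+521 MB matches the error exactly, which confirms the limit being hit really is the running RM's yarn.scheduler.maximum-allocation-mb of 3072 MB, not something Spark-side.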
Created 01-27-2022 05:05 AM
I tried setting yarn.scheduler.capacity.maximum-am-resource-percent to 1, but it didn't change a thing.
I added it from Ambari under custom yarn-site.