
Yarn: Required AM memory(5273+527) is above the max threshold (3072 MB)


Hello everyone, I have a problem. I set these values from Ambari:

yarn.nodemanager.resource.memory-mb = 7820

yarn.scheduler.minimum-allocation-mb = 1024
yarn.scheduler.maximum-allocation-mb = 7820

I restarted YARN, but the same error keeps coming up. Why can't I have an AM memory bigger than 3072 MB? Where does this 3072 MB come from? Should I edit some other setting? I can't find any.
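For context, a rough sketch of the check that produces this error (hedged; the constants are Spark's documented defaults, not taken from this cluster): Spark's YARN client asks the ResourceManager for the maximum container size it will grant, which is derived from yarn.scheduler.maximum-allocation-mb, and then verifies that the requested AM memory plus an overhead of max(384 MB, 10% of the AM memory) fits under that cap. The 3072 MB is whatever maximum the ResourceManager reported.

```python
# Sketch of the validation Spark's YARN client performs before submitting.
# MEMORY_OVERHEAD_* are Spark's default overhead constants; the 3072 cap below
# mirrors the error in this thread (it comes from the ResourceManager's
# reported maximum, i.e. yarn.scheduler.maximum-allocation-mb).

MEMORY_OVERHEAD_MIN_MB = 384    # Spark's minimum AM/executor overhead
MEMORY_OVERHEAD_FACTOR = 0.10   # Spark's default overhead factor

def verify_am_memory(am_memory_mb, max_allocation_mb):
    """Mimic the client-side check: AM memory + overhead must fit in a container."""
    overhead = max(MEMORY_OVERHEAD_MIN_MB, int(am_memory_mb * MEMORY_OVERHEAD_FACTOR))
    required = am_memory_mb + overhead
    if required > max_allocation_mb:
        raise ValueError(
            f"Required AM memory ({am_memory_mb}+{overhead} MB) is above "
            f"the max threshold ({max_allocation_mb} MB) of this cluster!")
    return required

# 5273 MB plus its ~10% overhead (527 MB) exceeds a 3072 MB cap,
# reproducing the "(5273+527)" shape of the error in the title:
# verify_am_memory(5273, 3072)  -> raises ValueError
```

Note that 527 is exactly int(5273 * 0.10), which is why the two numbers in the error message always move together.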



Cloudera Employee

The AM memory comes from this property.

You can set the AM memory by tuning the value of the above property.


Make sure to mark the answer as the accepted solution if it resolves your issue!


I'm actually not using MapReduce; I'm using Spark, so I'm submitting via spark-submit.
In fact, MapReduce is at 1 GB, but the error says 3072.

Is it possible that modifying the YARN values from Ambari doesn't actually update those values, even after a reboot?

Cloudera Employee

Can you try passing this Spark config on your spark-shell / spark-submit?

Make sure to mark the answer as the accepted solution if it resolves your issue!


Hi, since I'm in cluster mode I set

spark.driver.memory = 6G

and it doesn't work: it keeps saying that the maximum is 3072 MB.
I tried on another cluster, and when I change yarn.nodemanager.resource.memory-mb and yarn.scheduler.maximum-allocation-mb to a value lower than spark.driver.memory, I get the same error as above. So at this point I guess that YARN (on my cluster) doesn't update the parameter values.
I updated them from Ambari and restarted YARN many times, but nothing changed.

Cloudera Employee

Ideally, in this case, increasing yarn.scheduler.maximum-allocation-mb should solve it. But from your comments I understand that the changes are not being reflected in the YARN service. To confirm, you can check via this


Under this URL, search for yarn.scheduler.maximum-allocation-mb and check the value.
Make sure the client configs are deployed via Ambari, and check the status of the YARN service in Ambari.
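As a hedged aside on what to look for: a running ResourceManager exposes its effective configuration as a Hadoop-style XML dump (typically at http://&lt;rm-host&gt;:8088/conf), and reading the value there shows what YARN is actually using, regardless of what the files on disk say. The sketch below parses a trimmed sample of that XML format; the host, port, and sample values are illustrative assumptions.

```python
# Hedged sketch: extract a property's effective value from a Hadoop-style
# configuration dump (the format served by the ResourceManager's /conf page).
# SAMPLE_CONF is a trimmed, made-up sample of that XML, not real cluster output.

import xml.etree.ElementTree as ET

SAMPLE_CONF = """<configuration>
  <property>
    <name>yarn.scheduler.minimum-allocation-mb</name>
    <value>1024</value>
  </property>
  <property>
    <name>yarn.scheduler.maximum-allocation-mb</name>
    <value>15360</value>
  </property>
</configuration>"""

def effective_value(conf_xml, key):
    """Return the value of `key` from a Hadoop-style configuration dump, or None."""
    root = ET.fromstring(conf_xml)
    for prop in root.findall("property"):
        if prop.findtext("name") == key:
            return prop.findtext("value")
    return None

print(effective_value(SAMPLE_CONF, "yarn.scheduler.maximum-allocation-mb"))
```

In practice you would fetch the XML from the ResourceManager's web UI instead of using a sample string.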


Hi, thanks for the tip. I checked, but everything seems OK:

I updated it a few minutes ago to 15360 MB, and in fact it's there.
But I still keep getting that error.
YARN is all good: no errors, no warnings, nothing.

Cloudera Employee

Yes, the config changes are getting reflected in the services as expected. Can you post the complete error?


It's reported as "INFO", but then it doesn't submit the app to YARN; it remains stuck.

INFO: 22/01/27 12:38:44 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (3072 MB per container),

INFO: Exception in thread "main" java.lang.IllegalArgumentException: Required AM memory (5214+521 MB) is above the max threshold (3072 MB) of this cluster! Please increase the value of 'yarn.scheduler.maximum-allocation-mb'.
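The two numbers in that message decompose cleanly (hedged against Spark's documented defaults): 5214 MB is the requested AM memory, and 521 MB is Spark's default overhead of max(384 MB, 10% of the AM memory); the 3072 MB threshold is the maximum container size the ResourceManager reported to the client.

```python
# Decompose the "(5214+521 MB)" figures from the pasted error.
# Overhead uses Spark's default formula: max(384 MB, 10% of AM memory).

am_memory_mb = 5214
overhead_mb = max(384, int(am_memory_mb * 0.10))

print(overhead_mb)                  # the "+521" part of the error
print(am_memory_mb + overhead_mb)   # total requested, compared to the 3072 MB cap
```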


I tried setting yarn.scheduler.capacity.maximum-am-resource-percent to 1, but it didn't change a thing.
I added it from Ambari under custom yarn-site.

Cloudera Employee

INFO: Exception in thread "main" java.lang.IllegalArgumentException: Required AM memory

The above error is for the AM and not for the executors, so you need to set the AM memory accordingly.
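To spell out which property that is (hedged, per the Spark-on-YARN documentation rather than anything confirmed in this thread): in cluster mode the driver runs inside the AM container, so spark.driver.memory sizes the AM; in client mode the AM is a separate, small container sized by spark.yarn.am.memory (default 512m). A minimal sketch of that mapping:

```python
# Hedged sketch of which Spark property sizes the YARN ApplicationMaster,
# based on Spark's running-on-YARN documentation.

def am_memory_property(deploy_mode):
    """Return the Spark property that controls AM container memory."""
    if deploy_mode == "cluster":
        return "spark.driver.memory"   # driver runs inside the AM container
    if deploy_mode == "client":
        return "spark.yarn.am.memory"  # standalone AM, defaults to 512m
    raise ValueError(f"unknown deploy mode: {deploy_mode!r}")

print(am_memory_property("cluster"))
```

So in cluster mode, spark.driver.memory (plus its overhead) is what must fit under yarn.scheduler.maximum-allocation-mb.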