Yarn: Required AM memory (5273+527) is above the max threshold (3072 MB)

Contributor

Hello everyone, I have a problem. I set these values from Ambari:

yarn.nodemanager.resource.memory-mb = 7820

yarn.scheduler.minimum-allocation-mb = 1024
yarn.scheduler.maximum-allocation-mb = 7820

I restarted YARN, but the same error keeps coming. Why can't I have AM memory larger than 3072 MB? Where does this 3072 MB come from? Should I edit some other setting? I can't find any.

Thanks

10 REPLIES

Contributor

AM memory comes from the property yarn.app.mapreduce.am.resource.mb.

You can set the AM memory by tuning the value of that property.

If it resolves your issue, please mark the answer as the accepted solution.

Contributor

I'm not actually using MapReduce; I'm using Spark, so I'm submitting via spark-submit.
In fact, yarn.app.mapreduce.am.resource.mb is set to 1 GB, but the error says 3072.

Is it possible that modifying the YARN values from Ambari doesn't actually update them, even after a restart?

Contributor

Can you try passing this Spark config on your spark-shell / spark-submit?

spark.yarn.am.memory=1g
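For reference, a minimal sketch of how that could be passed on the command line (the class and jar names are placeholders, not from this thread); note that spark.yarn.am.memory only applies in client mode, where the AM does not host the driver:

# Client-mode sketch; class and jar names are placeholders.
spark-submit \
  --master yarn \
  --deploy-mode client \
  --conf spark.yarn.am.memory=1g \
  --class com.example.MyApp \
  my-app.jar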

If it resolves your issue, please mark the answer as the accepted solution.

Contributor

Hi, since I'm in cluster mode I set

spark.driver.memory = 6G

and it doesn't work; it keeps saying that the maximum is 3072 MB.
I tried on another cluster, and when I changed yarn.nodemanager.resource.memory-mb and yarn.scheduler.maximum-allocation-mb to a value lower than spark.driver.memory, I got the same error as above. So at this point I guess that YARN (on my cluster) isn't picking up the new parameter values.
I updated them from Ambari and tried restarting YARN many times, but nothing changed.
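In cluster mode the driver runs inside the YARN ApplicationMaster, so the requested AM container is spark.driver.memory plus spark.driver.memoryOverhead (by default 10% of the driver memory, with a 384 MB minimum); that overhead is the second number in the "Required AM memory (X+Y MB)" error. A sketch of the equivalent submit, again with placeholder class and jar names:

# Cluster-mode sketch: YARN rejects the AM container if
# driver memory + overhead exceeds yarn.scheduler.maximum-allocation-mb.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.driver.memory=6g \
  --class com.example.MyApp \
  my-app.jar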

Contributor

Ideally, increasing yarn.scheduler.maximum-allocation-mb should solve this. But from your comments I understand that the changes are not being reflected in the YARN service. To confirm, you can check via:

http://active_rm_hostname:8088/conf

Under this URL, search for yarn.scheduler.maximum-allocation-mb and check the value.
Make sure the client configs are deployed via Ambari, and check the status of the YARN service in Ambari.
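The same check from a shell (hostname is a placeholder; /conf returns the configuration the ResourceManager actually loaded):

# Dump the running ResourceManager's configuration and look for the max allocation
# (output layout may vary; append ?format=json for JSON output).
curl -s http://active_rm_hostname:8088/conf | grep -A2 'yarn.scheduler.maximum-allocation-mb'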

Contributor

Hi, thanks for the tip. I checked, but everything seems OK:

<property>
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>15360</value>
</property>

I raised it to 15360 MB a few minutes ago, and in fact it's there.
But I still keep getting that error.
YARN looks all good: no errors, no warnings, nothing.

Contributor

Yes, the config changes are being reflected on the service as expected. Can you post the complete error?

Contributor

It's reported as "INFO", but then it doesn't submit the app to YARN; it just stays stuck.

INFO: 22/01/27 12:38:44 INFO Client: Verifying our application has not requested more than the maximum memory capability of the cluster (3072 MB per container),

INFO: Exception in thread "main" java.lang.IllegalArgumentException: Required AM memory (5214+521 MB) is above the max threshold (3072 MB) of this cluster! Please increase the value of 'yarn.scheduler.maximum-allocation-mb'.

Contributor

I tried setting yarn.scheduler.capacity.maximum-am-resource-percent to 1, but it didn't change anything.
I added it from Ambari under custom yarn-site.
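As a side note (the path below assumes a default HDP layout and may differ): yarn.scheduler.capacity.maximum-am-resource-percent is a Capacity Scheduler property, normally read from capacity-scheduler.xml rather than yarn-site, so one way to see whether the change actually landed on the ResourceManager host is:

# Check whether the property is present in capacity-scheduler.xml
# (default HDP config path assumed; adjust if your layout differs).
grep -B1 -A2 'maximum-am-resource-percent' /etc/hadoop/conf/capacity-scheduler.xml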