

Expert Contributor

Hey @GTA,


Thanks for reaching out to the Cloudera community.

"Required executor memory (1024), overhead (384 MB), and PySpark memory (0 MB) is above the max threshold (1024 MB) of this cluster!"


>> This issue occurs when the total memory required to run a Spark executor in a container (the executor memory, spark.executor.memory, plus the executor memory overhead, spark.yarn.executor.memoryOverhead) exceeds the memory available for running containers on the NodeManager node (yarn.nodemanager.resource.memory-mb).


Based on the above exception, you have the default 1 GB configured for the Spark executor and the default overhead of 384 MB, so the total memory required to run the container is 1024 MB + 384 MB = 1408 MB.
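The check YARN performs can be sketched in a few lines of plain Python (the 0.10 overhead factor and 384 MB floor are Spark's documented defaults for spark.yarn.executor.memoryOverhead; all values below come from the exception message):

```python
# Sketch of the memory check behind the exception above.
executor_memory_mb = 1024                                # spark.executor.memory default (1 GB)
overhead_mb = max(384, int(0.10 * executor_memory_mb))   # default overhead: max(384 MB, 10%)
pyspark_memory_mb = 0                                    # spark.executor.pyspark.memory (unset)

required_mb = executor_memory_mb + overhead_mb + pyspark_memory_mb
print(required_mb)  # 1408, which exceeds the 1024 MB threshold in the error
```

Since 1408 MB is above the cluster's 1024 MB threshold, YARN refuses to allocate the container before the job even starts.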


As the NodeManager was configured with too little memory to run even a single container (only 1024 MB), this exception is expected.


Increasing the NodeManager memory from 1251 MB to 2048 MB will allow a single container to run on the NodeManager node. Use the steps below to increase the "yarn.nodemanager.resource.memory-mb" parameter and resolve this:


Cloudera Manager >> YARN >> Configurations >> Search "yarn.nodemanager.resource.memory-mb" >> Configure 2048 MB or higher >> Save & Restart.
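To sanity-check the numbers, here is a small sketch of why raising the NodeManager limit works, plus a hypothetical alternative (not part of the steps above) of shrinking the executor instead; the 512 MB figure is illustrative only:

```python
# Option 1 (the fix above): raise yarn.nodemanager.resource.memory-mb to 2048 MB.
nm_memory_mb = 2048
assert 1024 + 384 <= nm_memory_mb        # the 1408 MB container now fits

# Option 2 (hypothetical alternative): shrink spark.executor.memory, e.g. to
# 512 MB, so executor + overhead fits under the original 1024 MB cap.
small_executor_mb = 512
small_overhead_mb = max(384, int(0.10 * small_executor_mb))  # floor of 384 MB applies
assert small_executor_mb + small_overhead_mb <= 1024         # 896 MB fits
```

Raising the NodeManager limit is usually preferable here, since 1 GB executors are already Spark's minimum-ish default and shrinking them further can cause OOM failures in the job itself.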


Let me know if this helps.
