
Resource Manager - java.lang.OutOfMemoryError: unable to create new native thread



I'm running HDP 3.1.0 and the YARN ResourceManager does not start; it always crashes with the error below, no matter the amount of heap memory specified:


2020-09-28 15:12:57,738 INFO ipc.Server ( - Starting Socket Reader #1 for port 8030
2020-09-28 15:12:57,747 INFO pb.RpcServerFactoryPBImpl ( - Adding protocol org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB to the server
2020-09-28 15:12:57,748 FATAL resourcemanager.ResourceManager ( - Error starting ResourceManager
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(
at org.apache.hadoop.ipc.Server.start(
at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.serviceStart(
at org.apache.hadoop.service.AbstractService.start(
at org.apache.hadoop.service.CompositeService.serviceStart(
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(
at org.apache.hadoop.service.AbstractService.start(


Any tip would be appreciated.
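[Editor's note] Despite the wording, the JVM throws "unable to create new native thread" when the operating system refuses to create another thread, so OS-level limits are worth checking alongside heap settings. A first-pass diagnostic sketch, assuming a Linux host (run as the user that launches the ResourceManager, typically "yarn" on HDP):

```shell
# Per-user limit on processes/threads for the current user
ulimit -u

# System-wide ceilings that also cap thread creation
cat /proc/sys/kernel/threads-max
cat /proc/sys/kernel/pid_max

# Total threads currently alive on the host (one line per thread with -L)
ps -eLf --no-headers | wc -l
```

If the host-wide thread count is anywhere near the per-user or kernel ceilings, raising heap will never help; the limit has to be raised instead.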




@gnavarro54 This error suggests that your YARN node is out of memory. You need to inspect which services are running on the node that may be leaving it without enough memory to start YARN. If there are too many services for your node's spec (amount of RAM), then not all of them will be able to start. If possible, add more RAM, or consider moving some services from that node to another node.


If you respond back with more details about your environment, myself or other community members can respond with deeper details.



If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic, please comment here or feel free to private message me. If you have new questions related to your use case, please create a separate topic and feel free to tag me in your post.





Hi Steven.

Thanks for the quick response.

I'm running this HDP cluster on SUSE 12 SP2. This node has 32 GB of RAM and is using just 4 GB; free RAM is 27 GB.

Yarn Configuration is like this:

ResourceManager Java heap size = 2048

NodeManager Java heap size = 1024

AppTimelineServer Java heap size = 8072


ulimit used by RM process is:

core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 128615
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32768
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
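[Editor's note] Those limits look generous on paper (65536 max user processes, 27 GB free), so the next thing worth checking is how many threads the YARN user is actually holding when the ResourceManager tries to start; a systemd TasksMax on the service can also clamp thread creation below the ulimit. A sketch, assuming a Linux host with procps (substitute the real ResourceManager user for the current one):

```shell
# Sum the NLWP (number of lightweight processes, i.e. threads) column
# across every process owned by the given user. Compare the result
# against the "max user processes" ulimit shown above.
ps --no-headers -o nlwp -u "$(whoami)" | awk '{s += $1} END {print s + 0}'
```

If the sum is already close to the `-u` limit, some process on the node is leaking threads, and that is what starves the ResourceManager at startup regardless of its heap size.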


From RM log file:

2020-09-29 17:15:00,825 INFO scheduler.AbstractYarnScheduler ( - Minimum allocation = <memory:1024, vCores:1>
2020-09-29 17:15:00,825 INFO scheduler.AbstractYarnScheduler ( - Maximum allocation = <memory:24576, vCores:3


No matter how much memory is assigned to the RM, it always fails with this Java OOM.

What would be a recommended Java memory configuration for the YARN components?



