Resource Manager - java.lang.OutOfMemoryError: unable to create new native thread

Explorer

Hello.

I'm running HDP 3.1.0 and the YARN ResourceManager does not start; it always crashes with the error below, no matter how much heap memory is specified:

 

2020-09-28 15:12:57,738 INFO ipc.Server (Server.java:run(1074)) - Starting Socket Reader #1 for port 8030
2020-09-28 15:12:57,747 INFO pb.RpcServerFactoryPBImpl (RpcServerFactoryPBImpl.java:createServer(173)) - Adding protocol org.apache.hadoop.yarn.api.ApplicationMasterProtocolPB to the server
2020-09-28 15:12:57,748 FATAL resourcemanager.ResourceManager (ResourceManager.java:main(1516)) - Error starting ResourceManager
java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at org.apache.hadoop.ipc.Server.start(Server.java:3070)
at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.serviceStart(ApplicationMasterService.java:210)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:121)
at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager$RMActiveServices.serviceStart(ResourceManager.java:869)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
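
From what I've read, this error is thrown by Thread.start() when the JVM cannot get a new native thread from the OS, which would explain why the heap size makes no difference. A minimal standalone sketch (my own illustration, not YARN code) that ends with the same class of error:

// Keep starting idle daemon threads until the OS refuses to supply one.
// The resulting error matches the one in the RM log, regardless of -Xmx.
public class NativeThreadLimit {
    public static void main(String[] args) {
        int count = 0;
        try {
            while (true) {
                Thread t = new Thread(() -> {
                    try {
                        Thread.sleep(Long.MAX_VALUE); // park forever
                    } catch (InterruptedException ignored) {
                    }
                });
                t.setDaemon(true);
                t.start();
                count++;
            }
        } catch (OutOfMemoryError e) {
            // Typically "unable to create new native thread"
            System.err.println("Failed after " + count + " threads: " + e.getMessage());
        }
    }
}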

 

Any tip would be appreciated.

 

Thanks

2 REPLIES

Super Guru

@gnavarro54 This error suggests that your YARN node is out of memory. You need to inspect which services running on your node may be leaving too little memory for YARN to start. If there are too many services for your node's spec (amount of RAM), then not all of the services may be able to start. If possible, add more RAM, or consider moving some services from that node to another node.
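
If it helps, here is a rough Linux-only sketch (the class name and approach are just illustrative) that prints the node's memory totals and the system-wide thread count from /proc, which can show whether other services are crowding the node:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.concurrent.atomic.AtomicLong;
import java.util.stream.Stream;

// Summarize node memory and the total number of threads across all
// processes by reading /proc (Linux only).
public class NodeCheck {
    public static void main(String[] args) throws IOException {
        for (String line : Files.readAllLines(Paths.get("/proc/meminfo"))) {
            if (line.startsWith("MemTotal") || line.startsWith("MemAvailable")) {
                System.out.println(line);
            }
        }
        AtomicLong threads = new AtomicLong();
        try (Stream<Path> procs = Files.list(Paths.get("/proc"))) {
            procs.filter(p -> p.getFileName().toString().matches("\\d+"))
                 .forEach(p -> {
                     try {
                         for (String line : Files.readAllLines(p.resolve("status"))) {
                             if (line.startsWith("Threads:")) {
                                 threads.addAndGet(Long.parseLong(line.substring(8).trim()));
                                 break;
                             }
                         }
                     } catch (IOException | NumberFormatException ignored) {
                         // the process may have exited or be unreadable
                     }
                 });
        }
        System.out.println("Total threads on node: " + threads.get());
    }
}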

 

If you reply with more details about your environment, I or other community members can respond with deeper detail.

 

 

If this answer resolves your issue or allows you to move forward, please choose to ACCEPT this solution and close this topic. If you have further dialogue on this topic, please comment here or feel free to private message me. If you have new questions related to your use case, please create a separate topic and feel free to tag me in your post.

 

Thanks,


Steven

Explorer

Hi Steven,

Thanks for the quick response.

I'm running this HDP cluster on SUSE 12 SP2. This node has 32 GB of RAM and is using just 4 GB; free RAM is 27 GB.

The YARN configuration is as follows (values in MB):

ResourceManager Java heap size = 2048

NodeManager Java heap size = 1024

AppTimelineServer Java heap size = 8072
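
To double-check that these values actually reach the JVM, the RM's command line (including -Xmx) can be dumped from /proc; a tiny sketch, where the pid argument would be the RM's pid:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Print the full command line of a running process (Linux only).
// Usage: java CmdLine <pid>
public class CmdLine {
    public static void main(String[] args) throws IOException {
        byte[] raw = Files.readAllBytes(Paths.get("/proc/" + args[0] + "/cmdline"));
        // /proc/<pid>/cmdline separates arguments with NUL bytes
        System.out.println(new String(raw).replace('\0', ' '));
    }
}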

 

The ulimits used by the RM process are:

core file size (blocks, -c) unlimited
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 128615
max locked memory (kbytes, -l) 64
max memory size (kbytes, -m) unlimited
open files (-n) 32768
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 65536
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
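
For completeness, the same limits and the live thread count can also be read from /proc for the running process; a small sketch along the same lines, again taking the RM's pid:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;

// Print the thread-related limits and current thread count of a process.
// Usage: java ProcLimits <pid>
public class ProcLimits {
    public static void main(String[] args) throws IOException {
        Files.readAllLines(Paths.get("/proc/" + args[0] + "/limits")).stream()
             .filter(l -> l.startsWith("Max processes") || l.startsWith("Max open files")
                       || l.startsWith("Max stack size"))
             .forEach(System.out::println);
        Files.readAllLines(Paths.get("/proc/" + args[0] + "/status")).stream()
             .filter(l -> l.startsWith("Threads:"))
             .forEach(System.out::println);
    }
}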

---

From the RM log file:

2020-09-29 17:15:00,825 INFO scheduler.AbstractYarnScheduler (AbstractYarnScheduler.java:getMinimumAllocation(1367)) - Minimum allocation = <memory:1024, vCores:1>
2020-09-29 17:15:00,825 INFO scheduler.AbstractYarnScheduler (AbstractYarnScheduler.java:getMaximumAllocation(1379)) - Maximum allocation = <memory:24576, vCores:3>

 

No matter how much memory is assigned to the RM, it always fails with this Java OOM.

What would be a recommended Java memory configuration for the YARN components?