Support Questions


YARN went down on my cluster. I restarted the service but it didn't work. Does it have anything to do with housekeeping on the cluster?

Contributor

Error starting NodeManager

java.lang.NullPointerException
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recoverContainer(ContainerManagerImpl.java:289)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.recover(ContainerManagerImpl.java:252)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.ContainerManagerImpl.serviceInit(ContainerManagerImpl.java:235)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
        at org.apache.hadoop.service.CompositeService.serviceInit(CompositeService.java:107)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:250)
        at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:445)
        at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:492)
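For anyone hitting the same trace: the surrounding context can usually be pulled straight out of the NodeManager log. This is only a sketch; the path below is a typical HDP default and may differ on your distribution.

# Path is an assumption; adjust for your distribution
cd /var/log/hadoop-yarn/yarn
ls -lrt

# Show the lines around the NullPointerException for context
grep -n -B 5 -A 15 "NullPointerException" yarn-yarn-nodemanager-*.log | tail -50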

1 ACCEPTED SOLUTION

Master Mentor

@Pranshu Pranshu

Log in to the DataNode or NodeManager node:

cd /var/log/hadoop/hdfs
ls -lrt

Look at the most recent log, for example: hadoop-hdfs-datanode-nodename.log

This log will give you the details. It may be related to memory.


11 REPLIES

Master Mentor

@Pranshu Pranshu Need more information... What's in the log files?

Contributor

@Neeraj Sabharwal Thanks for your assistance. I found the above details in the log at the instant the warning was generated; after that the service went down. Apart from this, nothing more was in the log. Also, memory usage is just below the critical threshold: we set the critical limit at 90% and it is currently at 89.4%. That is why I asked whether it is related to a memory issue.
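A quick way to double-check whether the pressure is RAM or disk before going further (the paths and commands below are only an example of such a check):

# Disk usage per filesystem
df -h

# Largest directories under the Hadoop log root (same path as in the reply below)
du -sh /var/log/hadoop/* | sort -h

# RAM usage, in case the alert is about memory rather than disk
free -m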

Master Mentor

@Pranshu Pranshu

Log in to the DataNode or NodeManager node:

cd /var/log/hadoop/hdfs
ls -lrt

Look at the most recent log, for example: hadoop-hdfs-datanode-nodename.log

This log will give you the details. It may be related to memory.
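Putting those steps together, a one-pass check might look like this. It's a sketch only; the exact log file name depends on the host.

cd /var/log/hadoop/hdfs

# ls -rt lists the newest file last; scan its tail for errors
tail -n 200 "$(ls -rt | tail -1)" | grep -i -E "error|exception|memory"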

Contributor
@Neeraj Sabharwal

Got the logs. It seems to be related to a memory issue. Unfortunately, I don't have permission to delete it. As a workaround, can I create a soft link to move the data around? And if I create a soft link for any lib, will it move the present data, the upcoming data, or both?
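To be concrete, this is the kind of workaround I have in mind, where /data01 is a hypothetical larger volume:

# Stop the service that writes to this directory first
mv /var/log/hadoop/hdfs /data01/hadoop-logs/hdfs        # relocates the existing data
ln -s /data01/hadoop-logs/hdfs /var/log/hadoop/hdfs     # future writes follow the link

My understanding is that the mv relocates the present data and the symlink redirects all upcoming writes, so both would end up on the new volume. Please correct me if that is wrong.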

Master Mentor

@Pranshu Pranshu

Are you planning to delete logs? I don't suggest creating a symlink without knowing the exact details.

Are you planning to move HDFS data?

Contributor
@Neeraj Sabharwal

Yes, I was thinking of moving the data; deletion is outside the boundaries of my role.

Master Mentor

@Pranshu Pranshu Why do you want to move the data? Is the system running out of space?

Contributor
@Neeraj Sabharwal

Yes, the system is running out of space. Can you please suggest a better way than creating a soft link?
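For example, would compressing the older logs in place be acceptable instead? Something like this (the 7-day retention window is just an example):

# Compress rolled-over logs older than 7 days instead of moving them
find /var/log/hadoop/hdfs -name "*.log.*" -mtime +7 -exec gzip {} \;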

Master Mentor

Is it prod?