CDH 5.2: YARN: Error starting YARN NodeManagers

Explorer

 

Trying to start YARN when I get the following error on some of the nodes. Has anyone seen this before? (Not sure what caused this corruption, since YARN was running fine for a couple of days.)

If the expected files are missing, how do I recover to the prior state?

 

Error starting NodeManager
org.apache.hadoop.service.ServiceStateException: org.fusesource.leveldbjni.internal.NativeDB$DBException: Corruption: 3 missing files; e.g.: /tmp/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/000032.sst
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartRecoveryStore(NodeManager.java:152)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:190)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:445)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:492)
Caused by: org.fusesource.leveldbjni.internal.NativeDB$DBException: Corruption: 3 missing files; e.g.: /tmp/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/000032.sst
at org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218)
at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168)
at org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.initStorage(NMLeveldbStateStoreService.java:842)
at org.apache.hadoop.yarn.server.nodemanager.recovery.NMStateStoreService.serviceInit(NMStateStoreService.java:195)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
org.apache.hadoop.service.ServiceStateException: org.fusesource.leveldbjni.internal.NativeDB$DBException: Corruption: 3 missing files; e.g.: /tmp/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/000032.sst

1 ACCEPTED SOLUTION

Explorer

Fixed the issue by deleting /tmp/hadoop-yarn/yarn-nm-recovery. LevelDB never writes in place. It always appends to a log file.
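
A minimal sketch of that workaround on an affected node, including the backup step Wilfred asks for further down the thread. The tarball path and the restart command are assumptions; on a CM-managed cluster the NodeManager role is normally restarted from Cloudera Manager instead:

  # Keep a copy of the recovery store for diagnosis before removing it
  tar czf /root/yarn-recovery.tgz /tmp/hadoop-yarn/yarn-nm-recovery

  # Remove the corrupted LevelDB state store
  rm -rf /tmp/hadoop-yarn/yarn-nm-recovery

  # Restart the NodeManager (package-based installs; on CM-managed nodes restart
  # the NODEMANAGER role from the Cloudera Manager UI instead)
  service hadoop-yarn-nodemanager restart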


10 REPLIES

Explorer

Same issue for me too.

 

chmod: changing permissions of `/var/run/cloudera-scm-agent/process/3669-yarn-NODEMANAGER/container-executor.cfg': Operation not permitted
chmod: changing permissions of `/var/run/cloudera-scm-agent/process/3669-yarn-NODEMANAGER/topology.map': Operation not permitted
+ exec /usr/lib/hadoop-yarn/bin/yarn nodemanager

dlo
This is a totally different issue, as the error messages are different.

What version of Cloudera Manager are you using? This may be a problem with /var/run being a noexec mount by default on your OS, which CM works around in more recent versions.
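
A quick way to check that on the affected host (assuming standard Linux tooling; findmnt ships with util-linux, and /var/run may be a symlink to /run on newer distributions):

  # Show the mount options for /var/run and look for "noexec"
  findmnt -no OPTIONS /var/run

  # Fallback if findmnt is not available
  mount | grep -w '/var/run'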

Super Collaborator

Praveen,

 

This does not look like the NM recovery issue.

 

For this case, can you tell me when this happens? It sounds and looks like the agent trying to change the permissions during the distribution. The two files have special settings and, as dlo said in his update, it is most likely a noexec mount or a directory permission issue.

 

Wilfred

Explorer

Fixed the issue by deleting /tmp/hadoop-yarn/yarn-nm-recovery. LevelDB never writes in place. It always appends to a log file.

New Contributor

I have tried the solutions mentioned but am still getting the error. It's CDH 5.7. Please help me get it resolved.

 

Error starting NodeManager
org.apache.hadoop.service.ServiceStateException: EPERM: Operation not permitted
	at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:474)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:521)
Caused by: EPERM: Operation not permitted
	at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
	at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:230)
	at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:660)
	at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:452)
	at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:309)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartRecoveryStore(NodeManager.java:152)
	at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:195)
	at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
	... 2 more

 

Super Collaborator

Sidharth,

 

Please create a new thread for a new issue; re-using an old thread can lead to strange comments when people make assumptions based on irrelevant information.

 

For your issue: EPERM means that the OS is not allowing the NodeManager to create the recovery DB, and you have recovery turned on. Check the permissions on the recovery DB directory that you have configured.
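
A hedged sketch of that check. The path below is the default seen in this thread; substitute whatever yarn.nodemanager.recovery.dir is set to, and note that the yarn user and hadoop group are typical for CDH but are assumptions for your environment:

  # Inspect ownership and permissions on the recovery directory and its parent
  ls -ld /tmp/hadoop-yarn /tmp/hadoop-yarn/yarn-nm-recovery

  # If the recovery directory is owned by the wrong user, return it to the user
  # the NodeManager runs as (and check the parent directory as well)
  chown -R yarn:hadoop /tmp/hadoop-yarn/yarn-nm-recovery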

 

Wilfred

Super Collaborator

Hi Harsha,

 

This is a known issue with the NM and restart recovery turned on. We are not 100% sure how and why it happens yet and are looking for as much data as we can. Before you fix this, please make a copy of the whole directory and zip it up:

  tar czf yarn-recovery.tgz /tmp/hadoop-yarn

After you have done that, remove the directory and start the NodeManager again.

 

Can you also tell me how long the NM was up for and if you have a /tmp cleaner running on that host?
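
If you are not sure whether a /tmp cleaner is active, a couple of places to look; tmpwatch on RHEL/CentOS and systemd-tmpfiles on systemd hosts are the usual suspects, but both checks are assumptions about a typical setup:

  # tmpwatch-style cron job on RHEL/CentOS
  ls -l /etc/cron.daily/ | grep -i tmpwatch

  # systemd-based cleanup, where configured
  systemctl status systemd-tmpfiles-clean.timer 2>/dev/null
  grep -r '/tmp' /usr/lib/tmpfiles.d/ /etc/tmpfiles.d/ 2>/dev/null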

 

Thank you,

 

Wilfred

Explorer

Hi Wilfred,

Sorry for the late reply; I never got notified about movement on this thread.

I was able to resolve it at the time by deleting the /tmp/.../yarn-nm-state directory and restarting YARN.

But, to answer your question:

The NM was up for at least a week, and there may have been a /tmp cleaner, but only for large files.

 

Do you have any more info on why the issue occurs, and a timeline for when it could be fixed?

 

Thanks

Super Collaborator

We have made a configuration change in Cloudera Manager 5.2.1 which solves this issue. After upgrading, the files will be moved to a different area that is not affected by the /tmp cleaner.
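
To confirm the new location on a NodeManager host after upgrading, one option (assuming a CM-managed node, using the process-directory pattern seen in the chmod errors earlier in this thread) is to look up the standard recovery property in the generated yarn-site.xml:

  # Print the configured NM recovery directory from the newest NODEMANAGER process
  # config; after the CM 5.2.1 change it should no longer live under /tmp
  grep -A1 'yarn.nodemanager.recovery.dir' \
    "$(ls -dt /var/run/cloudera-scm-agent/process/*-yarn-NODEMANAGER | head -1)/yarn-site.xml"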

 

Wilfred