CDH 5.2: YARN: Error starting YARN NodeManagers
Labels: Apache Hadoop, Apache YARN
Created on 11-17-2014 11:28 AM - edited 09-16-2022 02:13 AM
Trying to start YARN, I get the following error on some of the nodes. Has anyone seen this before? (Not sure what caused the corruption, since YARN had been running fine for a couple of days.)
If the expected files are missing, how do I recover to the prior state?
Error starting NodeManager
org.apache.hadoop.service.ServiceStateException: org.fusesource.leveldbjni.internal.NativeDB$DBException: Corruption: 3 missing files; e.g.: /tmp/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/000032.sst
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartRecoveryStore(NodeManager.java:152)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:190)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:445)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:492)
Caused by: org.fusesource.leveldbjni.internal.NativeDB$DBException: Corruption: 3 missing files; e.g.: /tmp/hadoop-yarn/yarn-nm-recovery/yarn-nm-state/000032.sst
at org.fusesource.leveldbjni.internal.NativeDB.checkStatus(NativeDB.java:200)
at org.fusesource.leveldbjni.internal.NativeDB.open(NativeDB.java:218)
at org.fusesource.leveldbjni.JniDBFactory.open(JniDBFactory.java:168)
at org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.initStorage(NMLeveldbStateStoreService.java:842)
at org.apache.hadoop.yarn.server.nodemanager.recovery.NMStateStoreService.serviceInit(NMStateStoreService.java:195)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
Created 01-21-2015 04:03 PM
Fixed the issue by deleting /tmp/hadoop-yarn/yarn-nm-recovery. LevelDB never writes in place. It always appends to a log file.
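The fix above can be sketched as a small script. This is a minimal sketch with assumptions taken from this thread (the default state store under /tmp, and a backup destination chosen here for illustration); stop the NodeManager before running anything like it.

```shell
# Back up the NM recovery state store, then remove it so the NodeManager
# rebuilds it on the next start. state_dir/backup_tgz are caller-supplied.
reset_nm_recovery() {
  local state_dir="$1" backup_tgz="$2"
  # Keep a copy first so the corrupt LevelDB files can still be inspected.
  tar czf "$backup_tgz" -C "$(dirname "$state_dir")" "$(basename "$state_dir")" || return 1
  rm -rf "$state_dir"
}

# Example (hypothetical paths):
#   reset_nm_recovery /tmp/hadoop-yarn/yarn-nm-recovery /root/yarn-recovery.tgz
```

The backup step matters because, as Wilfred asks later in this thread, the corrupt directory is useful for diagnosing why the LevelDB files went missing.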
Created 11-18-2014 07:50 PM
Same issue for me too:
chmod: changing permissions of `/var/run/cloudera-scm-agent/process/3669-yarn-NODEMANAGER/container-executor.cfg': Operation not permitted
chmod: changing permissions of `/var/run/cloudera-scm-agent/process/3669-yarn-NODEMANAGER/topology.map': Operation not permitted
+ exec /usr/lib/hadoop-yarn/bin/yarn nodemanager
Created 11-19-2014 11:52 AM
What version of Cloudera Manager are you using? This may be a problem with /var/run being a noexec mount by default on your OS, which CM works around in more recent versions.
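One way to test the noexec theory is to inspect the mount options for /var/run. A hedged sketch, not a CM-specific diagnostic:

```shell
# has_noexec inspects a comma-separated mount-option string, such as the
# one printed by: findmnt -n -o OPTIONS /var/run
has_noexec() {
  case ",$1," in
    *,noexec,*) return 0 ;;  # noexec present among the options
    *)          return 1 ;;
  esac
}

# Example:
#   has_noexec "$(findmnt -n -o OPTIONS /var/run)" && echo "/var/run is noexec"
```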
Created 11-20-2014 05:03 PM
Praveen,
This does not look like the NM recovery issue.
Can you tell me when this happens? It sounds and looks like the agent trying to change permissions during distribution. The two files have special settings and, as dlo said in his update, it is most likely a noexec mount or a directory permission issue.
Wilfred
Created on 05-24-2016 11:48 PM - last edited on 05-25-2016 05:14 AM by cjervis
I have tried the solutions mentioned but am still getting the error. This is on CDH 5.7. Please help me get it resolved.
Error starting NodeManager
org.apache.hadoop.service.ServiceStateException: EPERM: Operation not permitted
at org.apache.hadoop.service.ServiceStateException.convert(ServiceStateException.java:59)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:172)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:474)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:521)
Caused by: EPERM: Operation not permitted
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmodImpl(Native Method)
at org.apache.hadoop.io.nativeio.NativeIO$POSIX.chmod(NativeIO.java:230)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:660)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:452)
at org.apache.hadoop.fs.FilterFileSystem.mkdirs(FilterFileSystem.java:309)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartRecoveryStore(NodeManager.java:152)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:195)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
... 2 more
Created 05-25-2016 12:34 AM
Sidharth,
Please create a new thread for a new issue; re-using an old thread can lead to confusing replies when people make assumptions based on irrelevant information.
For your issue: EPERM means the OS is not allowing you to create the NM recovery DB, and you have recovery turned on. Check the access rights on the recovery DB directory you have configured.
Wilfred
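The check Wilfred suggests can be sketched as follows; the example path is a hypothetical value of yarn.nodemanager.recovery.dir, and the script should be run as the user the NodeManager runs as.

```shell
# Verify the configured recovery directory exists and is writable by
# the current user; print a diagnostic either way.
check_recovery_dir() {
  local dir="$1"
  [ -d "$dir" ] || { echo "missing: $dir"; return 1; }
  [ -w "$dir" ] || { echo "not writable: $dir"; return 1; }
  echo "ok: $dir"
}

# Example (hypothetical path):
#   check_recovery_dir /var/lib/hadoop-yarn/yarn-nm-recovery
```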
Created 11-20-2014 04:59 PM
Hi Harsha,
This is a known issue with the NM and restart recovery turned on. We are not yet 100% sure how and why it happens and are collecting as much data as we can. Before we fix this, please make a copy of the whole directory and zip it up:
tar czf yarn-recovery.tgz /tmp/hadoop-yarn
After you have done that remove the directory and start it again.
Can you also tell me how long the NM was up for and if you have a /tmp cleaner running on that host?
Thank you,
Wilfred
Created 02-18-2015 11:04 AM
Hi Wilfred,
Sorry for the late reply; I never got notified about movement on this thread.
I was able to resolve it then by deleting the /tmp/.../yarn-nm-state dir and restarting YARN.
But, to answer your question:
The NM was up for at least a week, and there may have been a /tmp cleaner for large files only.
Do you have any more info as to why the issue occurs and timeline when this issue could be fixed?
Thanks
Created 02-19-2015 10:33 PM
We made a configuration change in Cloudera Manager 5.2.1 which solves this issue. After upgrading, the files will be moved to a different location that is not affected by the tmp cleaner.
Wilfred
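For clusters that cannot upgrade right away, the same effect can likely be achieved by relocating the state store manually. yarn.nodemanager.recovery.enabled and yarn.nodemanager.recovery.dir are the standard YARN properties for NM recovery; the path below is only an example, not the location CM 5.2.1 uses.

```xml
<!-- yarn-site.xml sketch: keep the NM recovery store out of /tmp so a
     tmp cleaner cannot delete its LevelDB files. The path is an example. -->
<property>
  <name>yarn.nodemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.nodemanager.recovery.dir</name>
  <value>/var/lib/hadoop-yarn/yarn-nm-recovery</value>
</property>
```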
