Problem starting a NodeManager
Labels:
- Apache Hadoop
- Apache YARN
Created on 05-19-2015 11:30 AM - edited 09-16-2022 02:29 AM
Hi Folks,
I have a problem starting a NodeManager (actually two); the third one starts without any problem.
The failing NodeManagers log this fatal error:
Error starting NodeManager
java.lang.UnsatisfiedLinkError: Could not load library. Reasons: [no leveldbjni64-1.8 in java.library.path, no leveldbjni-1.8 in java.library.path, no leveldbjni in java.library.path, /tmp/libleveldbjni-64-1-1006449310407885041.8: /tmp/libleveldbjni-64-1-1006449310407885041.8: failed to map segment from shared object: Operation not permitted]
at org.fusesource.hawtjni.runtime.Library.doLoad(Library.java:182)
at org.fusesource.hawtjni.runtime.Library.load(Library.java:140)
at org.fusesource.leveldbjni.JniDBFactory.<clinit>(JniDBFactory.java:48)
at org.apache.hadoop.yarn.server.nodemanager.recovery.NMLeveldbStateStoreService.initStorage(NMLeveldbStateStoreService.java:864)
at org.apache.hadoop.yarn.server.nodemanager.recovery.NMStateStoreService.serviceInit(NMStateStoreService.java:195)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartRecoveryStore(NodeManager.java:155)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.serviceInit(NodeManager.java:193)
at org.apache.hadoop.service.AbstractService.init(AbstractService.java:163)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:462)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:509)
We're using parcels and tried to verify permissions but don't see any problem. Even the process run by the agent detects the right version and path for the parcel, and the host inspector is not reporting any problem.
Any help will be more than appreciated.
Best regards,
Mehdi
Created 05-19-2015 04:49 PM
Gautam Gopalakrishnan
The "failed to map segment from shared object: Operation not permitted" part of the error is the telltale sign: it usually means /tmp is mounted with the noexec option. The NodeManager extracts the leveldbjni native library into /tmp and then tries to load it, which a noexec filesystem forbids. Please check the mount options for /tmp on the failing hosts.
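For reference, a quick way to check whether /tmp is mounted with noexec (a sketch; findmnt ships with util-linux, and the exact output varies by distro):

    # print just the mount options for /tmp
    findmnt -no OPTIONS /tmp
    # or, on systems without findmnt
    mount | grep ' /tmp '

If the options include noexec, the JVM cannot execute the native library that leveldbjni extracts there.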
Created 05-20-2015 02:11 AM
Thank you very much, Gautam, for your quick answer.
Indeed, I can see the noexec option set on /tmp on the nodes having the problem.
We're fixing that and will let you know how it goes after removing the option.
Created 05-20-2015 03:22 AM
Now I can confirm that removing noexec has solved the problem.
By the way, we removed it by running, as root: mount -o remount,exec /tmp
Thank you again, Gautam.
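Note that a remount only lasts until the next reboot. To make the fix persistent, the noexec flag also has to be dropped from the /tmp entry in /etc/fstab; a sketch, assuming an ordinary ext4 /tmp (your device name and remaining options will differ):

    # remount for the running system, as in the post above
    mount -o remount,exec /tmp
    # then edit /etc/fstab so the /tmp line no longer carries noexec, e.g.:
    #   /dev/mapper/vg_sys-tmp  /tmp  ext4  defaults,nosuid,nodev  0 2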
Created 12-10-2015 01:33 PM
Thanks for this reply, super helpful!
I just started getting this same issue on only one of many NodeManager hosts. All the other NodeManagers (around 60) have the noexec option set on the /tmp mount and work fine, but this one didn't. Why does removing noexec sometimes fix it while other hosts work fine with it set? What interaction does the noexec option have with Java native library loading?
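To sketch the interaction: as the stack trace in the original post shows, leveldbjni (via hawtjni) first looks for the library on java.library.path; only when that fails does it extract the bundled .so into java.io.tmpdir (usually /tmp) and load it, and that load mmap()s the file with PROT_EXEC, which the kernel refuses on a noexec mount. That would explain hosts that keep working despite noexec: they either find the library on java.library.path or have java.io.tmpdir pointing somewhere executable. A minimal reproduction outside Hadoop (a sketch; the libz path is illustrative and assumes Python is installed):

    findmnt -no OPTIONS /tmp                          # should show noexec
    cp /usr/lib64/libz.so.1 /tmp/noexec-test.so       # any shared library will do
    python -c "import ctypes; ctypes.CDLL('/tmp/noexec-test.so')"
    # -> OSError: ... failed to map segment from shared object: Operation not permitted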
Created 05-09-2016 10:22 AM
Hi Gautam,
Is there any other way to fix this? I cannot enable exec on /tmp in my environment for security reasons.
Help much appreciated 🙂
Created 05-10-2016 12:11 AM
Vik11, you can work around this by setting Java's temp dir to something other than /tmp. This solution has worked for a customer in the past, YMMV. Of course, the new temp directory must not be on a mount with noexec set.
In the YARN configuration, append '-Djava.io.tmpdir=/path/to/other/temp/dir' to the following properties:
1. ApplicationMaster Java Opts Base
2. Java Configuration Options for JobHistory Server
3. Java Configuration Options for NodeManager
4. Java Configuration Options for ResourceManager
For jobs:
Cloudera Manager --> YARN --> search for: Gateway Client Environment Advanced Configuration Snippet (Safety Valve) for hadoop-env.sh and add this:
HADOOP_CLIENT_OPTS="-Djava.io.tmpdir=/path/to/other/temp/dir"
Now redeploy YARN client configuration.
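After redeploying, one way to verify the setting took effect (a sketch; the directory is the same placeholder as above):

    # confirm the running NodeManager JVM carries the new flag
    ps -ef | grep 'org.apache.hadoop.yarn.server.nodemanager.NodeManager' | grep -o 'java.io.tmpdir=[^ ]*'
    # and check that the new directory's filesystem is not mounted noexec
    findmnt -no OPTIONS -T /path/to/other/temp/dir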
Gautam Gopalakrishnan
Created 05-10-2016 02:07 AM
Great, thanks Gautam 🙂
So apart from YARN, are there any other CDH services (Hive, Pig, Flume, Spark, Impala, Hue, Oozie, HBase) that also require exec permission on /tmp?
Will having noexec on /tmp cause any problems for cluster functioning?
What would be your recommendation here?
Thanks,
