
HDFS service failed to start due to ulimit error

Expert Contributor

Kindly find the attached file and the error description below.

resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ; /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'' returned 1. -su: line 0: ulimit: core file size: cannot modify limit: Operation not permitted starting namenode,
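For reference, the ulimit step from that command can be reproduced on its own, outside of Ambari (a rough check, using plain sudo in place of the ambari-sudo.sh wrapper):

# run only the ulimit step Ambari runs, without starting the NameNode
sudo su hdfs -l -s /bin/bash -c 'ulimit -c unlimited; ulimit -c'

If the first ulimit call is being refused, this prints the same "Operation not permitted" message and then the currently effective core file size limit.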

5 REPLIES

Cloudera Employee (accepted solution)

@hardik desai One common cause is that the administrator has set a hard limit on the core file size, and the start script is trying to raise it to unlimited (above the administrator's limit), which is not permitted.

A few things to check:

/etc/security/limits.conf

ulimit -a (as root or equivalent account)

sudo -i -u hdfs ulimit -a (as root or equivalent account)
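If the core file size is capped in limits.conf, a minimal sketch of raising it for the hdfs user might look like the following (the exact entries are an assumption, follow your own security policy; limits.conf changes only apply to new login sessions):

# /etc/security/limits.conf -- illustrative entries only
hdfs    soft    core    unlimited
hdfs    hard    core    unlimited

After the change, re-check with sudo -i -u hdfs ulimit -c before retrying the NameNode start.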

See if that helps.

Expert Contributor

@sbathe, thanks for the reply. I have tried it, but no luck.

Kindly find the attached file log-ulimit.txt

Cloudera Employee

@hardik desai Are both logs from the same node? The earlier logs all mention slave0, whereas your current ulimit output is from slave3. Either way, the reason the NameNode is not starting is not the ulimit failure but:

"""

2017-01-17 13:25:13,137 INFO namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(362)) - Recovering unfinalized segments in /volumes/disk1/hadoop/hdfs/namenode/current 2017-01-17 13:25:13,300 WARN namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(690)) - Encountered exception loading fsimage java.io.IOException: Gap in transactions. Expected to be able to read up until at least txid 925857 but unable to find any edit logs containing txid 914240 at org.apache.hadoop.hdfs.server.namenode.FSEditLog.checkForGaps(FSEditLog.java:1577) at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1535) at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:652) at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:294) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:983) at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:688) at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:662) at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:726) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:951) at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:935) at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1641) at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1707)"""

So it looks like you have some NameNode metadata recovery to do.
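A rough sketch of one common starting point, assuming you have a usable copy of the NameNode metadata to fall back on (recovery mode can discard edits, so treat this as illustrative, not a prescribed fix):

# back up the metadata directory first (path taken from the log above)
cp -a /volumes/disk1/hadoop/hdfs/namenode /volumes/disk1/hadoop/hdfs/namenode.bak
# start the NameNode in recovery mode as the hdfs user and answer the prompts
sudo -u hdfs hdfs namenode -recover

If a recent checkpoint from a secondary or standby NameNode is available, restoring from that is often the safer route.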


New Contributor

@sbathe sir, I am also having the same error ("ulimit: core file size: cannot modify limit: Operation not permitted"), but I have tried everything and still cannot get it to start. Can you please help me out with this?