Member since: 02-10-2019
Posts: 5
Kudos Received: 0
Solutions: 1

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4872 | 01-17-2017 09:02 AM |
01-17-2017 12:49 PM
@hardik desai Check out https://community.hortonworks.com/questions/6720/namenode-txid-error.html for your recovery options.
01-17-2017 12:46 PM
@hardik desai Are both logs from the same node? The earlier logs all mention slave0, whereas your current ulimit output is from slave3. Either way, the reason the NameNode is not starting is not the ulimit failure but this:

```
2017-01-17 13:25:13,137 INFO namenode.FileJournalManager (FileJournalManager.java:recoverUnfinalizedSegments(362)) - Recovering unfinalized segments in /volumes/disk1/hadoop/hdfs/namenode/current
2017-01-17 13:25:13,300 WARN namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(690)) - Encountered exception loading fsimage
java.io.IOException: Gap in transactions. Expected to be able to read up until at least txid 925857 but unable to find any edit logs containing txid 914240
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.checkForGaps(FSEditLog.java:1577)
at org.apache.hadoop.hdfs.server.namenode.FSEditLog.selectInputStreams(FSEditLog.java:1535)
at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:652)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:294)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:983)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:688)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:662)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:726)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:951)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:935)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1641)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1707)
```

So it looks like you have some recovery to do.
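A minimal sketch of one recovery path, assuming the fsimage itself is intact and the edit-log gap cannot be filled from another copy of the metadata. This is not a definitive procedure; verify against the HDFS documentation for your version before running, since recovery mode can discard transactions (the metadata path below is taken from your log):

```shell
# Back up the NameNode metadata directory first -- recovery mode may
# discard transactions, and you want a way back.
cp -a /volumes/disk1/hadoop/hdfs/namenode /volumes/disk1/hadoop/hdfs/namenode.bak

# Run the NameNode in recovery mode as the hdfs user and answer its
# interactive prompts (it will offer to skip the missing transactions).
# sudo -i -u hdfs hdfs namenode -recover

# If recovery succeeds, restart the NameNode via Ambari, or on Hadoop 2.x:
# sudo -i -u hdfs /usr/hdp/current/hadoop-hdfs-namenode/../hadoop/sbin/hadoop-daemon.sh start namenode
```

If another NameNode, a SecondaryNameNode checkpoint, or a JournalNode has an intact copy of the missing edits, copying those in is safer than skipping transactions.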
01-17-2017 09:02 AM
@hardik desai One common cause is that the administrator has set a hard limit on the core file size, and the startup script tries to raise it to unlimited (beyond what the administrator allows), which is not permitted. A few things to check: `/etc/security/limits.conf` for explicit core entries; `ulimit -a` as root (or an equivalent account); and `sudo -i -u hdfs ulimit -a` to see the limits the hdfs user actually inherits. See if that helps.
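A quick sketch of those checks, assuming the service account is named `hdfs` (adjust for your cluster; the last step needs root):

```shell
# 1) Has the admin set any explicit 'core' limits?
grep -E 'core' /etc/security/limits.conf 2>/dev/null || true

# 2) Core-size limit in the current shell ('0' means core dumps disabled,
#    'unlimited' means no cap):
ulimit -c

# 3) Core-size limit the hdfs user actually inherits (run as root):
# sudo -i -u hdfs sh -c 'ulimit -c'
```

If step 1 shows a hard `core` limit lower than what the startup script requests, the `ulimit` call in the NameNode start script will fail.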
01-17-2017 05:29 AM
@Sia Can you elaborate on what you mean by "user account in hip platform"? As it stands, I see no "official" Ansible hooks available from the upstream community, but there are some interesting and somewhat actively maintained efforts that may be of help to you: https://github.com/dobachi/ansible-hadoop https://github.com/rackerlabs/ansible-hadoop