
Unable to start NameNode on HDP 2.4.2.0

New Contributor

I am not able to start the NameNode after installation. I am getting the error below:

[Image: NameNode Start]
stderr: /var/lib/ambari-agent/data/errors-1775.txt

  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 291, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'' returned 1. starting namenode, logging to /var/log/hadoop/hdfs/hadoop-hdfs-namenode-thdppca0.out

NameNode log file:

ulimit -a for user hdfs
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 128331
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited

------------------------------------------------------

IP: inet addr: 10.47.84.5

-------------------------------------------------------

I need help resolving this issue. Let me know if more details need to be shared. Thanks in advance.

5 REPLIES

Rising Star

@Karthick T Please paste the logs (/var/log/hadoop/hdfs/hadoop-hdfs-namenode-thdppca0.out and the corresponding .log file).
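If it helps, a minimal way to grab the relevant portion of those logs on the NameNode host (the .out path is taken from the stderr above; the matching .log filename is an assumption based on the usual HDP naming, so adjust the hostname suffix if yours differs):

    # Tail the NameNode startup output and the main log
    # (.out path comes from the error above; the .log name is assumed to follow the same pattern)
    tail -n 200 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-thdppca0.out
    tail -n 200 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-thdppca0.log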

New Contributor

2017-02-14 06:53:48,681 WARN namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(690)) - Encountered exception loading fsimage
java.io.FileNotFoundException: /xxxxxx/hadoop/hdfs/namenode/current/VERSION (Permission denied)
        at java.io.RandomAccessFile.open0(Native Method)
        at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
        at org.apache.hadoop.hdfs.server.common.StorageInfo.readPropertiesFile(StorageInfo.java:245)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.readProperties(NNStorage.java:627)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:339)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:215)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:983)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:688)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:662)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:726)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:951)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:935)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1641)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1707)
2017-02-14 06:53:48,685 INFO mortbay.log (Slf4jLog.java:info(67)) - Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@xxxxxx.xxxx.net:50070
2017-02-14 06:53:48,786 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(211)) - Stopping NameNode metrics system...
2017-02-14 06:53:48,787 INFO impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(141)) - timeline thread interrupted.
2017-02-14 06:53:48,788 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(217)) - NameNode metrics system stopped.
2017-02-14 06:53:48,788 INFO impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(605)) - NameNode metrics system shutdown complete.
2017-02-14 06:53:48,789 ERROR namenode.NameNode (NameNode.java:main(1712)) - Failed to start namenode.
java.io.FileNotFoundException: /xxxxxx/hadoop/hdfs/namenode/current/VERSION (Permission denied)
        at java.io.RandomAccessFile.open0(Native Method)
        at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
        at org.apache.hadoop.hdfs.server.common.StorageInfo.readPropertiesFile(StorageInfo.java:245)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.readProperties(NNStorage.java:627)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:339)

===========================================================================

Should I change the permissions of /xxxxxxx/hadoop/hdfs/namenode/current/VERSION?

chown -R hdfs:hdfs /xxxxxxx/hadoop/hdfs/namenode/current/VERSION
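For reference, a minimal sketch of checking and then fixing the ownership, keeping the masked /xxxxxxx placeholder from the post. Changing the whole metadata directory, rather than only the VERSION file, is usually what is needed, since every file under current/ must be readable and writable by the user that starts the NameNode:

    # Inspect current ownership of the NameNode metadata directory (masked path from the post)
    ls -l /xxxxxxx/hadoop/hdfs/namenode/current/

    # Hand the whole namenode directory, not just VERSION, to the hdfs user
    # (hdfs:hadoop is the usual HDP ownership; hdfs:hdfs, as suggested above, also works if that group exists)
    chown -R hdfs:hdfs /xxxxxxx/hadoop/hdfs/namenode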

thank you

New Contributor

I found the same directory and files created in two places: /var/hadoop/hdfs/namenode/current/ and also under hadoop/hdfs/namenode/current/. All are owned by root:root:

-rw-r--r-- 1 root root 201 Feb 13 10:12 VERSION
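A quick way to confirm which of the two directories the NameNode actually uses is to read dfs.namenode.name.dir from the active configuration and then check ownership of only that path (a hedged sketch; the property name assumes Hadoop 2.x as shipped with HDP 2.4, and the path in the second command is a placeholder for whatever the first command prints):

    # Print the directory (or directories) the NameNode is configured to use
    su - hdfs -c 'hdfs getconf -confKey dfs.namenode.name.dir'

    # Then check ownership of that exact path (placeholder below; substitute the value printed above)
    ls -ld /path/printed/above /path/printed/above/current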

Rising Star

Change the owner of /trvapps/hadoop/hdfs/namenode/current/VERSION to the user that starts the NameNode.

Rising Star
@Karthick T

It seems the user starting the NameNode (usually hdfs) does not have ownership of the VERSION file "/trvapps/hadoop/hdfs/namenode/current/VERSION". Try again after changing the ownership.
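In concrete terms, a minimal sketch of that fix (the chown target comes from the VERSION path quoted above; hdfs:hadoop is the usual HDP ownership, while earlier in the thread hdfs:hdfs is suggested; the restart command is the same one shown in the stderr at the top of this thread):

    # Run as root on the NameNode host: give the metadata directory back to the hdfs user
    chown -R hdfs:hadoop /trvapps/hadoop/hdfs/namenode

    # Restart the NameNode from Ambari (HDFS > NameNode > Restart), or manually with the
    # command Ambari was running (taken from the stderr above)
    su - hdfs -c '/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'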