Support Questions


Permission denied error during NameNode start

Expert Contributor

Today I noticed that the NameNode was in red. I tried to restart the server, but it failed with the errors below. Following suggestions in various threads, I formatted the NameNode using the command "hadoop namenode -format" while logged in as 'root'. When I saw this error, I reformatted the NameNode as the 'hdfs' user, but I still see the following errors. Can anyone help me understand what is going wrong?

2016-03-17 17:27:02,305 WARN namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(683)) - Encountered exception loading fsimage java.io.FileNotFoundException: /hadoop/hdfs/namenode/current/VERSION (Permission denied)

3 REPLIES

Master Guru

Can you have a look at that folder? You pointed the NameNode directory at /hadoop, so either it doesn't exist, the hdfs user doesn't have access to it, or it got corrupted.
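
For example, a quick sketch of what to check (assuming the default HDP paths from your error message):

# Does the directory exist, and who owns it?
ls -ld /hadoop/hdfs/namenode /hadoop/hdfs/namenode/current

# Can the hdfs user actually read the VERSION file the NameNode complains about?
sudo -u hdfs cat /hadoop/hdfs/namenode/current/VERSION

If the second command fails with "Permission denied", the ownership or permissions on one of those directories are wrong.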

Also, formatting the NameNode is dangerous, since it deletes all files in the cluster. Jobs may stop working because their libraries are missing, etc. (You can find what needs to be set up in the manual HDP installation guide.)

Expert Contributor

OK, I have fixed this issue. I had to change the owner of the folder /hadoop/hdfs/namenode/ to the hdfs user and hdfs group. I executed the following and everything is back to normal:

chown -R hdfs:hdfs /hadoop/hdfs/namenode
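
To double-check the result afterwards (a quick sanity check, assuming the same default path):

# Owner and group should now show hdfs:hdfs all the way down
ls -ld /hadoop/hdfs/namenode /hadoop/hdfs/namenode/current
ls -l /hadoop/hdfs/namenode/current/VERSION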

Expert Contributor

I am getting the same issue.

I tried the above command, but I am still getting the error.

I was trying to install Ambari 2.7.3.0 and HDP 3.1.0.

Logs:

2019-03-13 18:08:02,234 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-03-13 18:08:02,248 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2019-03-13 18:08:02,350 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-03-13 18:08:02,355 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2019-03-13 18:08:02,355 - Group['hdfs'] {}
2019-03-13 18:08:02,356 - Group['hadoop'] {}
2019-03-13 18:08:02,356 - Group['users'] {}
2019-03-13 18:08:02,357 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-03-13 18:08:02,357 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-03-13 18:08:02,358 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2019-03-13 18:08:02,358 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2019-03-13 18:08:02,359 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-03-13 18:08:02,360 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-03-13 18:08:02,363 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-03-13 18:08:02,363 - Group['hdfs'] {}
2019-03-13 18:08:02,363 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2019-03-13 18:08:02,364 - FS Type: HDFS
2019-03-13 18:08:02,364 - Directory['/etc/hadoop'] {'mode': 0755}
2019-03-13 18:08:02,375 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-03-13 18:08:02,375 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2019-03-13 18:08:02,388 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2019-03-13 18:08:02,392 - Skipping Execute[('setenforce', '0')] due to not_if
2019-03-13 18:08:02,392 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2019-03-13 18:08:02,393 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2019-03-13 18:08:02,393 - Changing owner for /var/run/hadoop from 1014 to root
2019-03-13 18:08:02,394 - Changing group for /var/run/hadoop from 1001 to root
2019-03-13 18:08:02,394 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'}
2019-03-13 18:08:02,394 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2019-03-13 18:08:02,397 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2019-03-13 18:08:02,398 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2019-03-13 18:08:02,401 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2019-03-13 18:08:02,409 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2019-03-13 18:08:02,409 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2019-03-13 18:08:02,410 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2019-03-13 18:08:02,412 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2019-03-13 18:08:02,415 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2019-03-13 18:08:02,417 - Skipping unlimited key JCE policy check and setup since it is not required
2019-03-13 18:08:02,612 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2019-03-13 18:08:02,613 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2019-03-13 18:08:02,628 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2019-03-13 18:08:02,640 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2019-03-13 18:08:02,643 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2019-03-13 18:08:02,644 - XmlConfig['hadoop-policy.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2019-03-13 18:08:02,650 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-policy.xml
2019-03-13 18:08:02,650 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-13 18:08:02,656 - XmlConfig['ssl-client.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2019-03-13 18:08:02,662 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/ssl-client.xml
2019-03-13 18:08:02,662 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-13 18:08:02,666 - Directory['/usr/hdp/3.1.0.0-78/hadoop/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2019-03-13 18:08:02,666 - XmlConfig['ssl-client.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf/secure', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2019-03-13 18:08:02,672 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/secure/ssl-client.xml
2019-03-13 18:08:02,672 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-13 18:08:02,676 - XmlConfig['ssl-server.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'hdfs', 'configurations': ...}
2019-03-13 18:08:02,682 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/ssl-server.xml
2019-03-13 18:08:02,682 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-13 18:08:02,686 - XmlConfig['hdfs-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'dfs.datanode.failed.volumes.tolerated': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2019-03-13 18:08:02,692 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/hdfs-site.xml
2019-03-13 18:08:02,692 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-13 18:08:02,722 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/3.1.0.0-78/hadoop/conf', 'xml_include_file': None, 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2019-03-13 18:08:02,728 - Generating config: /usr/hdp/3.1.0.0-78/hadoop/conf/core-site.xml
2019-03-13 18:08:02,728 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2019-03-13 18:08:02,746 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2019-03-13 18:08:02,749 - Directory['/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2019-03-13 18:08:02,749 - Changing group for /hadoop/hdfs/namenode from 1002 to hadoop
2019-03-13 18:08:02,750 - Directory['/data/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2019-03-13 18:08:02,757 - Directory['/usr/lib/ambari-logsearch-logfeeder/conf'] {'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2019-03-13 18:08:02,757 - Generate Log Feeder config file: /usr/lib/ambari-logsearch-logfeeder/conf/input.config-hdfs.json
2019-03-13 18:08:02,757 - File['/usr/lib/ambari-logsearch-logfeeder/conf/input.config-hdfs.json'] {'content': Template('input.config-hdfs.json.j2'), 'mode': 0644}
2019-03-13 18:08:02,757 - Skipping setting up secure ZNode ACL for HFDS as it's supported only for NameNode HA mode.
2019-03-13 18:08:02,760 - Called service start with upgrade_type: None
2019-03-13 18:08:02,760 - Ranger Hdfs plugin is not enabled
2019-03-13 18:08:02,761 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2019-03-13 18:08:02,761 - /hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2019-03-13 18:08:02,761 - /data/hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2019-03-13 18:08:02,761 - Directory['/hadoop/hdfs/namenode/namenode-formatted/'] {'create_parents': True}
2019-03-13 18:08:02,762 - Directory['/data/hadoop/hdfs/namenode/namenode-formatted/'] {'create_parents': True}
2019-03-13 18:08:02,762 - Options for start command are: 
2019-03-13 18:08:02,762 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2019-03-13 18:08:02,762 - Changing owner for /var/run/hadoop from 0 to hdfs
2019-03-13 18:08:02,762 - Changing group for /var/run/hadoop from 0 to hadoop
2019-03-13 18:08:02,762 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2019-03-13 18:08:02,763 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2019-03-13 18:08:02,763 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2019-03-13 18:08:02,771 - Deleting File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid']
2019-03-13 18:08:02,771 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/3.1.0.0-78/hadoop/bin/hdfs --config /usr/hdp/3.1.0.0-78/hadoop/conf --daemon start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/3.1.0.0-78/hadoop/libexec'}, 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2019-03-13 18:08:04,854 - Execute['find /var/log/hadoop/hdfs -maxdepth 1 -type f -name '*' -exec echo '==> {} <==' \; -exec tail -n 40 {} \;'] {'logoutput': True, 'ignore_failures': True, 'user': 'hdfs'}
==> /var/log/hadoop/hdfs/gc.log-201903131801 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16337780k(5591480k free), swap 15624188k(15624188k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2019-03-13T18:01:42.815+0530: 0.636: [GC (Allocation Failure) 2019-03-13T18:01:42.815+0530: 0.636: [ParNew: 104960K->8510K(118016K), 0.0072988 secs] 104960K->8510K(1035520K), 0.0073751 secs] [Times: user=0.02 sys=0.00, real=0.01 secs] 
2019-03-13T18:01:43.416+0530: 1.237: [GC (Allocation Failure) 2019-03-13T18:01:43.416+0530: 1.237: [ParNew: 113470K->10636K(118016K), 0.0296314 secs] 113470K->13333K(1035520K), 0.0296978 secs] [Times: user=0.08 sys=0.00, real=0.03 secs] 
2019-03-13T18:01:43.448+0530: 1.269: [GC (CMS Initial Mark) [1 CMS-initial-mark: 2697K(917504K)] 16041K(1035520K), 0.0012481 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2019-03-13T18:01:43.449+0530: 1.270: [CMS-concurrent-mark-start]
2019-03-13T18:01:43.451+0530: 1.272: [CMS-concurrent-mark: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 
2019-03-13T18:01:43.451+0530: 1.272: [CMS-concurrent-preclean-start]
2019-03-13T18:01:43.453+0530: 1.273: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 
2019-03-13T18:01:40.037+0530: 2.235: [CMS-concurrent-abortable-preclean-start]
 CMS: abort preclean due to time 2019-03-13T18:01:45.154+0530: 7.352: [CMS-concurrent-abortable-preclean: 1.299/5.117 secs] [Times: user=2.38 sys=0.02, real=5.11 secs] 
2019-03-13T18:01:45.155+0530: 7.353: [GC (CMS Final Remark) [YG occupancy: 48001 K (184320 K)]2019-03-13T18:01:45.155+0530: 7.353: [Rescan (parallel) , 0.0028621 secs]2019-03-13T18:01:45.158+0530: 7.356: [weak refs processing, 0.0000235 secs]2019-03-13T18:01:45.158+0530: 7.356: [class unloading, 0.0026645 secs]2019-03-13T18:01:45.160+0530: 7.358: [scrub symbol table, 0.0028774 secs]2019-03-13T18:01:45.163+0530: 7.361: [scrub string table, 0.0005686 secs][1 CMS-remark: 3414K(843776K)] 51415K(1028096K), 0.0094373 secs] [Times: user=0.02 sys=0.00, real=0.01 secs] 
2019-03-13T18:01:45.164+0530: 7.362: [CMS-concurrent-sweep-start]
2019-03-13T18:01:45.166+0530: 7.364: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2019-03-13T18:01:45.166+0530: 7.364: [CMS-concurrent-reset-start]
2019-03-13T18:01:45.167+0530: 7.365: [CMS-concurrent-reset: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] 
Heap
 par new generation   total 118016K, used 85405K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  71% used [0x00000000c0000000, 0x00000000c49046a8, 0x00000000c6680000)
  from space 13056K,  81% used [0x00000000c6680000, 0x00000000c70e3040, 0x00000000c7340000)
  to   space 13056K,   0% used [0x00000000c7340000, 0x00000000c7340000, 0x00000000c8000000)
 concurrent mark-sweep generation total 917504K, used 2687K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 25931K, capacity 26396K, committed 26660K, reserved 1073152K
  class space    used 3050K, capacity 3224K, committed 3268K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-D-9033.kpit.com.log <==
    at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:185)
    at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1112)
    at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:141)
    at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:119)
    at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:134)
    at org.eclipse.jetty.server.Server.handle(Server.java:539)
    at org.eclipse.jetty.server.HttpChannel.handle(HttpChannel.java:333)
    at org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:251)
    at org.eclipse.jetty.io.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:283)
    at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:108)
    at org.eclipse.jetty.io.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
    at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:671)
    at org.eclipse.jetty.util.thread.QueuedThreadPool$2.run(QueuedThreadPool.java:589)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException: Storage not yet initialized
    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
    at org.apache.hadoop.hdfs.server.datanode.DataNode.getVolumeInfo(DataNode.java:3136)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.Trampoline.invoke(MethodUtil.java:71)
    at sun.reflect.GeneratedMethodAccessor17.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at sun.reflect.misc.MethodUtil.invoke(MethodUtil.java:275)
    at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:193)
    at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:175)
    at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:117)
    at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:54)
    at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:237)
    at com.sun.jmx.mbeanserver.PerInterface.getAttribute(PerInterface.java:83)
    at com.sun.jmx.mbeanserver.MBeanSupport.getAttribute(MBeanSupport.java:206)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.getAttribute(DefaultMBeanServerInterceptor.java:647)
    ... 35 more
2019-03-13 18:07:57,425 INFO  ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: D-9033.kpit.com/10.10.167.157:8020. Already tried 46 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-13 18:07:58,426 INFO  ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: D-9033.kpit.com/10.10.167.157:8020. Already tried 47 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-13 18:07:59,426 INFO  ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: D-9033.kpit.com/10.10.167.157:8020. Already tried 48 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-13 18:08:00,427 INFO  ipc.Client (Client.java:handleConnectionFailure(942)) - Retrying connect to server: D-9033.kpit.com/10.10.167.157:8020. Already tried 49 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=50, sleepTime=1000 MILLISECONDS)
2019-03-13 18:08:00,427 WARN  datanode.DataNode (BPServiceActor.java:retrieveNamespaceInfo(235)) - Problem connecting to server: D-9033.kpit.com/10.10.167.157:8020
==> /var/log/hadoop/hdfs/gc.log-201903131804 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16337780k(5549728k free), swap 15624188k(15624188k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2019-03-13T18:04:21.675+0530: 0.586: [GC (Allocation Failure) 2019-03-13T18:04:21.675+0530: 0.587: [ParNew: 104960K->8519K(118016K), 0.0063297 secs] 104960K->8519K(1035520K), 0.0064241 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] 
2019-03-13T18:04:22.244+0530: 1.156: [GC (Allocation Failure) 2019-03-13T18:04:22.244+0530: 1.156: [ParNew: 113479K->10589K(118016K), 0.0206730 secs] 113479K->13287K(1035520K), 0.0207371 secs] [Times: user=0.06 sys=0.00, real=0.02 secs] 
2019-03-13T18:04:22.267+0530: 1.178: [GC (CMS Initial Mark) [1 CMS-initial-mark: 2697K(917504K)] 15328K(1035520K), 0.0021081 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 
2019-03-13T18:04:22.269+0530: 1.181: [CMS-concurrent-mark-start]
2019-03-13T18:04:22.271+0530: 1.183: [CMS-concurrent-mark: 0.003/0.003 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2019-03-13T18:04:22.271+0530: 1.183: [CMS-concurrent-preclean-start]
2019-03-13T18:04:22.273+0530: 1.185: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 
2019-03-13T18:04:22.273+0530: 1.185: [GC (CMS Final Remark) [YG occupancy: 12630 K (118016 K)]2019-03-13T18:04:22.273+0530: 1.185: [Rescan (parallel) , 0.0101631 secs]2019-03-13T18:04:22.283+0530: 1.195: [weak refs processing, 0.0000278 secs]2019-03-13T18:04:22.283+0530: 1.195: [class unloading, 0.0021329 secs]2019-03-13T18:04:22.285+0530: 1.197: [scrub symbol table, 0.0018142 secs]2019-03-13T18:04:22.287+0530: 1.199: [scrub string table, 0.0004759 secs][1 CMS-remark: 2697K(917504K)] 15328K(1035520K), 0.0150536 secs] [Times: user=0.04 sys=0.00, real=0.02 secs] 
2019-03-13T18:04:22.288+0530: 1.200: [CMS-concurrent-sweep-start]
2019-03-13T18:04:22.289+0530: 1.201: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.01 sys=0.01, real=0.00 secs] 
2019-03-13T18:04:22.289+0530: 1.201: [CMS-concurrent-reset-start]
2019-03-13T18:04:22.291+0530: 1.203: [CMS-concurrent-reset: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
Heap
 par new generation   total 118016K, used 85513K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  71% used [0x00000000c0000000, 0x00000000c492acf0, 0x00000000c6680000)
  from space 13056K,  81% used [0x00000000c6680000, 0x00000000c70d77e0, 0x00000000c7340000)
  to   space 13056K,   0% used [0x00000000c7340000, 0x00000000c7340000, 0x00000000c8000000)
 concurrent mark-sweep generation total 917504K, used 2685K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 25959K, capacity 26396K, committed 26820K, reserved 1073152K
  class space    used 3054K, capacity 3224K, committed 3296K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-D-9033.kpit.com.out.2 <==
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63705
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/gc.log-201903131808 <==
Java HotSpot(TM) 64-Bit Server VM (25.112-b15) for linux-amd64 JRE (1.8.0_112-b15), built on Sep 22 2016 21:10:53 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
Memory: 4k page, physical 16337780k(5498248k free), swap 15624188k(15624188k free)
CommandLine flags: -XX:CMSInitiatingOccupancyFraction=70 -XX:ErrorFile=/var/log/hadoop/hdfs/hs_err_pid%p.log -XX:InitialHeapSize=1073741824 -XX:MaxHeapSize=1073741824 -XX:MaxNewSize=134217728 -XX:MaxTenuringThreshold=6 -XX:NewSize=134217728 -XX:OldPLABSize=16 -XX:OnOutOfMemoryError="/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node" -XX:ParallelGCThreads=8 -XX:+PrintGC -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+UseCMSInitiatingOccupancyOnly -XX:+UseCompressedClassPointers -XX:+UseCompressedOops -XX:+UseConcMarkSweepGC -XX:+UseParNewGC 
2019-03-13T18:08:03.428+0530: 0.585: [GC (Allocation Failure) 2019-03-13T18:08:03.428+0530: 0.585: [ParNew: 104960K->8531K(118016K), 0.0072916 secs] 104960K->8531K(1035520K), 0.0073681 secs] [Times: user=0.02 sys=0.00, real=0.00 secs] 
2019-03-13T18:08:04.000+0530: 1.157: [GC (Allocation Failure) 2019-03-13T18:08:04.000+0530: 1.157: [ParNew: 113491K->10252K(118016K), 0.0362179 secs] 113491K->12950K(1035520K), 0.0362798 secs] [Times: user=0.12 sys=0.00, real=0.04 secs] 
2019-03-13T18:08:04.038+0530: 1.195: [GC (CMS Initial Mark) [1 CMS-initial-mark: 2697K(917504K)] 14990K(1035520K), 0.0014137 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 
2019-03-13T18:08:04.039+0530: 1.196: [CMS-concurrent-mark-start]
2019-03-13T18:08:04.041+0530: 1.198: [CMS-concurrent-mark: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2019-03-13T18:08:04.041+0530: 1.198: [CMS-concurrent-preclean-start]
2019-03-13T18:08:04.043+0530: 1.200: [CMS-concurrent-preclean: 0.001/0.001 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 
2019-03-13T18:08:04.043+0530: 1.200: [GC (CMS Final Remark) [YG occupancy: 12293 K (118016 K)]2019-03-13T18:08:04.043+0530: 1.200: [Rescan (parallel) , 0.0020385 secs]2019-03-13T18:08:04.045+0530: 1.202: [weak refs processing, 0.0001378 secs]2019-03-13T18:08:04.045+0530: 1.202: [class unloading, 0.0021708 secs]2019-03-13T18:08:04.047+0530: 1.204: [scrub symbol table, 0.0018312 secs]2019-03-13T18:08:04.049+0530: 1.206: [scrub string table, 0.0004769 secs][1 CMS-remark: 2697K(917504K)] 14990K(1035520K), 0.0070740 secs] [Times: user=0.01 sys=0.00, real=0.01 secs] 
2019-03-13T18:08:04.050+0530: 1.207: [CMS-concurrent-sweep-start]
2019-03-13T18:08:04.051+0530: 1.208: [CMS-concurrent-sweep: 0.001/0.001 secs] [Times: user=0.00 sys=0.00, real=0.00 secs] 
2019-03-13T18:08:04.051+0530: 1.208: [CMS-concurrent-reset-start]
2019-03-13T18:08:04.054+0530: 1.211: [CMS-concurrent-reset: 0.002/0.002 secs] [Times: user=0.01 sys=0.00, real=0.00 secs] 
Heap
 par new generation   total 118016K, used 85171K [0x00000000c0000000, 0x00000000c8000000, 0x00000000c8000000)
  eden space 104960K,  71% used [0x00000000c0000000, 0x00000000c4929988, 0x00000000c6680000)
  from space 13056K,  78% used [0x00000000c6680000, 0x00000000c70832b0, 0x00000000c7340000)
  to   space 13056K,   0% used [0x00000000c7340000, 0x00000000c7340000, 0x00000000c8000000)
 concurrent mark-sweep generation total 917504K, used 2685K [0x00000000c8000000, 0x0000000100000000, 0x0000000100000000)
 Metaspace       used 25960K, capacity 26396K, committed 26820K, reserved 1073152K
  class space    used 3054K, capacity 3224K, committed 3296K, reserved 1048576K
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-D-9033.kpit.com.out.1 <==
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63705
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-D-9033.kpit.com.log <==
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:388)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:227)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1090)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2019-03-13 18:08:04,298 INFO  handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@22295ec4{/,null,UNAVAILABLE}{/hdfs}
2019-03-13 18:08:04,300 INFO  server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@f316aeb{HTTP/1.1,[http/1.1]}{D-9033.kpit.com:50070}
2019-03-13 18:08:04,300 INFO  handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@27216cd{/static,file:///usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/static/,UNAVAILABLE}
2019-03-13 18:08:04,300 INFO  handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@3d9c13b5{/logs,file:///var/log/hadoop/hdfs/,UNAVAILABLE}
2019-03-13 18:08:04,301 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(210)) - Stopping NameNode metrics system...
2019-03-13 18:08:04,302 INFO  impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(141)) - timeline thread interrupted.
2019-03-13 18:08:04,302 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(216)) - NameNode metrics system stopped.
2019-03-13 18:08:04,303 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(607)) - NameNode metrics system shutdown complete.
2019-03-13 18:08:04,303 ERROR namenode.NameNode (NameNode.java:main(1715)) - Failed to start namenode.
java.io.FileNotFoundException: /data/hadoop/hdfs/namenode/current/VERSION (Permission denied)
    at java.io.RandomAccessFile.open0(Native Method)
    at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
    at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
    at org.apache.hadoop.hdfs.server.common.StorageInfo.readPropertiesFile(StorageInfo.java:250)
    at org.apache.hadoop.hdfs.server.namenode.NNStorage.readProperties(NNStorage.java:660)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:388)
    at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:227)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1090)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2019-03-13 18:08:04,304 INFO  util.ExitUtil (ExitUtil.java:terminate(210)) - Exiting with status 1: java.io.FileNotFoundException: /data/hadoop/hdfs/namenode/current/VERSION (Permission denied)
2019-03-13 18:08:04,304 INFO  namenode.NameNode (LogAdapter.java:info(51)) - SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at D-9033.kpit.com/10.10.167.157
************************************************************/
==> /var/log/hadoop/hdfs/SecurityAuth.audit <==
==> /var/log/hadoop/hdfs/hadoop-hdfs-namenode-D-9033.kpit.com.out <==
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63705
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
==> /var/log/hadoop/hdfs/hdfs-audit.log <==
==> /var/log/hadoop/hdfs/hadoop-hdfs-datanode-D-9033.kpit.com.out <==
core file size          (blocks, -c) unlimited
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 63705
max locked memory       (kbytes, -l) 64
max memory size         (kbytes, -m) unlimited
open files                      (-n) 128000
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 65536
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
Please help.