Support Questions


In Ambari, NameNode not started

Contributor

This is the error:

 

2019-08-21 16:13:10,828 - The NameNode is still in Safemode. Please be careful with commands that need Safemode OFF.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HDFS/package/scripts/namenode.py", line 408, in <module>
    NameNode().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HDFS/package/scripts/namenode.py", line 138, in start
    upgrade_suspended=params.upgrade_suspended, env=env)
  File "/usr/lib/ambari-agent/lib/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HDFS/package/scripts/hdfs_namenode.py", line 264, in namenode
    create_hdfs_directories(name_service)
  File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/HDFS/package/scripts/hdfs_namenode.py", line 336, in create_hdfs_directories
    nameservices=name_services
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 677, in action_create_on_execute
    self.action_delayed("create")
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 674, in action_delayed
    self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 373, in action_delayed
    self.action_delayed_for_nameservice(None, action_name, main_resource)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 395, in action_delayed_for_nameservice
    self._assert_valid()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 334, in _assert_valid
    self.target_status = self._get_file_status(target)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 497, in _get_file_status
    list_status = self.util.run_command(target, 'GETFILESTATUS', method='GET', ignore_status_codes=['404'], assertable_result=False)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 214, in run_command
    return self._run_command(*args, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/hdfs_resource.py", line 282, in _run_command
    _, out, err = get_user_call_output(cmd, user=self.run_user, logoutput=self.logoutput, quiet=False)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/get_user_call_output.py", line 62, in get_user_call_output
    raise ExecutionFailed(err_msg, code, files_output[0], files_output[1])
resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -sS -L -w '%{http_code}' -X GET -d '' -H 'Content-Length: 0' 'http://gaian-lap386.com:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs' 1>/tmp/tmpPLc7iw 2>/tmp/tmpCp00V8' returned 7. curl: (7) Failed to connect to gaian-lap386.com port 50070: Connection refused 000

 

After I ran the safemode leave command below, this is the result I got:

 

hdfs dfsadmin -safemode leave
safemode: Call From gaian-lap386.com/192.168.24.32 to gaian-lap386.com:8020 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
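
For context: "Connection refused" on port 8020 usually means the NameNode RPC server is not up at all, not merely in safemode. A quick sanity check, assuming the default ports, is:

# ps -ef | grep -i '[n]amenode'
# netstat -tnlpa | grep -E ':8020|:50070'

If neither command shows anything, the NameNode process has not started.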

1 ACCEPTED SOLUTION

Master Mentor

@Manoj690 

That is not the same output as mine. Please notice that in your case the file is owned by the "root" user, not the "hdfs" user.

So please try this:

# chown hdfs:hadoop /hadoop/hdfs/namenode/current/VERSION

(OR recursively)
# chown -R hdfs:hadoop /hadoop/hdfs/


7 REPLIES

Master Mentor

@Manoj690 

It looks like the NameNode is not starting successfully. Can you please check and share the NameNode logs?

 

It looks like you have also opened a similar thread here: https://community.cloudera.com/t5/Support-Questions/In-ambari-all-my-servies-are-down/m-p/268512/hig...

Please use the same steps there to collect the troubleshooting data.

For example:

Check whether the NameNode process is running:

# ps -ef | grep -i NameNode


Check whether the NameNode port 50070 is listening:

# netstat -tnlpa | grep 50070

 

If not, then please check and share the NameNode log. If possible, attach the full NameNode log.



Contributor

As I mentioned, the NameNode is in safemode.

 

ps -ef | grep -i NameNode
root 30876 5714 0 17:21 pts/3 00:00:00 grep --color=auto -i NameNode

 

netstat -tnlpa | grep 50070

 

And how do I check the NameNode log files?

Master Mentor

@Manoj690 

You can find NameNode logs here:

# less /var/log/hadoop/hdfs/hadoop-hdfs-namenode-gaian-lap386.com.log
# less /var/log/hadoop/hdfs/hadoop-hdfs-namenode-gaian-lap386.com.out
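
If the log is large, pulling out just the errors is often enough to start with (a sketch, using the same default log path):

# grep -iE 'ERROR|FATAL|Exception' /var/log/hadoop/hdfs/hadoop-hdfs-namenode-gaian-lap386.com.log | tail -50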

 

Similarly, you can find the DataNode logs at "/var/log/hadoop/hdfs/hadoop-hdfs-datanode-xxxxxxx.log".

Can you please attach those logs here, or share the errors from the mentioned files? (Attaching the files is preferred.)

Contributor

NameNode log file:

2019-08-21 15:48:53,472 INFO  namenode.NameNode (LogAdapter.java:info(51)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = gaian-lap386.com/192.168.24.32
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 3.1.1.3.1.0.0-78
STARTUP_MSG:   classpath = /usr/hdp/3.1.0.0-78/hadoop/conf:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-client-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-hdfs-plugin-shim-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-admin-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-xdr-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-json-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-net-3.6.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-logging-1.1.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/gson-2.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-lang3-3.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-client-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/log4j-1.2.17.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/protobuf-java-2.5.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-annotations-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/curator-framework-2.12.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-server-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/slf4j-api-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-math3-3.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-http-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/snappy-java-1.0.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-configuration2-2.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-server-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jul-to-slf4j-1.7.25.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-io-2.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/json-smart-2.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-mapper-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/zookeeper-3.4.6.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/nimbus-jose-jwt-4.41.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/accessors-smart-1.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/javax.servlet-api-3.1.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-core-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/ranger-plugin-classloader-1.2.0.3.1.0.0-78.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-jaxrs-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-core-asl-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jcip-annotations-1.0-1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-simplekdc-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsr311-api-1.1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/xz-1.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/avro-1.7.7.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/metrics-core-3.2.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/asm-5.0.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/re2j-1.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/netty-3.10.5.Final.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-webapp-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/guava-11.0.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-security-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/htrace-core4-4.1.0-incubating.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/httpcore-4.4.4.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-core-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/httpclient-4.5.2.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/commons-codec-1.11.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-common-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/woodstox-core-5.0.3.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-xc-1.9.13.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jetty-xml-9.3.24.v20180605.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-servlet-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jersey-server-1.19.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-core-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-pkix-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jackson-databind-2.9.5.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerb-util-1.0.1.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/jsr305-3.0.0.jar:/usr/hdp/3.1.0.0-78/hadoop/lib/kerby-asn1-1.0.1.jar:/usr/hdp/3.1.0.0-
STARTUP_MSG:   build = git@github.com:hortonworks/hadoop.git -r e4f82af51faec922b4804d0232a637422ec29e64; compiled by 'jenkins' on 2018-12-06T12:21Z
STARTUP_MSG:   java = 1.8.0_112
************************************************************/
2019-08-21 15:48:53,492 INFO  namenode.NameNode (LogAdapter.java:info(51)) - registered UNIX signal handlers for [TERM, HUP, INT]
2019-08-21 15:48:53,670 INFO  namenode.NameNode (NameNode.java:createNameNode(1583)) - createNameNode []
2019-08-21 15:48:54,016 INFO  impl.MetricsConfig (MetricsConfig.java:loadFirst(118)) - Loaded properties from hadoop-metrics2.properties
2019-08-21 15:48:54,967 INFO  timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(85)) - Initializing Timeline metrics sink.
2019-08-21 15:48:54,969 INFO  timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(105)) - Identified hostname = gaian-lap386.com, serviceName = namenode
2019-08-21 15:48:55,515 INFO  timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(133)) - No suitable collector found.
2019-08-21 15:48:55,522 INFO  timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(185)) - RPC port properties configured: {8020=client}
2019-08-21 15:48:55,551 INFO  impl.MetricsSinkAdapter (MetricsSinkAdapter.java:start(204)) - Sink timeline started
2019-08-21 15:48:55,796 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(374)) - Scheduled Metric snapshot period at 10 second(s).
2019-08-21 15:48:55,797 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:start(191)) - NameNode metrics system started
2019-08-21 15:48:55,949 INFO  namenode.NameNodeUtils (NameNodeUtils.java:getClientNamenodeAddress(79)) - fs.defaultFS is hdfs://gaian-lap386.com:8020
2019-08-21 15:48:55,949 INFO  namenode.NameNode (NameNode.java:<init>(928)) - Clients should use gaian-lap386.com:8020 to access this namenode/service.
2019-08-21 15:48:56,719 INFO  util.JvmPauseMonitor (JvmPauseMonitor.java:run(188)) - Starting JVM pause monitor
2019-08-21 15:48:56,768 INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1605)) - Starting Web-server for hdfs at: http://gaian-lap386.com:50070
2019-08-21 15:48:56,832 INFO  util.log (Log.java:initialized(192)) - Logging initialized @4311ms
2019-08-21 15:48:57,297 INFO  server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-08-21 15:48:57,351 INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(81)) - Http request log for http.requests.namenode is not defined
2019-08-21 15:48:57,405 INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(968)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2019-08-21 15:48:57,428 INFO  http.HttpServer2 (HttpServer2.java:addFilter(941)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context hdfs
2019-08-21 15:48:57,428 INFO  http.HttpServer2 (HttpServer2.java:addFilter(951)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context static
2019-08-21 15:48:57,428 INFO  http.HttpServer2 (HttpServer2.java:addFilter(951)) - Added filter authentication (class=org.apache.hadoop.security.authentication.server.AuthenticationFilter) to context logs
2019-08-21 15:48:57,428 INFO  security.HttpCrossOriginFilterInitializer (HttpCrossOriginFilterInitializer.java:initFilter(49)) - CORS filter not enabled. Please set hadoop.http.cross-origin.enabled to 'true' to enable it
2019-08-21 15:48:57,547 INFO  http.HttpServer2 (NameNodeHttpServer.java:initWebHdfs(100)) - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2019-08-21 15:48:57,547 INFO  http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(787)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
2019-08-21 15:48:57,588 INFO  http.HttpServer2 (HttpServer2.java:bindListener(1185)) - Jetty bound to port 50070
2019-08-21 15:48:57,589 INFO  server.Server (Server.java:doStart(351)) - jetty-9.3.24.v20180605, build timestamp: 2018-06-05T22:41:56+05:30, git hash: 84205aa28f11a4f31f2a3b86d1bba2cc8ab69827
2019-08-21 15:48:57,757 INFO  server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(240)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2019-08-21 15:48:57,769 INFO  handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@289710d9{/logs,file:///var/log/hadoop/hdfs/,AVAILABLE}
2019-08-21 15:48:57,770 INFO  handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.s.ServletContextHandler@3da30852{/static,file:///usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/static/,AVAILABLE}
2019-08-21 15:48:57,957 INFO  handler.ContextHandler (ContextHandler.java:doStart(781)) - Started o.e.j.w.WebAppContext@72c28d64{/,file:///usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/hdfs/,AVAILABLE}{/hdfs}
2019-08-21 15:48:57,974 INFO  server.AbstractConnector (AbstractConnector.java:doStart(278)) - Started ServerConnector@65045a87{HTTP/1.1,[http/1.1]}{gaian-lap386.com:50070}
2019-08-21 15:48:57,975 INFO  server.Server (Server.java:doStart(419)) - Started @5455ms
2019-08-21 15:48:58,788 WARN  common.Util (Util.java:stringAsURI(99)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-08-21 15:48:58,789 WARN  common.Util (Util.java:stringAsURI(99)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-08-21 15:48:58,790 WARN  namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(680)) - Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-08-21 15:48:58,790 WARN  namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(685)) - Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
2019-08-21 15:48:58,801 WARN  common.Util (Util.java:stringAsURI(99)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-08-21 15:48:58,801 WARN  common.Util (Util.java:stringAsURI(99)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2019-08-21 15:48:58,835 WARN  common.Storage (NNStorage.java:setRestoreFailedStorage(223)) - set restore failed storage to true
2019-08-21 15:48:58,919 INFO  namenode.FSEditLog (FSEditLog.java:newInstance(227)) - Edit logging is async:true
2019-08-21 15:48:58,964 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(749)) - KeyProvider: null
2019-08-21 15:48:58,971 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(752)) - Enabling async auditlog
2019-08-21 15:48:59,009 INFO  namenode.FSNamesystem (FSNamesystemLock.java:<init>(122)) - fsLock is fair: false
2019-08-21 15:48:59,019 INFO  namenode.FSNamesystem (FSNamesystemLock.java:<init>(138)) - Detailed lock hold time metrics enabled: false
2019-08-21 15:48:59,033 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(774)) - fsOwner             = hdfs (auth:SIMPLE)
2019-08-21 15:48:59,033 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(775)) - supergroup          = hdfs
2019-08-21 15:48:59,033 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(776)) - isPermissionEnabled = true
2019-08-21 15:48:59,034 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(787)) - HA Enabled: false
2019-08-21 15:48:59,124 INFO  blockmanagement.HeartbeatManager (HeartbeatManager.java:<init>(84)) - Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-interval
2019-08-21 15:48:59,129 INFO  common.Util (Util.java:isDiskStatsEnabled(395)) - dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
2019-08-21 15:48:59,160 INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(301)) - dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
2019-08-21 15:48:59,160 INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(309)) - dfs.namenode.datanode.registration.ip-hostname-check=true
2019-08-21 15:48:59,182 INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(79)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:01:00:00.000
2019-08-21 15:48:59,183 INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(85)) - The block deletion will start around 2019 Aug 21 16:48:59
2019-08-21 15:48:59,189 INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map BlocksMap
2019-08-21 15:48:59,189 INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2019-08-21 15:48:59,191 INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 2.0% max memory 1011.3 MB = 20.2 MB
2019-08-21 15:48:59,192 INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^21 = 2097152 entries
2019-08-21 15:48:59,233 INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(579)) - dfs.block.access.token.enable = true
2019-08-21 15:48:59,233 INFO  blockmanagement.BlockManager (BlockManager.java:createBlockTokenSecretManager(601)) - dfs.block.access.key.update.interval=600 min(s), dfs.block.access.token.lifetime=600 min(s), dfs.encrypt.data.transfer.algorithm=null
2019-08-21 15:48:59,433 INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(161)) - dfs.namenode.safemode.threshold-pct = 1.0
2019-08-21 15:48:59,438 INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(162)) - dfs.namenode.safemode.min.datanodes = 0
2019-08-21 15:48:59,438 INFO  blockmanagement.BlockManagerSafeMode (BlockManagerSafeMode.java:<init>(164)) - dfs.namenode.safemode.extension = 30000
2019-08-21 15:48:59,439 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(565)) - defaultReplication         = 3
2019-08-21 15:48:59,439 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(566)) - maxReplication             = 50
2019-08-21 15:48:59,439 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(567)) - minReplication             = 1
2019-08-21 15:48:59,439 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(568)) - maxReplicationStreams      = 2
2019-08-21 15:48:59,440 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(569)) - redundancyRecheckInterval  = 3000ms
2019-08-21 15:48:59,440 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(570)) - encryptDataTransfer        = false
2019-08-21 15:48:59,440 INFO  blockmanagement.BlockManager (BlockManager.java:<init>(571)) - maxNumBlocksToLog          = 1000
2019-08-21 15:48:59,612 INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map INodeMap
2019-08-21 15:48:59,612 INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2019-08-21 15:48:59,617 INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 1.0% max memory 1011.3 MB = 10.1 MB
2019-08-21 15:48:59,617 INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^20 = 1048576 entries
2019-08-21 15:48:59,618 INFO  namenode.FSDirectory (FSDirectory.java:<init>(287)) - ACLs enabled? true
2019-08-21 15:48:59,618 INFO  namenode.FSDirectory (FSDirectory.java:<init>(291)) - POSIX ACL inheritance enabled? true
2019-08-21 15:48:59,618 INFO  namenode.FSDirectory (FSDirectory.java:<init>(295)) - XAttrs enabled? true
2019-08-21 15:48:59,618 INFO  namenode.NameNode (FSDirectory.java:<init>(359)) - Caching file names occurring more than 10 times
2019-08-21 15:48:59,630 INFO  snapshot.SnapshotManager (SnapshotManager.java:<init>(124)) - Loaded config captureOpenFiles: false, skipCaptureAccessTimeOnlyChange: false, snapshotDiffAllowSnapRootDescendant: true, maxSnapshotLimit: 65536
2019-08-21 15:48:59,634 INFO  snapshot.SnapshotManager (DirectoryDiffListFactory.java:init(43)) - SkipList is disabled
2019-08-21 15:48:59,643 INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map cachedBlocks
2019-08-21 15:48:59,643 INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2019-08-21 15:48:59,645 INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.25% max memory 1011.3 MB = 2.5 MB
2019-08-21 15:48:59,645 INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^18 = 262144 entries
2019-08-21 15:48:59,659 INFO  metrics.TopMetrics (TopMetrics.java:logConf(76)) - NNTop conf: dfs.namenode.top.window.num.buckets = 10
2019-08-21 15:48:59,662 INFO  metrics.TopMetrics (TopMetrics.java:logConf(78)) - NNTop conf: dfs.namenode.top.num.users = 10
2019-08-21 15:48:59,662 INFO  metrics.TopMetrics (TopMetrics.java:logConf(80)) - NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
2019-08-21 15:48:59,670 INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(983)) - Retry cache on namenode is enabled
2019-08-21 15:48:59,671 INFO  namenode.FSNamesystem (FSNamesystem.java:initRetryCache(991)) - Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
2019-08-21 15:48:59,677 INFO  util.GSet (LightWeightGSet.java:computeCapacity(395)) - Computing capacity for map NameNodeRetryCache
2019-08-21 15:48:59,677 INFO  util.GSet (LightWeightGSet.java:computeCapacity(396)) - VM type       = 64-bit
2019-08-21 15:48:59,679 INFO  util.GSet (LightWeightGSet.java:computeCapacity(397)) - 0.029999999329447746% max memory 1011.3 MB = 310.7 KB
2019-08-21 15:48:59,680 INFO  util.GSet (LightWeightGSet.java:computeCapacity(402)) - capacity      = 2^15 = 32768 entries
2019-08-21 15:48:59,780 INFO  common.Storage (Storage.java:tryLock(905)) - Lock on /hadoop/hdfs/namenode/in_use.lock acquired by nodename 10841@gaian-lap386.com
2019-08-21 15:48:59,784 WARN  namenode.FSNamesystem (FSNamesystem.java:loadFromDisk(716)) - Encountered exception loading fsimage
java.io.FileNotFoundException: /hadoop/hdfs/namenode/current/VERSION (Permission denied)
        at java.io.RandomAccessFile.open0(Native Method)
        at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
        at org.apache.hadoop.hdfs.server.common.StorageInfo.readPropertiesFile(StorageInfo.java:250)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.readProperties(NNStorage.java:660)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:388)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:227)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1090)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2019-08-21 15:48:59,799 INFO  handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.w.WebAppContext@72c28d64{/,null,UNAVAILABLE}{/hdfs}
2019-08-21 15:48:59,940 INFO  server.AbstractConnector (AbstractConnector.java:doStop(318)) - Stopped ServerConnector@65045a87{HTTP/1.1,[http/1.1]}{gaian-lap386.com:50070}
2019-08-21 15:48:59,940 INFO  handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@3da30852{/static,file:///usr/hdp/3.1.0.0-78/hadoop-hdfs/webapps/static/,UNAVAILABLE}
2019-08-21 15:48:59,948 INFO  handler.ContextHandler (ContextHandler.java:doStop(910)) - Stopped o.e.j.s.ServletContextHandler@289710d9{/logs,file:///var/log/hadoop/hdfs/,UNAVAILABLE}
2019-08-21 15:49:00,068 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(210)) - Stopping NameNode metrics system...
2019-08-21 15:49:00,069 INFO  impl.MetricsSinkAdapter (MetricsSinkAdapter.java:publishMetricsFromQueue(141)) - timeline thread interrupted.
2019-08-21 15:49:00,071 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:stop(216)) - NameNode metrics system stopped.
2019-08-21 15:49:00,072 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:shutdown(607)) - NameNode metrics system shutdown complete.
2019-08-21 15:49:00,072 ERROR namenode.NameNode (NameNode.java:main(1715)) - Failed to start namenode.
java.io.FileNotFoundException: /hadoop/hdfs/namenode/current/VERSION (Permission denied)
        at java.io.RandomAccessFile.open0(Native Method)
        at java.io.RandomAccessFile.open(RandomAccessFile.java:316)
        at java.io.RandomAccessFile.<init>(RandomAccessFile.java:243)
        at org.apache.hadoop.hdfs.server.common.StorageInfo.readPropertiesFile(StorageInfo.java:250)
        at org.apache.hadoop.hdfs.server.namenode.NNStorage.readProperties(NNStorage.java:660)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:388)
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:227)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1090)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:714)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:632)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:694)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:937)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:910)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1643)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1710)
2019-08-21 15:49:00,083 INFO  util.ExitUtil (ExitUtil.java:terminate(210)) - Exiting with status 1: java.io.FileNotFoundException: /hadoop/hdfs/namenode/current/VERSION (Permission denied)
2019-08-21 15:49:00,095 INFO  timeline.HadoopTimelineMetricsSink (AbstractTimelineMetricsSink.java:getCurrentCollectorHost(291)) - No live collector to send metrics to. Metrics to be sent will be discarded. This message will be skipped for the next 20 times.
2019-08-21 15:49:00,118 INFO  namenode.NameNode (LogAdapter.java:info(51)) - SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at gaian-lap386.com/192.168.24.32
************************************************************/

Master Mentor

@Manoj690 
We can see the following error:

java.io.FileNotFoundException: /hadoop/hdfs/namenode/current/VERSION (Permission denied)

That is what is causing the NameNode startup failure.

Please check and fix the permissions on the mentioned file so that the owner of the NameNode process has read/write access to it.
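
You can also reproduce the failure directly as the service user (a quick check, assuming the NameNode runs as the hdfs user):

# sudo -u hdfs cat /hadoop/hdfs/namenode/current/VERSION

If that prints "Permission denied", the ownership or permissions on that path are wrong.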

 

Normally it is owned by the hdfs user.

Example:

# ls -l /hadoop/hdfs/namenode/current/VERSION
-rw-r--r--. 1 hdfs hadoop 206 Aug 13 23:10 /hadoop/hdfs/namenode/current/VERSION
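
To spot any other files under the NameNode directory with the wrong owner, something like this can help (a sketch using GNU find):

# find /hadoop/hdfs/namenode -not -user hdfs -ls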

 

Contributor

I got the same output:

 

ls -l /hadoop/hdfs/namenode/current/VERSION
-rw-r--r-- 1 root root 218 Aug 20 15:27 /hadoop/hdfs/namenode/current/VERSION

Master Mentor

@Manoj690 

That is not the same output as mine. Please notice that in your case the file is owned by the "root" user, not the "hdfs" user.

So please try this:

# chown hdfs:hadoop /hadoop/hdfs/namenode/current/VERSION

(OR recursively)
# chown -R hdfs:hadoop /hadoop/hdfs/
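
Once the ownership is corrected, verify it and then restart the NameNode. On an Ambari-managed cluster the restart is normally done from the Ambari UI; a manual sketch (Hadoop 3.x syntax) would be:

# ls -l /hadoop/hdfs/namenode/current/VERSION
(should now show hdfs hadoop as the owner and group)

# su - hdfs -c 'hdfs --daemon start namenode'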