Cannot start App Timeline Server...Need help please :'(

Contributor

Hi, I am trying to start the YARN service in Ambari, but it fails with an error stating that resources are low on the NameNode and that I need to add resources before turning off safe mode. Please find the details of stderr and stdout below. Thanks.

stderr: /var/lib/ambari-agent/data/errors-3271.txt

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 94, in <module>
    ApplicationTimelineServer().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 44, in start
    self.configure(env) # FOR SECURITY
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 119, in locking_configure
    original_configure(obj, *args, **kw)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/application_timeline_server.py", line 55, in configure
    yarn(name='apptimelineserver')
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/YARN/2.1.0.2.0/package/scripts/yarn.py", line 356, in yarn
    mode=0755
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 604, in action_create_on_execute
    self.action_delayed("create")
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 601, in action_delayed
    self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 337, in action_delayed
    self._set_mode(self.target_status)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 508, in _set_mode
    self.util.run_command(self.main_resource.resource.target, 'SETPERMISSION', method='PUT', permission=self.mode, assertable_result=False)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 177, in run_command
    return self._run_command(*args, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 248, in _run_command
    raise WebHDFSCallException(err_msg, result_dict)
resource_management.libraries.providers.hdfs_resource.WebHDFSCallException: Execution of 'curl -sS -L -w '%{http_code}' -X PUT 'http://slot2:50070/webhdfs/v1/ats/done?op=SETPERMISSION&user.name=hdfs&permission=755'' returned status_code=403. 
{
  "RemoteException": {
    "exception": "SafeModeException", 
    "javaClassName": "org.apache.hadoop.hdfs.server.namenode.SafeModeException", 
    "message": "Cannot set permission for /ats/done. Name node is in safe mode.\nResources are low on NN. Please add or free up more resources then turn off safe mode manually. NOTE:  If you turn off safe mode before adding resources, the NN will immediately return to safe mode. Use \"hdfs dfsadmin -safemode leave\" to turn safe mode off."
  }
}

stdout: /var/lib/ambari-agent/data/output-3271.txt

2017-11-26 21:02:30,897 - Stack Feature Version Info: Cluster Stack=2.5, Cluster Current Version=None, Command Stack=None, Command Version=2.5.3.0-37 -> 2.5.3.0-37
2017-11-26 21:02:30,919 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-26 21:02:31,118 - Stack Feature Version Info: Cluster Stack=2.5, Cluster Current Version=None, Command Stack=None, Command Version=2.5.3.0-37 -> 2.5.3.0-37
2017-11-26 21:02:31,127 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-11-26 21:02:31,128 - Group['metron'] {}
2017-11-26 21:02:31,129 - Group['livy'] {}
2017-11-26 21:02:31,129 - Group['elasticsearch'] {}
2017-11-26 21:02:31,129 - Group['spark'] {}
2017-11-26 21:02:31,130 - Group['zeppelin'] {}
2017-11-26 21:02:31,130 - Group['hadoop'] {}
2017-11-26 21:02:31,130 - Group['kibana'] {}
2017-11-26 21:02:31,130 - Group['users'] {}
2017-11-26 21:02:31,131 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,132 - call['/var/lib/ambari-agent/tmp/changeUid.sh hive'] {}
2017-11-26 21:02:31,145 - call returned (0, '1001')
2017-11-26 21:02:31,145 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1001}
2017-11-26 21:02:31,147 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,149 - call['/var/lib/ambari-agent/tmp/changeUid.sh storm'] {}
2017-11-26 21:02:31,162 - call returned (0, '1002')
2017-11-26 21:02:31,163 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1002}
2017-11-26 21:02:31,164 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,166 - call['/var/lib/ambari-agent/tmp/changeUid.sh zookeeper'] {}
2017-11-26 21:02:31,178 - call returned (0, '1003')
2017-11-26 21:02:31,178 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1003}
2017-11-26 21:02:31,180 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,182 - call['/var/lib/ambari-agent/tmp/changeUid.sh ams'] {}
2017-11-26 21:02:31,194 - call returned (0, '1004')
2017-11-26 21:02:31,194 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1004}
2017-11-26 21:02:31,196 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,198 - call['/var/lib/ambari-agent/tmp/changeUid.sh tez'] {}
2017-11-26 21:02:31,209 - call returned (0, '1005')
2017-11-26 21:02:31,210 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': 1005}
2017-11-26 21:02:31,212 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,214 - call['/var/lib/ambari-agent/tmp/changeUid.sh zeppelin'] {}
2017-11-26 21:02:31,225 - call returned (0, '1007')
2017-11-26 21:02:31,226 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop'], 'uid': 1007}
2017-11-26 21:02:31,228 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,230 - call['/var/lib/ambari-agent/tmp/changeUid.sh metron'] {}
2017-11-26 21:02:31,241 - call returned (0, '1008')
2017-11-26 21:02:31,242 - User['metron'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1008}
2017-11-26 21:02:31,244 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,245 - call['/var/lib/ambari-agent/tmp/changeUid.sh livy'] {}
2017-11-26 21:02:31,256 - call returned (0, '1009')
2017-11-26 21:02:31,257 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1009}
2017-11-26 21:02:31,259 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,260 - call['/var/lib/ambari-agent/tmp/changeUid.sh elasticsearch'] {}
2017-11-26 21:02:31,271 - call returned (0, '1010')
2017-11-26 21:02:31,272 - User['elasticsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1010}
2017-11-26 21:02:31,274 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,275 - call['/var/lib/ambari-agent/tmp/changeUid.sh spark'] {}
2017-11-26 21:02:31,286 - call returned (0, '1019')
2017-11-26 21:02:31,287 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1019}
2017-11-26 21:02:31,288 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2017-11-26 21:02:31,290 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,291 - call['/var/lib/ambari-agent/tmp/changeUid.sh flume'] {}
2017-11-26 21:02:31,302 - call returned (0, '1011')
2017-11-26 21:02:31,303 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1011}
2017-11-26 21:02:31,304 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,306 - call['/var/lib/ambari-agent/tmp/changeUid.sh kafka'] {}
2017-11-26 21:02:31,317 - call returned (0, '1012')
2017-11-26 21:02:31,317 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1012}
2017-11-26 21:02:31,319 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,320 - call['/var/lib/ambari-agent/tmp/changeUid.sh hdfs'] {}
2017-11-26 21:02:31,331 - call returned (0, '1013')
2017-11-26 21:02:31,332 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1013}
2017-11-26 21:02:31,334 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,336 - call['/var/lib/ambari-agent/tmp/changeUid.sh yarn'] {}
2017-11-26 21:02:31,347 - call returned (0, '1014')
2017-11-26 21:02:31,347 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1014}
2017-11-26 21:02:31,349 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,350 - call['/var/lib/ambari-agent/tmp/changeUid.sh kibana'] {}
2017-11-26 21:02:31,361 - call returned (0, '1016')
2017-11-26 21:02:31,362 - User['kibana'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1016}
2017-11-26 21:02:31,364 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,366 - call['/var/lib/ambari-agent/tmp/changeUid.sh mapred'] {}
2017-11-26 21:02:31,377 - call returned (0, '1015')
2017-11-26 21:02:31,378 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1015}
2017-11-26 21:02:31,379 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,380 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2017-11-26 21:02:31,391 - call returned (0, '1017')
2017-11-26 21:02:31,392 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1017}
2017-11-26 21:02:31,394 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,395 - call['/var/lib/ambari-agent/tmp/changeUid.sh hcat'] {}
2017-11-26 21:02:31,406 - call returned (0, '1018')
2017-11-26 21:02:31,407 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': 1018}
2017-11-26 21:02:31,408 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,410 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-11-26 21:02:31,416 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2017-11-26 21:02:31,417 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-11-26 21:02:31,418 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,420 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-11-26 21:02:31,421 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2017-11-26 21:02:31,432 - call returned (0, '1017')
2017-11-26 21:02:31,433 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-11-26 21:02:31,439 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1017'] due to not_if
2017-11-26 21:02:31,440 - Group['hdfs'] {}
2017-11-26 21:02:31,440 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-11-26 21:02:31,441 - FS Type: 
2017-11-26 21:02:31,441 - Directory['/etc/hadoop'] {'mode': 0755}
2017-11-26 21:02:31,463 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-11-26 21:02:31,464 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-11-26 21:02:31,485 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-11-26 21:02:31,500 - Skipping Execute[('setenforce', '0')] due to only_if
2017-11-26 21:02:31,501 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-11-26 21:02:31,505 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-11-26 21:02:31,505 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-11-26 21:02:31,512 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2017-11-26 21:02:31,515 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2017-11-26 21:02:31,525 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-11-26 21:02:31,540 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2017-11-26 21:02:31,541 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-11-26 21:02:31,543 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-11-26 21:02:31,549 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2017-11-26 21:02:31,555 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-11-26 21:02:31,819 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-26 21:02:31,820 - Stack Feature Version Info: Cluster Stack=2.5, Cluster Current Version=None, Command Stack=None, Command Version=2.5.3.0-37 -> 2.5.3.0-37
2017-11-26 21:02:31,820 - call['ambari-python-wrap /usr/bin/hdp-select status hadoop-yarn-resourcemanager'] {'timeout': 20}
2017-11-26 21:02:31,855 - call returned (0, 'hadoop-yarn-resourcemanager - 2.5.3.0-37')
2017-11-26 21:02:31,887 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-11-26 21:02:31,904 - Directory['/var/log/hadoop-yarn/nodemanager/recovery-state'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-11-26 21:02:31,905 - Directory['/var/run/hadoop-yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-26 21:02:31,906 - Directory['/var/run/hadoop-yarn/yarn'] {'owner': 'yarn', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-26 21:02:31,906 - Directory['/var/log/hadoop-yarn/yarn'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-11-26 21:02:31,907 - Directory['/var/run/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-26 21:02:31,907 - Directory['/var/run/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-26 21:02:31,908 - Directory['/var/log/hadoop-mapreduce'] {'owner': 'mapred', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-11-26 21:02:31,908 - Directory['/var/log/hadoop-mapreduce/mapred'] {'owner': 'mapred', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-11-26 21:02:31,909 - Directory['/var/log/hadoop-yarn'] {'owner': 'yarn', 'group': 'hadoop', 'ignore_failures': True, 'create_parents': True, 'cd_access': 'a'}
2017-11-26 21:02:31,909 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-11-26 21:02:31,917 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2017-11-26 21:02:31,917 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-26 21:02:31,937 - XmlConfig['hdfs-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-11-26 21:02:31,944 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2017-11-26 21:02:31,944 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-26 21:02:31,986 - XmlConfig['mapred-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2017-11-26 21:02:31,993 - Generating config: /usr/hdp/current/hadoop-client/conf/mapred-site.xml
2017-11-26 21:02:31,993 - File['/usr/hdp/current/hadoop-client/conf/mapred-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-26 21:02:32,021 - XmlConfig['yarn-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2017-11-26 21:02:32,027 - Generating config: /usr/hdp/current/hadoop-client/conf/yarn-site.xml
2017-11-26 21:02:32,027 - File['/usr/hdp/current/hadoop-client/conf/yarn-site.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-26 21:02:32,090 - XmlConfig['capacity-scheduler.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {}, 'owner': 'yarn', 'configurations': ...}
2017-11-26 21:02:32,096 - Generating config: /usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml
2017-11-26 21:02:32,096 - File['/usr/hdp/current/hadoop-client/conf/capacity-scheduler.xml'] {'owner': 'yarn', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-11-26 21:02:32,105 - Directory['/hadoop/yarn/timeline'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-11-26 21:02:32,105 - Directory['/hadoop/yarn/timeline'] {'owner': 'yarn', 'group': 'hadoop', 'create_parents': True, 'cd_access': 'a'}
2017-11-26 21:02:32,106 - HdfsResource['/ats/done'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://slot2:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': 'kinit', 'principal_name': [EMPTY], 'user': 'hdfs', 'change_permissions_for_parents': True, 'owner': 'yarn', 'group': 'hadoop', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0755}
2017-11-26 21:02:32,108 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://slot2:50070/webhdfs/v1/ats/done?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpt6S1E0 2>/tmp/tmpGLmVBg''] {'logoutput': None, 'quiet': False}
2017-11-26 21:02:32,408 - call returned (0, '')
2017-11-26 21:02:32,411 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X PUT '"'"'http://slot2:50070/webhdfs/v1/ats/done?op=SETPERMISSION&user.name=hdfs&permission=755'"'"' 1>/tmp/tmp7jV7NM 2>/tmp/tmp4y9X2E''] {'logoutput': None, 'quiet': False}
2017-11-26 21:02:32,664 - call returned (0, '')

Command failed after 1 tries
1 ACCEPTED SOLUTION

Master Mentor

@Ashikin

Based on the error message, it looks like the component failed to start because the NameNode is in safe mode.

Please check whether you can take the NameNode out of safe mode as follows, and then try again.

Get the SafeMode state:

# su - hdfs
# hdfs dfsadmin -safemode get

Leave SafeMode:

# su - hdfs
# hdfs dfsadmin -safemode leave 
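
One caution, based on the error text above: the NameNode reports that resources are low, so it will go straight back into safe mode after "leave" unless disk space is freed first. A quick check (a sketch; /hadoop/hdfs/namenode is taken from the NameNode log later in this thread, so substitute your own dfs.namenode.name.dir, and note that the NameNode requires at least dfs.namenode.resource.du.reserved bytes free on each storage volume, 100 MB by default):

# df -h /hadoop/hdfs/namenode
# hdfs getconf -confKey dfs.namenode.resource.du.reserved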


It is also worth looking at the NameNode log to find out why it is in safe mode. Is there any repetitive error/warning message in the NameNode logs? (If yes, please share the complete log.)
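
If you want to grep the log directly for the reason, something like the following should surface the relevant lines (a sketch assuming the default HDP log location, which is confirmed further down in this thread):

# grep -i "safe mode" /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log | tail -n 20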



REPLIES


Contributor

Hi Jay, I ran the leave-safe-mode command, but it is still in safe mode. Where can I access the NameNode logs? Are they in /var/log/hadoop/hdfs?

Master Mentor

@Ashikin

Yes, the NameNode logs can be found in a file with a name like the following (where xxxxxxx is usually the NameNode hostname):

/var/log/hadoop/hdfs/hadoop-hdfs-namenode-xxxxxxx.log
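
If you are not sure which file is the current one, listing the directory newest-first should make it obvious:

# ls -lt /var/log/hadoop/hdfs/ | head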


It is better to take a look at both the Active and Standby NameNode logs.

You should also try starting the Application Timeline Server from the command line once to see if it comes up cleanly, as described in:

https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.2/bk_reference/content/starting_hdp_services....

Execute this command on the timeline server:

su -l yarn -c "/usr/hdp/current/hadoop-yarn-timelineserver/sbin/yarn-daemon.sh start timelineserver"
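
If that command also fails, the Application Timeline Server's own log should say why. A sketch of where to look, assuming the YARN log directory shown in the Ambari output above (/var/log/hadoop-yarn/yarn) and the usual yarn-daemon.sh log naming:

tail -n 100 /var/log/hadoop-yarn/yarn/yarn-yarn-timelineserver-*.log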




Contributor

Thank you Jay. It worked for me. 🙂

Contributor

Hi Jay, here is the log:

2017-11-13 03:52:28,337 INFO  namenode.NameNode (LogAdapter.java:info(47)) - STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   user = hdfs
STARTUP_MSG:   host = hostname
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 2.7.3.2.5.3.0-37
STARTUP_MSG:   classpath = /usr/hdp/current/hadoop-client/conf:/usr/hdp/2.5.3.0-37/hadoop/lib/ojdbc6.jar:/usr/hdp/2.5.3.0-37/hadoop/lib/jackson-annotations-2.2.3.jar:/usr/hdp/2.5.3.0-37/hadoop/lib/ranger-hdfs-plugin-shim-0.6.0.2.5.3.0-3$
STARTUP_MSG:   build = git@github.com:hortonworks/hadoop.git -r 9828acfdec41a121f0121f556b09e2d112259e92; compiled by 'jenkins' on 2016-11-29T18:06Z
STARTUP_MSG:   java = 1.8.0_112
************************************************************/
2017-11-13 03:52:28,351 INFO  namenode.NameNode (LogAdapter.java:info(47)) - registered UNIX signal handlers for [TERM, HUP, INT]
2017-11-13 03:52:28,356 INFO  namenode.NameNode (NameNode.java:createNameNode(1600)) - createNameNode []
2017-11-13 03:52:28,567 INFO  impl.MetricsConfig (MetricsConfig.java:loadFirst(112)) - loaded properties from hadoop-metrics2.properties
2017-11-13 03:52:28,708 INFO  timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(82)) - Initializing Timeline metrics sink.
2017-11-13 03:52:28,709 INFO  timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(102)) - Identified hostname = slot2, serviceName = namenode
2017-11-13 03:52:28,813 INFO  availability.MetricCollectorHAHelper (MetricCollectorHAHelper.java:findLiveCollectorHostsFromZNode(79)) - /ambari-metrics-cluster znode does not exist. Skipping requesting live instances from zookeeper
2017-11-13 03:52:28,817 INFO  timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(128)) - No suitable collector found.
2017-11-13 03:52:28,823 INFO  timeline.HadoopTimelineMetricsSink (HadoopTimelineMetricsSink.java:init(180)) - RPC port properties configured: {8020=client}
2017-11-13 03:52:28,833 INFO  impl.MetricsSinkAdapter (MetricsSinkAdapter.java:start(206)) - Sink timeline started
2017-11-13 03:52:28,903 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:startTimer(376)) - Scheduled snapshot period at 10 second(s).
2017-11-13 03:52:28,903 INFO  impl.MetricsSystemImpl (MetricsSystemImpl.java:start(192)) - NameNode metrics system started
2017-11-13 03:52:28,908 INFO  namenode.NameNode (NameNode.java:setClientNamenodeAddress(450)) - fs.defaultFS is hdfs://slot2:8020
2017-11-13 03:52:28,908 INFO  namenode.NameNode (NameNode.java:setClientNamenodeAddress(470)) - Clients are to use slot2:8020 to access this namenode/service.
2017-11-13 03:52:29,025 INFO  util.JvmPauseMonitor (JvmPauseMonitor.java:run(179)) - Starting JVM pause monitor
2017-11-13 03:52:29,032 INFO  hdfs.DFSUtil (DFSUtil.java:httpServerTemplateForNNAndJN(1780)) - Starting Web-server for hdfs at: http://slot2:50070
2017-11-13 03:52:29,072 INFO  mortbay.log (Slf4jLog.java:info(67)) - Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2017-11-13 03:52:29,078 INFO  server.AuthenticationFilter (AuthenticationFilter.java:constructSecretProvider(293)) - Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2017-11-13 03:52:29,082 INFO  http.HttpRequestLog (HttpRequestLog.java:getRequestLog(80)) - Http request log for http.requests.namenode is not defined
2017-11-13 03:52:29,086 INFO  http.HttpServer2 (HttpServer2.java:addGlobalFilter(754)) - Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2017-11-13 03:52:29,088 INFO  http.HttpServer2 (HttpServer2.java:addFilter(729)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
2017-11-13 03:52:29,088 INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
2017-11-13 03:52:29,088 INFO  http.HttpServer2 (HttpServer2.java:addFilter(737)) - Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
2017-11-13 03:52:29,089 INFO  security.HttpCrossOriginFilterInitializer (HttpCrossOriginFilterInitializer.java:initFilter(49)) - CORS filter not enabled. Please set hadoop.http.cross-origin.enabled to 'true' to enable it
2017-11-13 03:52:29,107 INFO  http.HttpServer2 (NameNodeHttpServer.java:initWebHdfs(93)) - Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
2017-11-13 03:52:29,108 INFO  http.HttpServer2 (HttpServer2.java:addJerseyResourcePackage(653)) - addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=$
2017-11-13 03:52:29,117 INFO  http.HttpServer2 (HttpServer2.java:openListeners(959)) - Jetty bound to port 50070
2017-11-13 03:52:29,117 INFO  mortbay.log (Slf4jLog.java:info(67)) - jetty-6.1.26.hwx
2017-11-13 03:52:29,224 INFO  mortbay.log (Slf4jLog.java:info(67)) - Started HttpServer2$SelectChannelConnectorWithSafeStartup@slot2:50070
2017-11-13 03:52:29,265 WARN  common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-11-13 03:52:29,265 WARN  common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-11-13 03:52:29,266 WARN  namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(656)) - Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
2017-11-13 03:52:29,266 WARN  namenode.FSNamesystem (FSNamesystem.java:checkConfiguration(661)) - Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage direc$
2017-11-13 03:52:29,269 WARN  common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-11-13 03:52:29,270 WARN  common.Util (Util.java:stringAsURI(56)) - Path /hadoop/hdfs/namenode should be specified as a URI in configuration files. Please update hdfs configuration.
2017-11-13 03:52:29,274 WARN  common.Storage (NNStorage.java:setRestoreFailedStorage(210)) - set restore failed storage to true
2017-11-13 03:52:29,291 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(725)) - No KeyProvider found.
2017-11-13 03:52:29,291 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(731)) - Enabling async auditlog
2017-11-13 03:52:29,292 INFO  namenode.FSNamesystem (FSNamesystem.java:<init>(735)) - fsLock is fair:false
2017-11-13 03:52:29,313 INFO  blockmanagement.HeartbeatManager (HeartbeatManager.java:<init>(90)) - Setting heartbeat recheck interval to 30000 since dfs.namenode.stale.datanode.interval is less than dfs.namenode.heartbeat.recheck-inter$
2017-11-13 03:52:29,321 INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(242)) - dfs.block.invalidate.limit=1000
2017-11-13 03:52:29,321 INFO  blockmanagement.DatanodeManager (DatanodeManager.java:<init>(248)) - dfs.namenode.datanode.registration.ip-hostname-check=true
2017-11-13 03:52:29,323 INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(71)) - dfs.namenode.startup.delay.block.deletion.sec is set to 000:01:00:00.000
2017-11-13 03:52:29,323 INFO  blockmanagement.BlockManager (InvalidateBlocks.java:printBlockDeletionTime(76)) - The block deletion will start around 2017 Nov 13 04:52:29
2017-11-13 03:52:29,324 INFO  util.GSet (LightWeightGSet.java:computeCapacity(354)) - Computing capacity for map BlocksMap
2017-11-13 03:52:29,324 INFO  util.GSet (LightWeightGSet.java:computeCapacity(355)) - VM type       = 64-bit
2017-11-13 03:52:29,326 INFO  util.GSet (LightWeightGSet.java:computeCapacity(356)) - 2.0% max memory 1011.3 MB = 20.2 MB
2017-11-13 03:52:29,326 INFO  util.GSet (LightWeightGSet.java:computeCapacity(361)) - capacity      = 2^21 = 2097152 entries