
HDP services installed on Hadoop but not able to start them.


Hi all,

I have installed all my required Hadoop services using Ambari and they are visible in the Ambari UI, but I am not able to start any of them. Below is the error I am getting while starting one of the services, the Hive server.

The error seems to be related to a failed curl execution and an "inappropriate ioctl" message. I am not able to identify why this error occurs or what the resolution should be.

I would be glad if anyone could help me resolve this error.

------------------------------*****-----------------------------------------------

 File "/usr/lib/python2.6/site-packages/resource_management/libraries/functions/get_user_call_output.py", line 61, in get_user_call_output
    raise ExecutionFailed(err_msg, code, files_output[0], files_output[1])
resource_management.core.exceptions.ExecutionFailed: Execution of 'curl -sS -L -w '%{http_code}' -X GET 'http://hcebdrds.hansacequity.com:50070/webhdfs/v1/user/hcat?op=GETFILESTATUS&user.name=hdfs' 1>/tmp/tmpAuoKYI 2>/tmp/tmpscvKVh' returned 7. curl: (7) Failed connect to hcebdrds.hansacequity.com:50070; Connection refused
000

'hdfs://hcebdrds.hansacequity.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': 'missing_principal', 'user': 'hdfs', 'owner': 'hcat', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/apps/falcon', u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0755}

2017-08-10 10:41:56,081 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://hcebdrds.hansacequity.com:50070/webhdfs/v1/user/hcat?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpAuoKYI 2>/tmp/tmpscvKVh''] {'logoutput': None, 'quiet': False}
2017-08-10 10:41:56,179 - call returned (7, 'stty: standard input: Inappropriate ioctl for device\n/home/hdfs/.bash_profile: line 20: [: too many arguments\nstdin: is not a tty')

Command failed after 1 tries
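
For anyone trying to reproduce this: the failing WebHDFS call can be run by hand from the Ambari server host; -w '%{http_code}' appends the HTTP status code, and curl exit code 7 means the TCP connection itself was refused (a minimal sketch using the URL taken verbatim from the log above):

# curl -sS -L -w '%{http_code}' -X GET 'http://hcebdrds.hansacequity.com:50070/webhdfs/v1/user/hcat?op=GETFILESTATUS&user.name=hdfs'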

@Jay SenSharma @Benjamin Leonhardi @Josh Elser


Re: HDP services installed on Hadoop but not able to start them.

Super Mentor

@Rajat Inderiya

The error we see here is the following:

hcebdrds.hansacequity.com:50070; Connection refused


So, are you sure that the hostname is being resolved properly from the Ambari server?
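
For example, a quick way to verify resolution from the Ambari server host (a sketch; substitute your NameNode host name if it differs):

# getent hosts hcebdrds.hansacequity.com
(OR)
# nslookup hcebdrds.hansacequity.com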

Also, there should be no firewall restriction when Ambari tries to access NameNode port 50070. Can you please confirm that you are able to reach the port from the Ambari server host?

# telnet  hcebdrds.hansacequity.com 50070
(OR)
# nc  -v  hcebdrds.hansacequity.com 50070


If there is an issue with port access, then Ambari will not be able to reach the NameNode's JMX endpoint even though the agent starts the service.
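
For example, once the port is reachable, the NameNode's JMX endpoint can be queried directly over the same HTTP port (a sketch):

# curl -sS 'http://hcebdrds.hansacequity.com:50070/jmx' | head -20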

- So please log in to the NameNode host and check whether the agent has actually started the NameNode and whether port 50070 is open:

# netstat -tnlpa | grep 50070

- Also, do you see any errors in the NameNode log? If yes, can you please share them?
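
For example (a sketch; /var/log/hadoop/hdfs is the HDP default log directory, but it may differ on your cluster):

# tail -n 100 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log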



Re: HDP services installed on Hadoop but not able to start them.

@Jay SenSharma Thanks for the reply.

Below are my responses to the points you mentioned above.

1) Yes, I am able to resolve the fully qualified domain name from the Ambari server host.

# hostname -f

hcebdrdp.hansacequity.com

2) There is no firewall restriction; I have disabled iptables.

3) While trying to reach the port from the Ambari server, I am getting the error below:

# nc -v hcebdrds.hansacequity.com 50070

Ncat: Version 6.40 ( http://nmap.org/ncat )

Ncat: Connection refused.

4) netstat -tnlpa | grep 50070

This command executes on the NameNode host without producing any output.
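
Since netstat shows nothing on 50070, the NameNode is apparently not listening; whether the process is running at all can be checked with, for example (the bracket keeps grep from matching its own command line):

# ps -ef | grep -i [n]amenode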

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 367, in <module>
    NameNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 100, in start
    upgrade_suspended=params.upgrade_suspended, env=env)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 226, in namenode
    create_hdfs_directories()
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 293, in create_hdfs_directories
    mode=0777,
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 555, in action_create_on_execute
    self.action_delayed("create")
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 552, in action_delayed
    self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 279, in action_delayed
    self._assert_valid()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 238, in _assert_valid
    self.target_status = self._get_file_status(target)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 381, in _get_file_status
    list_status = self.util.run_command(target, 'GETFILESTATUS', method='GET', ignore_status_codes=['404'], assertable_result=False)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 199, in run_command
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X GET 'http://hcebdrds.hansacequity.com:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs'' returned status_code=.

stdout: /var/lib/ambari-agent/data/output-296.txt

2017-08-10 17:52:04,280 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-08-10 17:52:04,665 - Stack Feature Version Info: stack_version=2.6, version=2.6.1.0-129, current_cluster_version=2.6.1.0-129 -> 2.6.1.0-129
2017-08-10 17:52:04,680 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-08-10 17:52:04,682 - Group['livy'] {}
2017-08-10 17:52:04,684 - Group['spark'] {}
2017-08-10 17:52:04,684 - Group['zeppelin'] {}
2017-08-10 17:52:04,685 - Group['hadoop'] {}
2017-08-10 17:52:04,685 - Group['users'] {}
2017-08-10 17:52:04,685 - Group['knox'] {}
2017-08-10 17:52:04,686 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-08-10 17:52:04,687 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-08-10 17:52:04,689 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-08-10 17:52:04,690 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-08-10 17:52:04,691 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-08-10 17:52:04,692 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-08-10 17:52:04,693 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-08-10 17:52:04,694 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-08-10 17:52:04,696 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-08-10 17:52:04,697 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'zeppelin', u'hadoop']}
2017-08-10 17:52:04,698 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-08-10 17:52:04,699 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-08-10 17:52:04,700 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-08-10 17:52:04,702 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-08-10 17:52:04,703 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-08-10 17:52:04,704 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-08-10 17:52:04,705 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-08-10 17:52:04,706 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-08-10 17:52:04,707 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-08-10 17:52:04,709 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-08-10 17:52:04,710 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-08-10 17:52:04,711 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-08-10 17:52:04,712 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-08-10 17:52:04,713 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-08-10 17:52:04,716 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-08-10 17:52:04,723 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-08-10 17:52:04,724 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-08-10 17:52:04,726 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-08-10 17:52:04,727 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-08-10 17:52:04,734 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-08-10 17:52:04,735 - Group['hdfs'] {}
2017-08-10 17:52:04,736 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-08-10 17:52:04,737 - FS Type: 
2017-08-10 17:52:04,737 - Directory['/etc/hadoop'] {'mode': 0755}
2017-08-10 17:52:04,761 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-08-10 17:52:04,762 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-08-10 17:52:04,782 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-08-10 17:52:04,797 - Skipping Execute[('setenforce', '0')] due to only_if
2017-08-10 17:52:04,798 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-08-10 17:52:04,804 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-08-10 17:52:04,805 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-08-10 17:52:04,812 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2017-08-10 17:52:04,814 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2017-08-10 17:52:04,829 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-08-10 17:52:04,853 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-08-10 17:52:04,855 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-08-10 17:52:04,857 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-08-10 17:52:04,868 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2017-08-10 17:52:04,874 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-08-10 17:52:05,341 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-08-10 17:52:05,343 - Stack Feature Version Info: stack_version=2.6, version=2.6.1.0-129, current_cluster_version=2.6.1.0-129 -> 2.6.1.0-129
2017-08-10 17:52:05,403 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-08-10 17:52:05,442 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1}
2017-08-10 17:52:05,496 - checked_call returned (0, '2.6.1.0-129', '')
2017-08-10 17:52:05,519 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2017-08-10 17:52:05,535 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2017-08-10 17:52:05,538 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-08-10 17:52:05,565 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2017-08-10 17:52:05,565 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-08-10 17:52:05,590 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-08-10 17:52:05,613 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2017-08-10 17:52:05,614 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-08-10 17:52:05,629 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-08-10 17:52:05,632 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2017-08-10 17:52:05,648 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2017-08-10 17:52:05,648 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-08-10 17:52:05,655 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-08-10 17:52:05,665 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2017-08-10 17:52:05,666 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-08-10 17:52:05,674 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2017-08-10 17:52:05,684 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2017-08-10 17:52:05,684 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-08-10 17:52:05,735 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-08-10 17:52:05,745 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2017-08-10 17:52:05,746 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-08-10 17:52:05,775 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2017-08-10 17:52:05,780 - Directory['/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'create_parents': True, 'group': 'hadoop', 'mode': 0755, 'cd_access': 'a'}
2017-08-10 17:52:05,781 - Directory['/DATA/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-08-10 17:52:05,783 - Skipping setting up secure ZNode ACL for HFDS as it's supported only for NameNode HA mode.
2017-08-10 17:52:05,787 - Called service start with upgrade_type: None
2017-08-10 17:52:05,788 - Ranger Hdfs plugin is not enabled
2017-08-10 17:52:05,789 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2017-08-10 17:52:05,790 - /hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2017-08-10 17:52:05,790 - /DATA/hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2017-08-10 17:52:05,791 - Directory['/hadoop/hdfs/namenode/namenode-formatted/'] {'create_parents': True}
2017-08-10 17:52:05,791 - Directory['/DATA/hadoop/hdfs/namenode/namenode-formatted/'] {'create_parents': True}
2017-08-10 17:52:05,791 - Options for start command are: 
2017-08-10 17:52:05,792 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2017-08-10 17:52:05,792 - Changing owner for /var/run/hadoop from 0 to hdfs
2017-08-10 17:52:05,792 - Changing group for /var/run/hadoop from 0 to hadoop
2017-08-10 17:52:05,792 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2017-08-10 17:52:05,793 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2017-08-10 17:52:05,793 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2017-08-10 17:52:05,800 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2017-08-10 17:52:07,100 - Waiting for this NameNode to leave Safemode due to the following conditions: HA: False, isActive: True, upgradeType: None
2017-08-10 17:52:07,101 - Waiting up to 19 minutes for the NameNode to leave Safemode...
2017-08-10 17:52:07,102 - Execute['/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://hcebdrds.hansacequity.com:8020 -safemode get | grep 'Safe mode is OFF''] {'logoutput': True, 'tries': 115, 'user': 'hdfs', 'try_sleep': 10}
stty: standard input: Inappropriate ioctl for device
[hdfs@hcebdrds ~]$ 2017-08-10 17:52:08,395 - HdfsResource['/tmp'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://hcebdrds.hansacequity.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': None, 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/apps/falcon', u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0777}
2017-08-10 17:52:08,401 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://hcebdrds.hansacequity.com:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpTZl4Id 2>/tmp/tmpLy4X_B''] {'logoutput': None, 'quiet': False}
2017-08-10 17:52:09,689 - call returned (0, 'stty: standard input: Inappropriate ioctl for device\n[hdfs@hcebdrds ~]$ ')

Command failed after 1 tries
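
Incidentally, the repeated "stty: standard input: Inappropriate ioctl for device" lines, the "/home/hdfs/.bash_profile: line 20: [: too many arguments" message, and the shell prompt leaking into the curl output ("[hdfs@hcebdrds ~]$") all point to /home/hdfs/.bash_profile running interactive-only commands for non-interactive shells; that extra output is what corrupts the HTTP status code Ambari parses (hence "returned status_code=."). A common guard in .bash_profile looks like this (a sketch; the actual contents of line 20 are not shown above):

if [ -t 0 ]; then
    # stdin is a real terminal: safe to run tty-dependent commands here
    # (stty, prompt setup, etc.). Non-interactive shells such as
    # "su hdfs -l -s /bin/bash -c ..." skip this block and stay silent.
    stty erase '^?'
fi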

Re: HDP services installed on Hadoop but not able to start them.

@Jay SenSharma

The error log above is the NameNode error log.
