
Unable to start services on Ambari, even after successful installation of all the different services. Error is: Connection failed to http://hcebdrdp.hansacequity.com:8188/ws/v1/timeline ()

New Contributor

Re: Unable to start services on Ambari, even after successful installation of all the different services. Error is: Connection failed to http://hcebdrdp.hansacequity.com:8188/ws/v1/timeline ()

Super Mentor

@Rajat Inderiya

1. Do you see any error messages in your ambari-server.log?

2. Any error messages in ambari-agent.log? (A quick way to scan both logs is sketched below.)

3. Are you able to start any of the installed services manually?
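
For example, a minimal scan of both logs (a sketch, assuming the default Ambari log locations):

# Scan the Ambari server and agent logs for recent errors and exceptions
grep -iE 'error|exception' /var/log/ambari-server/ambari-server.log | tail -n 50
grep -iE 'error|exception' /var/log/ambari-agent/ambari-agent.log | tail -n 50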

Re: Unable to start services on Ambari, even after successful installation of all the different services. Error is: Connection failed to http://hcebdrdp.hansacequity.com:8188/ws/v1/timeline ()

New Contributor

Thanks, Jay, for the reply.

Yes, the error message is:

Connection failed to http://hcebdrds.hansacequity.com:50070 (<urlopen error [Errno 111] Connection refused>)

The exact error log is attached in the image capture4.

For each service the error message is the same as above, and the port is the one that service is using.

Yes, I was able to start 3 services manually using Ambari, for example: NFS Gateway, HST agent, Activity Explorer.

Is it something related to the port, or some failure in the installation? (Screenshots: capture3.jpg, capture4.jpg)

Re: Unable to start services on Ambari, even after successful installation of all the different services. Error is: Connection failed to http://hcebdrdp.hansacequity.com:8188/ws/v1/timeline ()

Super Mentor

@Rajat Inderiya

50070 is the NameNode HTTP port. So you should first check whether the HDFS configuration has the correct entry for "dfs.namenode.http-address".
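
For example, one way to check it (a sketch, assuming the stock HDP client paths that appear later in this thread):

# Ask the HDFS client for the effective NameNode HTTP address
hdfs getconf -confKey dfs.namenode.http-address

# Or read it straight from the hdfs-site.xml that Ambari manages
grep -A 1 'dfs.namenode.http-address' /usr/hdp/current/hadoop-client/conf/hdfs-site.xml

The value should point at the NameNode host's FQDN and port 50070.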

Also make sure there are no firewall restrictions blocking access to those ports from the Ambari host or from the other nodes of the cluster, and that every node of your cluster is able to resolve every other node's hostname.

Also please check whether the cluster hosts are resolving the FQDN (hostname -f) properly:
# hostname -f

.

You can refer to the following link to see whether you are able to start the NameNode properly (manually): https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.3/bk_upgrading_hdp_manually/content/start-had...
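
As a rough sketch, the manual start on an HDP install comes down to the same command Ambari itself issues (visible in the stdout log later in this thread):

# Start the NameNode as the hdfs user, using the Ambari-managed config directory
su - hdfs -c '/usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode'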

After starting the NameNode manually, please check whether it has opened port 50070 properly, and whether there are any errors in the NameNode logs:

# cat /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid 
# ps -ef | grep NameNode
# netstat -tnlpa | grep 50070

Also, do you see any OS-related issues? For example, do you have enough RAM on your hosts to run the services?

It would also be good to check whether your hosts meet the minimum hardware requirements: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-installation/content/meet_minimum...
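
A quick way to sanity-check memory, cores, and free disk on each host might be:

free -m     # total and available RAM in MB
nproc       # number of CPU cores
df -h /     # free space on the root filesystem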

.

Re: Unable to start services on Ambari, even after successful installation of all the different services. Error is: Connection failed to http://hcebdrdp.hansacequity.com:8188/ws/v1/timeline ()

New Contributor

Hi @Jay SenSharma,

1) Can you please help me check whether the HDFS configuration has the correct entry for "dfs.namenode.http-address"?

2) Yes, there are no firewall restrictions (iptables is disabled), and every node of the cluster is able to resolve every other node's hostname, i.e., there are entries for all hosts in /etc/hosts.

3) Yes, the cluster hosts are resolving the FQDN correctly.

Starting the NameNode manually: Yes, I tried starting it manually, but port 50070 is still not opened.

1) ps -ef | grep Namenode gives the output below:

root 12625 12611 0 19:06 pts/21 00:00:00 grep Namenode

2) netstat -tnlpa | grep 50070 runs from the command line, but nothing is listening on 50070.

OS-related issues:

I don't think there is any OS-related issue. I have two machines, each with 8 GB RAM, 4 cores, and 500 GB of disk space. As of now I have only installed the services.

Yes, the system meets all the minimum requirements mentioned in the documentation.

Below is the error log from when I try to start the NameNode service using Ambari. I am getting the same kind of error for all other services too. Can you figure out any possible reason behind the error from this log?

Thanks for the help. Please reply if you find any possible solution.

--------------------------------Error log-------------------------------------------------------------------

stderr: /var/lib/ambari-agent/data/errors-515.txt

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 420, in <module>
    NameNode().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/namenode.py", line 101, in start
    upgrade_suspended=params.upgrade_suspended, env=env)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 215, in namenode
    create_hdfs_directories()
  File "/var/lib/ambari-agent/cache/common-services/HDFS/2.1.0.2.0/package/scripts/hdfs_namenode.py", line 282, in create_hdfs_directories
    mode=0777,
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 459, in action_create_on_execute
    self.action_delayed("create")
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 456, in action_delayed
    self.get_hdfs_resource_executor().action_delayed(action_name, self)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 247, in action_delayed
    self._assert_valid()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 231, in _assert_valid
    self.target_status = self._get_file_status(target)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 292, in _get_file_status
    list_status = self.util.run_command(target, 'GETFILESTATUS', method='GET', ignore_status_codes=['404'], assertable_result=False)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 192, in run_command
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X GET 'http://hcebdrds.hansacequity.com:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs'' returned status_code=.

stdout: /var/lib/ambari-agent/data/output-515.txt

2017-06-19 18:54:23,349 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.3.0-37
2017-06-19 18:54:23,349 - Checking if need to create versioned conf dir /etc/hadoop/2.5.3.0-37/0
2017-06-19 18:54:23,350 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-06-19 18:54:23,414 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '')
2017-06-19 18:54:23,414 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-06-19 18:54:23,473 - checked_call returned (0, '')
2017-06-19 18:54:23,474 - Ensuring that hadoop has the correct symlink structure
2017-06-19 18:54:23,475 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-06-19 18:54:23,720 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.3.0-37
2017-06-19 18:54:23,721 - Checking if need to create versioned conf dir /etc/hadoop/2.5.3.0-37/0
2017-06-19 18:54:23,721 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-06-19 18:54:23,784 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '')
2017-06-19 18:54:23,785 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-06-19 18:54:23,842 - checked_call returned (0, '')
2017-06-19 18:54:23,844 - Ensuring that hadoop has the correct symlink structure
2017-06-19 18:54:23,844 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-06-19 18:54:23,847 - Group['livy'] {}
2017-06-19 18:54:23,850 - Group['spark'] {}
2017-06-19 18:54:23,851 - Group['hadoop'] {}
2017-06-19 18:54:23,851 - Group['users'] {}
2017-06-19 18:54:23,852 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-19 18:54:23,854 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-19 18:54:23,855 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-19 18:54:23,857 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-06-19 18:54:23,859 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-19 18:54:23,860 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-06-19 18:54:23,862 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-19 18:54:23,863 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-19 18:54:23,865 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-06-19 18:54:23,867 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-19 18:54:23,868 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-19 18:54:23,870 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-19 18:54:23,871 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-19 18:54:23,873 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-19 18:54:23,874 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-19 18:54:23,876 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-19 18:54:23,878 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-06-19 18:54:23,882 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-06-19 18:54:23,890 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-06-19 18:54:23,891 - Group['hdfs'] {}
2017-06-19 18:54:23,892 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2017-06-19 18:54:23,894 - FS Type: 
2017-06-19 18:54:23,894 - Directory['/etc/hadoop'] {'mode': 0755}
2017-06-19 18:54:23,937 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-06-19 18:54:23,938 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-06-19 18:54:23,965 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-06-19 18:54:23,976 - Skipping Execute[('setenforce', '0')] due to not_if
2017-06-19 18:54:23,977 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-06-19 18:54:23,982 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-06-19 18:54:23,983 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-06-19 18:54:23,998 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2017-06-19 18:54:24,003 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2017-06-19 18:54:24,004 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-06-19 18:54:24,038 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2017-06-19 18:54:24,040 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-06-19 18:54:24,043 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-06-19 18:54:24,055 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2017-06-19 18:54:24,062 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-06-19 18:54:24,372 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.3.0-37
2017-06-19 18:54:24,373 - Checking if need to create versioned conf dir /etc/hadoop/2.5.3.0-37/0
2017-06-19 18:54:24,373 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-06-19 18:54:24,437 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '')
2017-06-19 18:54:24,437 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-06-19 18:54:24,500 - checked_call returned (0, '')
2017-06-19 18:54:24,501 - Ensuring that hadoop has the correct symlink structure
2017-06-19 18:54:24,502 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-06-19 18:54:24,503 - Stack Feature Version Info: stack_version=2.5, version=2.5.3.0-37, current_cluster_version=2.5.3.0-37 -> 2.5.3.0-37
2017-06-19 18:54:24,508 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.3.0-37
2017-06-19 18:54:24,509 - Checking if need to create versioned conf dir /etc/hadoop/2.5.3.0-37/0
2017-06-19 18:54:24,510 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-06-19 18:54:24,576 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '')
2017-06-19 18:54:24,577 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-06-19 18:54:24,642 - checked_call returned (0, '')
2017-06-19 18:54:24,644 - Ensuring that hadoop has the correct symlink structure
2017-06-19 18:54:24,644 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-06-19 18:54:24,659 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1}
2017-06-19 18:54:24,744 - checked_call returned (0, '2.5.3.0-37', '')
2017-06-19 18:54:24,757 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2017-06-19 18:54:24,773 - File['/etc/security/limits.d/hdfs.conf'] {'content': Template('hdfs.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2017-06-19 18:54:24,774 - XmlConfig['hadoop-policy.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-06-19 18:54:24,801 - Generating config: /usr/hdp/current/hadoop-client/conf/hadoop-policy.xml
2017-06-19 18:54:24,801 - File['/usr/hdp/current/hadoop-client/conf/hadoop-policy.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-06-19 18:54:24,825 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-06-19 18:54:24,848 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-client.xml
2017-06-19 18:54:24,849 - File['/usr/hdp/current/hadoop-client/conf/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-06-19 18:54:24,864 - Directory['/usr/hdp/current/hadoop-client/conf/secure'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'cd_access': 'a'}
2017-06-19 18:54:24,866 - XmlConfig['ssl-client.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf/secure', 'configuration_attributes': {}, 'configurations': ...}
2017-06-19 18:54:24,888 - Generating config: /usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml
2017-06-19 18:54:24,889 - File['/usr/hdp/current/hadoop-client/conf/secure/ssl-client.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-06-19 18:54:24,905 - XmlConfig['ssl-server.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2017-06-19 18:54:24,927 - Generating config: /usr/hdp/current/hadoop-client/conf/ssl-server.xml
2017-06-19 18:54:24,928 - File['/usr/hdp/current/hadoop-client/conf/ssl-server.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-06-19 18:54:24,946 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {'final': {'dfs.support.append': 'true', 'dfs.datanode.data.dir': 'true', 'dfs.namenode.http-address': 'true', 'dfs.namenode.name.dir': 'true', 'dfs.webhdfs.enabled': 'true', 'dfs.datanode.failed.volumes.tolerated': 'true'}}, 'configurations': ...}
2017-06-19 18:54:24,957 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2017-06-19 18:54:24,958 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2017-06-19 18:54:25,008 - XmlConfig['core-site.xml'] {'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'mode': 0644, 'configuration_attributes': {'final': {'fs.defaultFS': 'true'}}, 'owner': 'hdfs', 'configurations': ...}
2017-06-19 18:54:25,018 - Generating config: /usr/hdp/current/hadoop-client/conf/core-site.xml
2017-06-19 18:54:25,018 - File['/usr/hdp/current/hadoop-client/conf/core-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': 0644, 'encoding': 'UTF-8'}
2017-06-19 18:54:25,045 - File['/usr/hdp/current/hadoop-client/conf/slaves'] {'content': Template('slaves.j2'), 'owner': 'hdfs'}
2017-06-19 18:54:25,047 - Directory['/DATA/hadoop/hdfs/namenode'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True, 'mode': 0755, 'cd_access': 'a'}
2017-06-19 18:54:25,048 - Called service start with upgrade_type: None
2017-06-19 18:54:25,048 - Ranger admin not installed
2017-06-19 18:54:25,049 - /DATA/hadoop/hdfs/namenode/namenode-formatted/ exists. Namenode DFS already formatted
2017-06-19 18:54:25,049 - Directory['/DATA/hadoop/hdfs/namenode/namenode-formatted/'] {'create_parents': True}
2017-06-19 18:54:25,051 - File['/etc/hadoop/conf/dfs.exclude'] {'owner': 'hdfs', 'content': Template('exclude_hosts_list.j2'), 'group': 'hadoop'}
2017-06-19 18:54:25,051 - Options for start command are: 
2017-06-19 18:54:25,052 - Directory['/var/run/hadoop'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 0755}
2017-06-19 18:54:25,052 - Changing owner for /var/run/hadoop from 0 to hdfs
2017-06-19 18:54:25,052 - Changing group for /var/run/hadoop from 0 to hadoop
2017-06-19 18:54:25,053 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2017-06-19 18:54:25,053 - Directory['/var/log/hadoop/hdfs'] {'owner': 'hdfs', 'group': 'hadoop', 'create_parents': True}
2017-06-19 18:54:25,054 - File['/var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'] {'action': ['delete'], 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2017-06-19 18:54:25,061 - Execute['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'ulimit -c unlimited ;  /usr/hdp/current/hadoop-client/sbin/hadoop-daemon.sh --config /usr/hdp/current/hadoop-client/conf start namenode''] {'environment': {'HADOOP_LIBEXEC_DIR': '/usr/hdp/current/hadoop-client/libexec'}, 'not_if': 'ambari-sudo.sh  -H -E test -f /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid && ambari-sudo.sh  -H -E pgrep -F /var/run/hadoop/hdfs/hadoop-hdfs-namenode.pid'}
2017-06-19 18:54:26,135 - Waiting for this NameNode to leave Safemode due to the following conditions: HA: False, isActive: True, upgradeType: None
2017-06-19 18:54:26,136 - Waiting up to 19 minutes for the NameNode to leave Safemode...
2017-06-19 18:54:26,137 - Execute['/usr/hdp/current/hadoop-hdfs-namenode/bin/hdfs dfsadmin -fs hdfs://hcebdrds.hansacequity.com:8020 -safemode get | grep 'Safe mode is OFF''] {'logoutput': True, 'tries': 115, 'user': 'hdfs', 'try_sleep': 10}
stty: standard input: Inappropriate ioctl for device
[hdfs@HCEDBSRV05 ~]$ 2017-06-19 18:54:27,211 - HdfsResource['/tmp'] {'security_enabled': False, 'hadoop_bin_dir': '/usr/hdp/current/hadoop-client/bin', 'keytab': [EMPTY], 'dfs_type': '', 'default_fs': 'hdfs://hcebdrds.hansacequity.com:8020', 'hdfs_resource_ignore_file': '/var/lib/ambari-agent/data/.hdfs_resource_ignore', 'hdfs_site': ..., 'kinit_path_local': '/usr/bin/kinit', 'principal_name': None, 'user': 'hdfs', 'owner': 'hdfs', 'hadoop_conf_dir': '/usr/hdp/current/hadoop-client/conf', 'type': 'directory', 'action': ['create_on_execute'], 'immutable_paths': [u'/apps/hive/warehouse', u'/mr-history/done', u'/app-logs', u'/tmp'], 'mode': 0777}
2017-06-19 18:54:27,217 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -sS -L -w '"'"'%{http_code}'"'"' -X GET '"'"'http://hcebdrds.hansacequity.com:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs'"'"' 1>/tmp/tmpHY65Oo 2>/tmp/tmpl3QcJ8''] {'logoutput': None, 'quiet': False}
2017-06-19 18:54:28,288 - call returned (0, 'stty: standard input: Inappropriate ioctl for device\n[hdfs@HCEDBSRV05 ~]$ ')

Command failed after 1 tries

Re: Unable to start services on Ambari, even after successful installation of all the different services. Error is: Connection failed to http://hcebdrdp.hansacequity.com:8188/ws/v1/timeline ()

Super Mentor

@Rajat Inderiya

If you ran the following command on the NameNode host, then its output indicates that your NameNode is not running: the command does not return a NameNode process, which means it is down.

ps -ef | grep Namenode
root 12625 12611 0 19:06 pts/21 00:00:00 grep Namenode

.

Because the NameNode process is down, Ambari is not able to make the following curl call successfully:

  File "/usr/lib/python2.6/site-packages/resource_management/libraries/providers/hdfs_resource.py", line 192, in run_command
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of 'curl -sS -L -w '%{http_code}' -X GET 'http://hcebdrds.hansacequity.com:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs'' returned status_code=.
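
To confirm this, the same WebHDFS call can be reproduced by hand from the NameNode host; with the NameNode down, the verbose output should show "Connection refused":

# Verbose curl against the failing WebHDFS URL from the traceback above
curl -v 'http://hcebdrds.hansacequity.com:50070/webhdfs/v1/tmp?op=GETFILESTATUS&user.name=hdfs'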

.

- So first you should check the NameNode logs to find out why it is not running:

/var/log/hadoop/hdfs/hadoop-hdfs-namenode-xxxxxx.log
/var/log/hadoop/hdfs/hadoop-hdfs-namenode-xxxxxx.out
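
For example (a sketch, using a wildcard because the exact file name includes the hostname):

# See what, if anything, the NameNode has written, then look for startup errors
ls -lrt /var/log/hadoop/hdfs/
grep -iE 'error|fatal|exception' /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.log | tail -n 50
tail -n 50 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-*.out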

.

Re: Unable to start services on Ambari, even after successful installation of all the different services. Error is: Connection failed to http://hcebdrdp.hansacequity.com:8188/ws/v1/timeline ()

New Contributor

Hi @Jay SenSharma,

I am not able to see any logs under the /var/log/hadoop/hdfs directory.

The directory is empty.
