Created 10-10-2017 03:08 PM
I am using Ambari 2.5.1.0 and HDP-2.6.2.0. When I try to start Grafana through the Ambari UI, I get the error below.
ERROR:
stderr:
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana.py", line 79, in <module>
    AmsGrafana().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 329, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana.py", line 58, in start
    create_grafana_admin_pwd()
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana_util.py", line 252, in create_grafana_admin_pwd
    response = perform_grafana_get_call(GRAFANA_USER_URL, serverCall1)
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana_util.py", line 82, in perform_grafana_get_call
    raise Fail("Ambari Metrics Grafana update failed due to: %s" % str(ex))
resource_management.core.exceptions.Fail: Ambari Metrics Grafana update failed due to: [Errno 111] Connection refused
Created 10-10-2017 03:13 PM
Since you are doing a fresh install of Grafana, a possible cause of the following "Connection refused" error is a firewall:
File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana_util.py", line 252, in create_grafana_admin_pwd response = perform_grafana_get_call(GRAFANA_USER_URL, serverCall1) File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/metrics_grafana_util.py", line 82, in perform_grafana_get_call raise Fail("Ambari Metrics Grafana update failed due to: %s" % str(ex)) resource_management.core.exceptions.Fail: Ambari Metrics Grafana update failed due to: [Errno 111] Connection refused
Grafana uses port 3000 by default, so please check whether iptables (the firewall) is disabled on the Grafana host. In other words, port 3000 needs to be unblocked if it is currently blocked.
# netstat -tnlp | grep 3000
# netstat -tnlp | grep grafana
# service iptables status
# service iptables stop
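If the Grafana host runs a systemd-based OS (for example RHEL/CentOS 7), the "service iptables" commands above may not apply; a rough equivalent of the same checks, assuming firewalld is the firewall in use there, would be:
# ss -tnlp | grep 3000
# systemctl status firewalld
# firewall-cmd --list-ports
# firewall-cmd --list-services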
Also, from another host in the same cluster, check whether port 3000 is reachable:
# nc -v $GRAFANA_HOSTNAME 3000
(OR)
# telnet $GRAFANA_HOSTNAME 3000
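Because the failing call in your stack trace is an HTTP GET against port 3000 (the Ambari script polls the Grafana /api/user endpoint), you can also probe that endpoint directly; any HTTP response at all, even an authentication error, would show the port is reachable, while "Connection refused" points back at the service or a firewall. A minimal check, assuming Grafana is serving plain HTTP here:
# curl -v http://$GRAFANA_HOSTNAME:3000/api/user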
Also please check the Grafana logs for any errors. The "connection refused" message suggests Ambari has already attempted to start Grafana, so there should be logs on the Grafana host:
# less /var/log/ambari-metrics-grafana/grafana.log
# less /var/log/ambari-metrics-grafana/grafana.out
Created 10-11-2017 04:06 AM
Thank you for the reply.
In my case, neither ufw nor iptables is installed.
The contents of my /var/log/ambari-metrics-grafana/grafana.out were:
Wed Oct 11 03:42:37 UTC 2017 Starting Ambari Metrics Grafana: ....
Already running.
Wed Oct 11 03:58:25 UTC 2017 Starting Ambari Metrics Grafana: ....
Already running.
I have no /var/log/ambari-metrics-grafana/grafana.log.
And my stdout is:
2017-10-11 03:58:25,540 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-10-11 03:58:25,626 - Stack Feature Version Info: stack_version=2.6, version=2.6.2.0-205, current_cluster_version=2.6.2.0-205 -> 2.6.2.0-205
2017-10-11 03:58:25,630 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
User Group mapping (user_group) is missing in the hostLevelParams
2017-10-11 03:58:25,631 - Group['hadoop'] {}
2017-10-11 03:58:25,631 - Group['users'] {}
2017-10-11 03:58:25,632 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-10-11 03:58:25,633 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-10-11 03:58:25,633 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-10-11 03:58:25,633 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-10-11 03:58:25,633 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-10-11 03:58:25,634 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-10-11 03:58:25,634 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-10-11 03:58:25,634 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-10-11 03:58:25,635 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-10-11 03:58:25,638 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-10-11 03:58:25,638 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-10-11 03:58:25,644 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-10-11 03:58:25,644 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-10-11 03:58:25,649 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-10-11 03:58:25,649 - Group['hdfs'] {}
2017-10-11 03:58:25,649 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-10-11 03:58:25,649 - FS Type:
2017-10-11 03:58:25,649 - Directory['/etc/hadoop'] {'mode': 0755}
2017-10-11 03:58:25,657 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-10-11 03:58:25,657 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-10-11 03:58:25,665 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2017-10-11 03:58:25,668 - Skipping Execute[('setenforce', '0')] due to not_if
2017-10-11 03:58:25,668 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2017-10-11 03:58:25,669 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2017-10-11 03:58:25,669 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2017-10-11 03:58:25,671 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2017-10-11 03:58:25,672 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2017-10-11 03:58:25,675 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2017-10-11 03:58:25,680 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-10-11 03:58:25,681 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2017-10-11 03:58:25,681 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2017-10-11 03:58:25,683 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2017-10-11 03:58:25,685 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2017-10-11 03:58:25,804 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-10-11 03:58:25,806 - checked_call['hostid'] {}
2017-10-11 03:58:25,809 - checked_call returned (0, '0d0aa80a')
2017-10-11 03:58:25,810 - Directory['/etc/ambari-metrics-grafana/conf'] {'owner': 'ams', 'group': 'hadoop', 'create_parents': True, 'recursive_ownership': True, 'mode': 0755}
2017-10-11 03:58:25,811 - Directory['/var/log/ambari-metrics-grafana'] {'owner': 'ams', 'group': 'hadoop', 'create_parents': True, 'recursive_ownership': True, 'mode': 0755}
2017-10-11 03:58:25,812 - Directory['/var/lib/ambari-metrics-grafana'] {'owner': 'ams', 'group': 'hadoop', 'create_parents': True, 'recursive_ownership': True, 'mode': 0755}
2017-10-11 03:58:25,831 - Directory['/var/run/ambari-metrics-grafana'] {'owner': 'ams', 'group': 'hadoop', 'create_parents': True, 'recursive_ownership': True, 'mode': 0755}
2017-10-11 03:58:25,840 - File['/etc/ambari-metrics-grafana/conf/ams-grafana-env.sh'] {'content': InlineTemplate(...), 'owner': 'ams', 'group': 'hadoop'}
2017-10-11 03:58:25,843 - File['/etc/ambari-metrics-grafana/conf/ams-grafana.ini'] {'content': InlineTemplate(...), 'owner': 'ams', 'group': 'hadoop', 'mode': 0600}
2017-10-11 03:58:25,844 - Writing File['/etc/ambari-metrics-grafana/conf/ams-grafana.ini'] because contents don't match
2017-10-11 03:58:25,855 - Execute[('chown', '-R', u'ams', '/etc/ambari-metrics-grafana/conf')] {'sudo': True}
2017-10-11 03:58:25,859 - Execute[('chown', '-R', u'ams', u'/var/log/ambari-metrics-grafana')] {'sudo': True}
2017-10-11 03:58:25,863 - Execute[('chown', '-R', u'ams', u'/var/lib/ambari-metrics-grafana')] {'sudo': True}
2017-10-11 03:58:25,866 - Execute[('chown', '-R', u'ams', u'/var/run/ambari-metrics-grafana')] {'sudo': True}
2017-10-11 03:58:25,869 - Execute['/usr/sbin/ambari-metrics-grafana start'] {'not_if': "ambari-sudo.sh su ams -l -s /bin/bash -c 'test -f /var/run/ambari-metrics-grafana/grafana-server.pid && ps -p `cat /var/run/ambari-metrics-grafana/grafana-server.pid`'", 'user': 'ams'}
2017-10-11 03:58:25,890 - Grafana Server has started with pid: 31695
2017-10-11 03:58:25,890 - Connecting (GET) to dn5-231.mstorm.com:3000/api/user
2017-10-11 03:58:45,918 - Connection to Grafana failed. Next retry in 20 seconds.
2017-10-11 03:58:45,918 - Connecting (GET) to dn5-231.mstorm.com:3000/api/user
2017-10-11 03:59:05,938 - Connection to Grafana failed. Next retry in 20 seconds.
2017-10-11 03:59:05,938 - Connecting (GET) to dn5-231.mstorm.com:3000/api/user
2017-10-11 03:59:25,957 - Connection to Grafana failed. Next retry in 20 seconds.
2017-10-11 03:59:25,957 - Connecting (GET) to dn5-231.mstorm.com:3000/api/user
2017-10-11 03:59:45,978 - Connection to Grafana failed. Next retry in 20 seconds.
2017-10-11 03:59:45,978 - Connecting (GET) to dn5-231.mstorm.com:3000/api/user
2017-10-11 04:00:05,998 - Connection to Grafana failed. Next retry in 20 seconds.
2017-10-11 04:00:05,998 - Connecting (GET) to dn5-231.mstorm.com:3000/api/user
2017-10-11 04:00:26,018 - Connection to Grafana failed. Next retry in 20 seconds.
2017-10-11 04:00:26,018 - Connecting (GET) to dn5-231.mstorm.com:3000/api/user
2017-10-11 04:00:46,039 - Connection to Grafana failed. Next retry in 20 seconds.
2017-10-11 04:00:46,039 - Connecting (GET) to dn5-231.mstorm.com:3000/api/user
2017-10-11 04:01:06,054 - Connection to Grafana failed. Next retry in 20 seconds.
2017-10-11 04:01:06,055 - Connecting (GET) to dn5-231.mstorm.com:3000/api/user
2017-10-11 04:01:26,067 - Connection to Grafana failed. Next retry in 20 seconds.
2017-10-11 04:01:26,067 - Connecting (GET) to dn5-231.mstorm.com:3000/api/user
2017-10-11 04:01:46,075 - Connection to Grafana failed. Next retry in 20 seconds.
2017-10-11 04:01:46,075 - Connecting (GET) to dn5-231.mstorm.com:3000/api/user
2017-10-11 04:02:06,096 - Connection to Grafana failed. Next retry in 20 seconds.
2017-10-11 04:02:06,096 - Connecting (GET) to dn5-231.mstorm.com:3000/api/user
2017-10-11 04:02:26,114 - Connection to Grafana failed. Next retry in 20 seconds.
2017-10-11 04:02:26,114 - Connecting (GET) to dn5-231.mstorm.com:3000/api/user
2017-10-11 04:02:46,135 - Connection to Grafana failed. Next retry in 20 seconds.
2017-10-11 04:02:46,136 - Connecting (GET) to dn5-231.mstorm.com:3000/api/user
2017-10-11 04:03:06,156 - Connection to Grafana failed. Next retry in 20 seconds.
2017-10-11 04:03:06,156 - Connecting (GET) to dn5-231.mstorm.com:3000/api/user
Command failed after 1 tries
Created 10-11-2017 04:09 AM
Since we see the message:
Wed Oct 11 03:42:37 UTC 2017 Starting Ambari Metrics Grafana: .... Already running.
Grafana may not be listening on the 0.0.0.0 address (it might be bound only to localhost). Please check the /etc/hosts entries on that host in that case.
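For example, on the Grafana host you can confirm which address the process is actually bound to and that the hostname does not resolve to a loopback address (a quick sketch; substitute netstat for ss if you prefer):
# ss -tnlp | grep 3000   (look for 0.0.0.0:3000 or *:3000 rather than 127.0.0.1:3000)
# hostname -f
# getent hosts `hostname -f`   (should not return 127.0.0.1)
# cat /etc/hosts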
- Also, please try killing the Grafana process, verify that port 3000 has been released properly, and then start Grafana again to see whether the "connection refused" error recurs on restart.
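A minimal sketch of that kill-and-restart cycle, using the pid file and start script paths that appear in your stdout (adjust to your environment):
# cat /var/run/ambari-metrics-grafana/grafana-server.pid
# kill <pid_from_file>
# ss -tnlp | grep 3000   (should return nothing once the port is released)
# su - ams -c '/usr/sbin/ambari-metrics-grafana start'   (or start Grafana from the Ambari UI)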
Or Grafana may be configured to use a different port.
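You can confirm which port Grafana is actually configured with by inspecting the ams-grafana.ini that Ambari writes (the path is taken from your stdout; the exact property name can vary by version, so grepping for "port" is the simplest check):
# grep -i port /etc/ambari-metrics-grafana/conf/ams-grafana.ini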
Also, did you see any messages logged in grafana.log, or is only grafana.out being populated?
Created 10-12-2017 09:02 AM
Hello,
I have checked everything you mentioned above, and it is all correct in my case, but I am still getting the same error.