
Ambari fails to install Kibana

New Contributor

I have been following the tutorial:

https://cwiki.apache.org/confluence/display/METRON/Metron+with+HDP+2.5+bare-metal+install

I have attempted to run this on a single node. It has the minimum requirements as specified in the tutorial.

I finally made it to the Install, Test and Start step, where I get the following errors:

stderr: /var/lib/ambari-agent/data/errors-222.txt
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/KIBANA/4.5.1/package/scripts/kibana_master.py", line 155, in <module>
    Kibana().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/KIBANA/4.5.1/package/scripts/kibana_master.py", line 38, in install
    import params
  File "/var/lib/ambari-agent/cache/common-services/KIBANA/4.5.1/package/scripts/params.py", line 43, in <module>
    es_port = parsed.netloc.split(':')[1]
IndexError: list index out of range
stdout: /var/lib/ambari-agent/data/output-222.txt
2017-02-09 14:47:28,538 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-02-09 14:47:28,541 - Group['metron'] {}
2017-02-09 14:47:28,543 - Group['livy'] {}
2017-02-09 14:47:28,544 - Group['spark'] {}
2017-02-09 14:47:28,544 - Group['zeppelin'] {}
2017-02-09 14:47:28,545 - Group['hadoop'] {}
2017-02-09 14:47:28,545 - Group['kibana'] {}
2017-02-09 14:47:28,545 - Group['users'] {}
2017-02-09 14:47:28,546 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-09 14:47:28,548 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-09 14:47:28,549 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-09 14:47:28,550 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-09 14:47:28,551 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-02-09 14:47:28,553 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-09 14:47:28,554 - User['metron'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-09 14:47:28,555 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-09 14:47:28,556 - User['elasticsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-09 14:47:28,558 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-09 14:47:28,559 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-02-09 14:47:28,560 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-09 14:47:28,561 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-09 14:47:28,563 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-09 14:47:28,565 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-09 14:47:28,566 - User['kibana'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-09 14:47:28,567 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-09 14:47:28,568 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-09 14:47:28,570 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-02-09 14:47:28,571 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-02-09 14:47:28,575 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-02-09 14:47:28,586 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-02-09 14:47:28,587 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-02-09 14:47:28,589 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-02-09 14:47:28,591 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-02-09 14:47:28,607 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-02-09 14:47:28,608 - Group['hdfs'] {}
2017-02-09 14:47:28,609 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2017-02-09 14:47:28,611 - FS Type: 
2017-02-09 14:47:28,611 - Directory['/etc/hadoop'] {'mode': 0755}
2017-02-09 14:47:28,641 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-02-09 14:47:28,642 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-02-09 14:47:28,683 - Initializing 2 repositories
2017-02-09 14:47:28,684 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-02-09 14:47:28,702 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-02-09 14:47:28,703 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-02-09 14:47:28,710 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-02-09 14:47:28,711 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-02-09 14:47:28,885 - Skipping installation of existing package unzip
2017-02-09 14:47:28,885 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-02-09 14:47:28,901 - Skipping installation of existing package curl
2017-02-09 14:47:28,901 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-02-09 14:47:28,915 - Skipping installation of existing package hdp-select

Command failed after 1 tries

The yum repo is working fine, and when I run yum install hdp-select it reports that the package is already installed. Any ideas?

1 ACCEPTED SOLUTION

Master Mentor

@Christopher Huynh

It looks like you have not defined the port in the "kibana_es_url" property inside your "kibana-env" configuration. That seems to be causing this issue.

Please define "kibana_es_url" in "scheme://host:port" format (here, scheme means http or https).

Example:

https://something.example.com:securePort
OR
http://something.example.com:port
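For context, here is a minimal Python sketch (the hostname is hypothetical) of why the missing port produces the IndexError in params.py: urlparse puts "host:port" in netloc, so splitting on ":" only yields a second element when a port is actually present in the URL.

```python
from urllib.parse import urlparse  # urlparse.urlparse on the Python 2 used by Ambari

# URL without an explicit port: netloc contains no ':', so split(':')
# returns a single-element list and index [1] raises IndexError,
# matching the traceback from params.py line 43.
parsed = urlparse("http://es-host.example.com")
print(parsed.netloc.split(':'))  # ['es-host.example.com']

# URL with a port, as the accepted answer recommends: index [1] now
# holds the port string and the Kibana install can proceed.
parsed = urlparse("http://es-host.example.com:9200")
es_port = parsed.netloc.split(':')[1]
print(es_port)  # '9200'
```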



2 REPLIES


New Contributor

Perfect, this resolved my problem 🙂