Getting an error during HDF NiFi installation using Ambari

New Contributor

I am trying to install NiFi as an HDF component via Ambari. The OS is SUSE 12 SP4; the Ambari and HDF versions are given below.

  1. ambari-2.7.3.0-sles12.tar.gz
  2. HDF-3.3.1.0-sles12-rpm.tar.gz
  3. hdf-ambari-mpack-3.3.1.0-10.tar.gz
  4. HDP-UTILS-1.1.0.22-sles12.tar.gz

I have set up a local repository for the installation, since there is no internet connectivity. Ambari installed successfully, but when we try to install HDF (ZooKeeper, NiFi), the installation fails. The error is given below.


stderr:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 37, in <module>
    BeforeInstallHook().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 32, in hook
    install_repos()
  File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/repo_initialization.py", line 68, in install_repos
    Script.repository_util.create_repo_files()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/functions/repository_util.py", line 86, in create_repo_files
    Repository(None, action="create")
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/repository.py", line 69, in action_create
    self.update(repo_file_path)
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/repository.py", line 112, in update
    checked_call(self.update_cmd, sudo=True)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 72, in inner
    result = function(command, **kwargs)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 102, in checked_call
    tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy, returns=returns)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 150, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/ambari-agent/lib/resource_management/core/shell.py", line 314, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of 'zypper clean --all' returned 1. Unexpected exception.
Url scheme is a required component
Please file a bug report about this.
See http://en.opensuse.org/Zypper/Troubleshooting for instructions.

stdout:

2019-06-06 07:41:34,723 - Stack Feature Version Info: Cluster Stack=3.3, Command Stack=None, Command Version=None -> 3.3
2019-06-06 07:41:34,726 - Group['hadoop'] {}
2019-06-06 07:41:34,727 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-06-06 07:41:34,768 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-06-06 07:41:34,769 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2019-06-06 07:41:34,770 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2019-06-06 07:41:34,771 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2019-06-06 07:41:34,775 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2019-06-06 07:41:34,791 - Repository['HDF-3.3-repo-51'] {'base_url': '', 'action': ['prepare'], 'components': [u'HDF', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdf-51', 'mirror_list': None}
2019-06-06 07:41:34,799 - Repository['HDP-UTILS-1.1.0.22-repo-51'] {'base_url': '', 'action': ['prepare'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdf-51', 'mirror_list': None}
2019-06-06 07:41:34,801 - Repository[None] {'action': ['create']}
2019-06-06 07:41:34,802 - File['/tmp/tmpF8164w'] {'content': '[HDF-3.3-repo-51]\nname=HDF-3.3-repo-51\nbaseurl=\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-51]\nname=HDP-UTILS-1.1.0.22-repo-51\nbaseurl=\n\npath=/\nenabled=1\ngpgcheck=0'}
2019-06-06 07:41:34,802 - Writing File['/tmp/tmpF8164w'] because contents don't match
2019-06-06 07:41:34,803 - Rewriting /etc/zypp/repos.d/ambari-hdf-51.repo since it has changed.
2019-06-06 07:41:34,803 - File['/etc/zypp/repos.d/ambari-hdf-51.repo'] {'content': StaticFile('/tmp/tmpF8164w')}
2019-06-06 07:41:34,803 - Writing File['/etc/zypp/repos.d/ambari-hdf-51.repo'] because it doesn't exist
2019-06-06 07:41:34,804 - Flushing package manager cache since repo file content is about to change
2019-06-06 07:41:34,804 - checked_call[['zypper', 'clean', '--all']] {'sudo': True}
2019-06-06 07:41:34,827 - File['/etc/zypp/repos.d/ambari-hdf-51.repo'] {'action': ['delete']}
2019-06-06 07:41:34,827 - Deleting File['/etc/zypp/repos.d/ambari-hdf-51.repo']
2019-06-06 07:41:34,831 - Skipping stack-select on SMARTSENSE because it does not exist in the stack-select package structure.

Command failed after 1 tries


2 Replies

Mentor

@Manoj Kumar

Confirm that you can browse to the newly created local repositories, where <web.server>, <web.server.directory>, <OS>, <version>, and <latest.version> represent the web server name, home directory, operating system type, version, and latest release version, respectively. Can you also share your hdf.repo and ambari.repo?

Ambari Base URL

http://<web.server>/Ambari-2.7.3.0/<OS>

HDF Base URL

http://<web.server>/hdf/HDF/<OS>/3.x/updates/<latest.version>

HDP-UTILS Base URL

http://<web.server>/hdp/HDP-UTILS-<version>/repos/<OS>

Important: Be sure to record these Base URLs. You will need them when installing Ambari and the cluster.
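For comparison, a correctly populated repo file should look roughly like the sketch below. This is illustrative only: <web.server> and the paths are placeholders for your local repository server, and the repo IDs are the ones that appear in your failing host's log.

```ini
# /etc/zypp/repos.d/ambari-hdf-51.repo -- illustrative sketch, not your actual file.
# Replace <web.server> and the paths with your local web server layout.
[HDF-3.3-repo-51]
name=HDF-3.3-repo-51
baseurl=http://<web.server>/hdf/HDF/sles12/3.x/updates/3.3.1.0
path=/
enabled=1
gpgcheck=0

[HDP-UTILS-1.1.0.22-repo-51]
name=HDP-UTILS-1.1.0.22-repo-51
baseurl=http://<web.server>/hdp/HDP-UTILS-1.1.0.22/repos/sles12
path=/
enabled=1
gpgcheck=0
```

The key point is that every baseurl line must carry a full URL including the scheme (http:// or https://); an empty baseurl is what makes zypper fail.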

Steps to install the HDF Management Pack on an HDP cluster

Stop the Ambari server:

# ambari-server stop

Back up your Ambari resources folder:

# cp -r /var/lib/ambari-server/resources /var/lib/ambari-server/resources.backup

Release notes for the management pack


https://docs.hortonworks.com/HDPDocuments/HDF3/HDF-3.1.2/bk_release-notes/content/ch_hdf_relnotes.ht...
  
On the Ambari host

cd /tmp
wget -nv http://public-repo-1.hortonworks.com/HDF/sles12/3.x/updates/3.1.2.0/tars/hdf_ambari_mp/hdf-ambari-mp...

Install the mpack

# ambari-server install-mpack --mpack=/tmp/hdf-ambari-mpack-3.1.2.0-7.tar.gz --purge --verbose

Expected output:

# ambari-server install-mpack --mpack=/tmp/hdf-ambari-mpack-3.1.2.0-7.tar.gz
Using python /usr/bin/python
Installing management pack
Ambari Server 'install-mpack' completed successfully.
 
Restart the Ambari server:
# ambari-server start
 
In the Ambari UI, navigate to Add Service and proceed with the desired options 🙂

Super Mentor

@Manoj Kumar

Basically we see the following failure:

resource_management.core.exceptions.ExecutionFailed: Execution of 'zypper clean --all' returned 1. Unexpected exception.
Url scheme is a required component

So can you please check whether the baseurl is empty in the following repo file on the failing host?

# grep 'baseurl'  /etc/zypp/repos.d/ambari-hdf-51.repo
# grep 'baseurl'  /etc/zypp/repos.d/ambari*
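If you prefer a programmatic check, a minimal sketch like the one below (standard-library Python only; the sample content and section names are just examples) flags repo sections whose baseurl is missing or empty, which is exactly the condition that makes `zypper clean --all` fail with "Url scheme is a required component":

```python
import configparser

def find_empty_baseurls(repo_text):
    """Return section names in a zypper .repo file whose baseurl is missing or empty."""
    parser = configparser.ConfigParser()
    parser.read_string(repo_text)
    bad = []
    for section in parser.sections():
        baseurl = parser.get(section, "baseurl", fallback="").strip()
        if not baseurl:
            bad.append(section)
    return bad

# Sample mirroring the broken file from the log: baseurl= is empty.
sample = """\
[HDF-3.3-repo-51]
name=HDF-3.3-repo-51
baseurl=
path=/
enabled=1
gpgcheck=0
"""
print(find_empty_baseurls(sample))  # -> ['HDF-3.3-repo-51']
```

In practice you would read /etc/zypp/repos.d/ambari-hdf-51.repo from disk and run the same check; any section it reports needs its Base URL fixed in Ambari.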



If the baseurl is empty or incorrect, check whether the repository URLs are registered correctly in Ambari:

Ambari UI --> Stacks and Versions --> Versions (Tab) --> Manage Versions --> check the URLs entered for "HDF-3.3"


Also, please verify that those repo URLs are accessible, for example with a curl command, from both the ambari-server host and the host where the installation is failing:

# curl -iv http://xxxxxxx/repodata/repomd.xml


You can also check "/var/log/ambari-server/ambari-server.log" to see whether it reports anything about the HDF repo registration.

