
HDP 3.0 with local repository - failing to deploy

SOLVED

New Contributor

Environment:

1 x RHEL Ambari host (Ambari Server is installed on this machine)

4 x RHEL hosts for the cluster (node1-node4)

In the Ambari GUI the deployment fails (image attached).

When I click on the "issues encountered" link, the following error lines show up:

stderr: 
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 37, in <module>
    BeforeInstallHook().execute()
  File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/hook.py", line 33, in hook
    install_packages()
  File "/var/lib/ambari-agent/cache/stack-hooks/before-INSTALL/scripts/shared_initialization.py", line 37, in install_packages
    retry_count=params.agent_stack_retry_count)
  File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
    self.env.run()
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/ambari-agent/lib/resource_management/core/providers/packaging.py", line 30, in action_install
    self._pkg_manager.install_package(package_name, self.__create_context())
  File "/usr/lib/ambari-agent/lib/ambari_commons/repo_manager/yum_manager.py", line 219, in install_package
    shell.repository_manager_executor(cmd, self.properties, context)
  File "/usr/lib/ambari-agent/lib/ambari_commons/shell.py", line 753, in repository_manager_executor
    raise RuntimeError(message)
RuntimeError: Failed to execute command '/usr/bin/yum -y install hdp-select', exited with code '1', message: 'Repository InstallMedia is listed more than once in the configuration

 One of the configured repositories failed (Unknown),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=<repoid> ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable <repoid>
        or
            subscription-manager repos --disable=<repoid>

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true

Cannot find a valid baseurl for repo: HDP-3.1-repo-1
'
Command aborted. Reason: 'Server considered task failed and automatically aborted it'
 stdout:
2018-12-21 11:47:09,204 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=None -> 3.1
2018-12-21 11:47:09,213 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-12-21 11:47:09,216 - Group['hdfs'] {}
2018-12-21 11:47:09,218 - Group['hadoop'] {}
2018-12-21 11:47:09,218 - Group['users'] {}
2018-12-21 11:47:09,219 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-12-21 11:47:09,221 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-12-21 11:47:09,222 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-12-21 11:47:09,223 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2018-12-21 11:47:09,224 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2018-12-21 11:47:09,226 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-12-21 11:47:09,227 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-12-21 11:47:09,228 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-12-21 11:47:09,230 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-12-21 11:47:09,235 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-12-21 11:47:09,236 - Group['hdfs'] {}
2018-12-21 11:47:09,236 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2018-12-21 11:47:09,237 - FS Type: HDFS
2018-12-21 11:47:09,237 - Directory['/etc/hadoop'] {'mode': 0755}
2018-12-21 11:47:09,237 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-12-21 11:47:09,238 - Changing owner for /var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir from 0 to hdfs
2018-12-21 11:47:09,258 - Repository['HDP-3.1-repo-2'] {'base_url': 'http://ambari.hdp.local/hdp/HDP/centos7/3.1.0.0-78/', 'action': ['prepare'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-2', 'mirror_list': None}
2018-12-21 11:47:09,268 - Repository['HDP-UTILS-1.1.0.22-repo-2'] {'base_url': 'http://ambari.hdp.local/hdp/HDP-UTILS/centos7/1.1.0.22', 'action': ['prepare'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-2', 'mirror_list': None}
2018-12-21 11:47:09,272 - Repository[None] {'action': ['create']}
2018-12-21 11:47:09,273 - File['/tmp/tmpYbCIof'] {'content': '[HDP-3.1-repo-2]\nname=HDP-3.1-repo-2\nbaseurl=http://ambari.hdp.local/hdp/HDP/centos7/3.1.0.0-78/\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-2]\nname=HDP-UTILS-1.1.0.22-repo-2\nbaseurl=http://ambari.hdp.local/hdp/HDP-UTILS/centos7/1.1.0.22\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-12-21 11:47:09,274 - Writing File['/tmp/tmpYbCIof'] because contents don't match
2018-12-21 11:47:09,274 - Rewriting /etc/yum.repos.d/ambari-hdp-2.repo since it has changed.
2018-12-21 11:47:09,274 - File['/etc/yum.repos.d/ambari-hdp-2.repo'] {'content': StaticFile('/tmp/tmpYbCIof')}
2018-12-21 11:47:09,276 - Writing File['/etc/yum.repos.d/ambari-hdp-2.repo'] because it doesn't exist
2018-12-21 11:47:09,276 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-12-21 11:47:09,626 - Skipping installation of existing package unzip
2018-12-21 11:47:09,627 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-12-21 11:47:09,844 - Skipping installation of existing package curl
2018-12-21 11:47:09,844 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-12-21 11:47:10,062 - Installing package hdp-select ('/usr/bin/yum -y install hdp-select')
2018-12-21 11:47:10,404 - Skipping stack-select on SMARTSENSE because it does not exist in the stack-select package structure.
Command aborted. Reason: 'Server considered task failed and automatically aborted it'


Command failed after 1 tries


On the cluster hosts, ambari-hdp-1.repo is created automatically with empty baseurls:

[root@node1 ~]# cat /etc/yum.repos.d/ambari-hdp-1.repo
[HDP-3.1-repo-1]
name=HDP-3.1-repo-1
baseurl=

path=/
enabled=1
gpgcheck=0
[HDP-UTILS-1.1.0.22-repo-1]
name=HDP-UTILS-1.1.0.22-repo-1
baseurl=

path=/
enabled=1
gpgcheck=0
[root@node1 ~]#

96441-deployment-fails.jpg

Is this normal?
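For comparison, the agent log above does write a populated ambari-hdp-2.repo. Assuming the same local mirror (ambari.hdp.local) should serve repo-1, a correctly generated ambari-hdp-1.repo would look roughly like this:

```ini
[HDP-3.1-repo-1]
name=HDP-3.1-repo-1
baseurl=http://ambari.hdp.local/hdp/HDP/centos7/3.1.0.0-78/

path=/
enabled=1
gpgcheck=0
[HDP-UTILS-1.1.0.22-repo-1]
name=HDP-UTILS-1.1.0.22-repo-1
baseurl=http://ambari.hdp.local/hdp/HDP-UTILS/centos7/1.1.0.22

path=/
enabled=1
gpgcheck=0
```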

I then edited /etc/yum.repos.d/ambari-hdp-1.repo on all cluster hosts, updated the baseurl for HDP and HDP-UTILS, and started the deployment again. This time it fails with a different error:

stderr: 


Command aborted. Reason: 'Server considered task failed and automatically aborted it'
 stdout:
2018-12-21 12:11:42,517 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=None -> 3.1
2018-12-21 12:11:42,527 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-12-21 12:11:42,529 - Group['hdfs'] {}
2018-12-21 12:11:42,531 - Group['hadoop'] {}
2018-12-21 12:11:42,531 - Group['users'] {}
2018-12-21 12:11:42,532 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-12-21 12:11:42,533 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-12-21 12:11:42,534 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-12-21 12:11:42,535 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2018-12-21 12:11:42,536 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2018-12-21 12:11:42,537 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-12-21 12:11:42,539 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2018-12-21 12:11:42,539 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-12-21 12:11:42,542 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-12-21 12:11:42,547 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-12-21 12:11:42,547 - Group['hdfs'] {}
2018-12-21 12:11:42,548 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2018-12-21 12:11:42,548 - FS Type: HDFS
2018-12-21 12:11:42,549 - Directory['/etc/hadoop'] {'mode': 0755}
2018-12-21 12:11:42,549 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-12-21 12:11:42,570 - Repository['HDP-3.1-repo-2'] {'base_url': 'http://ambari.hdp.local/hdp/HDP/centos7/3.1.0.0-78/', 'action': ['prepare'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-2', 'mirror_list': None}
2018-12-21 12:11:42,580 - Repository['HDP-UTILS-1.1.0.22-repo-2'] {'base_url': 'http://ambari.hdp.local/hdp/HDP-UTILS/centos7/1.1.0.22', 'action': ['prepare'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-2', 'mirror_list': None}
2018-12-21 12:11:42,583 - Repository[None] {'action': ['create']}
2018-12-21 12:11:42,584 - File['/tmp/tmpUZMPwd'] {'content': '[HDP-3.1-repo-2]\nname=HDP-3.1-repo-2\nbaseurl=http://ambari.hdp.local/hdp/HDP/centos7/3.1.0.0-78/\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-2]\nname=HDP-UTILS-1.1.0.22-repo-2\nbaseurl=http://ambari.hdp.local/hdp/HDP-UTILS/centos7/1.1.0.22\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-12-21 12:11:42,585 - Writing File['/tmp/tmpUZMPwd'] because contents don't match
2018-12-21 12:11:42,585 - File['/tmp/tmpiZYY4_'] {'content': StaticFile('/etc/yum.repos.d/ambari-hdp-2.repo')}
2018-12-21 12:11:42,587 - Writing File['/tmp/tmpiZYY4_'] because contents don't match
2018-12-21 12:11:42,587 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-12-21 12:11:42,935 - Skipping installation of existing package unzip
2018-12-21 12:11:42,935 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-12-21 12:11:43,151 - Skipping installation of existing package curl
2018-12-21 12:11:43,151 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-12-21 12:11:43,370 - Installing package hdp-select ('/usr/bin/yum -y install hdp-select')
2018-12-21 12:11:54,503 - Skipping stack-select on SMARTSENSE because it does not exist in the stack-select package structure.
Command aborted. Reason: 'Server considered task failed and automatically aborted it'


Command failed after 1 tries


Any ideas what might be going on here?

I'm fairly new to Hortonworks - actually to big data in general - so I'm just trying to find my way through self-learning and growth.

Thanks

1 ACCEPTED SOLUTION

Re: HDP 3.0 with local repository - failing to deploy

Expert Contributor

Hi @IMRAN KHAN,

Could you please check that the repos are correct on the host where the installation is failing:

# grep 'baseurl' /etc/yum.repos.d/* | grep -i HDP

Try cleaning the yum cache by running:

# yum clean all

Please check whether multiple "ambari-hdp-<repoid>.repo" files are present in "/etc/yum.repos.d/". If so, move the unwanted files to a backup folder.

Then try installing the "hdp-select" package manually on the host where it is failing:

# yum install hdp-select -y

Hope this helps!
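The cleanup step above can be sketched as a small script. This is a hedged sketch, not part of the original answer: it assumes the stale files follow the ambari-hdp-*.repo naming seen in the logs, and it keeps only the newest such file. The directories are passed as parameters so it can be exercised safely; on a real host they would be /etc/yum.repos.d and a backup folder of your choice.

```shell
# Move all but the newest ambari-hdp-*.repo file into a backup directory.
# Assumption: the newest file (by mtime) is the one Ambari just generated.
backup_duplicate_repos() {
  repo_dir="$1"    # e.g. /etc/yum.repos.d
  backup_dir="$2"  # e.g. /root/repo-backup
  mkdir -p "$backup_dir"
  # Newest ambari-hdp-*.repo file, by modification time.
  newest=$(ls -t "$repo_dir"/ambari-hdp-*.repo 2>/dev/null | head -n 1)
  for f in "$repo_dir"/ambari-hdp-*.repo; do
    [ -e "$f" ] || continue            # glob matched nothing
    [ "$f" = "$newest" ] || mv "$f" "$backup_dir/"
  done
}
```

After that, `yum clean all` and `yum install hdp-select -y` as suggested above.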

4 REPLIES


Re: HDP 3.0 with local repository - failing to deploy

New Contributor

Hello @Sampath Kumar

Thanks for your leads. It solved the repository problem, but then I encountered several more issues.

However, I have been able to resolve those, and now my cluster is up and running with 4 green hosts. The only warning I have now is about the SmartSense gateway. Since my hosts are not connected to the internet, I am not worried about the SmartSense gateway communication warning.

Thanks for your help and support.

Re: HDP 3.0 with local repository - failing to deploy

New Contributor

While deploying an HDP 3.1 cluster on SUSE 12 SP3, I ran into the same issue.

Can this solution apply there as well?

Also, it seems there is no hdp-select package for SUSE 12 SP3.

Please help! Thanks in advance!

Re: HDP 3.0 with local repository - failing to deploy

New Contributor

It seems the Ambari agent cannot get the right repo parameters to generate the .repo file:

HDP-3.1-repo

HDP-UTILS-1.1.0.22-repo

But with zypper install we can access the local repo normally.

Could anybody give more info?