05-30-2018 02:45 AM
I am getting the error below. stderr: /var/lib/ambari-agent/data/errors-1302.txt
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/hook.py", line 37, in <module>
BeforeInstallHook().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 367, in execute
method(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/hook.py", line 34, in hook
install_packages()
File "/var/lib/ambari-agent/cache/stacks/HDP/2.0.6/hooks/before-INSTALL/scripts/shared_initialization.py", line 37, in install_packages
retry_count=params.agent_stack_retry_count)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 53, in action_install
self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 251, in install_package
self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 251, in checked_call_with_retries
return self._call_with_retries(cmd, is_checked=True, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 268, in _call_with_retries
code, out = func(cmd, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 72, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 102, in checked_call
tries=tries, try_sleep=try_sleep, timeout_kill_strategy=timeout_kill_strategy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 150, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 303, in _call
raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install hdp-select' returned 1. One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=<repoid> ...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable <repoid>
or
subscription-manager repos --disable=<repoid>
5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot find a valid baseurl for repo: base/7/x86_64
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=stock error was
14: curl#7 - "Failed to connect to 2001:1b48:203::4:10: Network is unreachable"
stdout: /var/lib/ambari-agent/data/output-1302.txt
2018-05-30 07:56:17,367 - Stack Feature Version Info: Cluster Stack=2.5, Command Stack=None, Command Version=None -> 2.5
2018-05-30 07:56:17,373 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2018-05-30 07:56:17,374 - Group['livy'] {}
2018-05-30 07:56:17,395 - Group['spark'] {}
2018-05-30 07:56:17,396 - Group['hdfs'] {}
2018-05-30 07:56:17,396 - Group['hadoop'] {}
2018-05-30 07:56:17,396 - Group['users'] {}
2018-05-30 07:56:17,397 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-30 07:56:17,398 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-30 07:56:17,399 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-30 07:56:17,400 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-30 07:56:17,401 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-05-30 07:56:17,402 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-30 07:56:17,403 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-30 07:56:17,404 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-05-30 07:56:17,405 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-05-30 07:56:17,406 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-30 07:56:17,407 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-30 07:56:17,408 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-05-30 07:56:17,409 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-30 07:56:17,410 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2018-05-30 07:56:17,411 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-30 07:56:17,412 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-30 07:56:17,413 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-30 07:56:17,415 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-30 07:56:17,416 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-05-30 07:56:17,416 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-05-30 07:56:17,445 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-05-30 07:56:17,451 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-05-30 07:56:17,452 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2018-05-30 07:56:17,452 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-05-30 07:56:17,454 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-05-30 07:56:17,455 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2018-05-30 07:56:17,462 - call returned (0, '1048')
2018-05-30 07:56:17,463 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1048'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2018-05-30 07:56:17,468 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1048'] due to not_if
2018-05-30 07:56:17,468 - Group['hdfs'] {}
2018-05-30 07:56:17,469 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2018-05-30 07:56:17,469 - FS Type:
2018-05-30 07:56:17,470 - Directory['/etc/hadoop'] {'mode': 0755}
2018-05-30 07:56:17,470 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-05-30 07:56:17,486 - Repository['HDP-2.5-repo-4'] {'append_to_file': False, 'base_url': 'http://192.168.184.64/HDP/centos7/2.x/updates/2.6.4.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-4', 'mirror_list': None}
2018-05-30 07:56:17,525 - File['/etc/yum.repos.d/ambari-hdp-4.repo'] {'content': '[HDP-2.5-repo-4]\nname=HDP-2.5-repo-4\nbaseurl=http://192.168.184.64/HDP/centos7/2.x/updates/2.6.4.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-05-30 07:56:17,525 - Writing File['/etc/yum.repos.d/ambari-hdp-4.repo'] because contents don't match
2018-05-30 07:56:17,526 - Repository['HDP-UTILS-1.1.0.21-repo-4'] {'append_to_file': True, 'base_url': 'http://192.168.184.64/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-4', 'mirror_list': None}
2018-05-30 07:56:17,529 - File['/etc/yum.repos.d/ambari-hdp-4.repo'] {'content': '[HDP-2.5-repo-4]\nname=HDP-2.5-repo-4\nbaseurl=http://192.168.184.64/HDP/centos7/2.x/updates/2.6.4.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.21-repo-4]\nname=HDP-UTILS-1.1.0.21-repo-4\nbaseurl=http://192.168.184.64/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-05-30 07:56:17,529 - Writing File['/etc/yum.repos.d/ambari-hdp-4.repo'] because contents don't match
2018-05-30 07:56:17,530 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-05-30 07:56:18,882 - Skipping installation of existing package unzip
2018-05-30 07:56:18,882 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-05-30 07:56:18,906 - Skipping installation of existing package curl
2018-05-30 07:56:18,906 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-05-30 07:56:18,931 - Installing package hdp-select ('/usr/bin/yum -d 0 -e 0 -y install hdp-select')
2018-05-30 07:56:47,508 - Execution of '/usr/bin/yum -d 0 -e 0 -y install hdp-select' returned 1. One of the configured repositories failed (Unknown),
and yum doesn't have enough cached data to continue. At this point the only
safe thing yum can do is fail. There are a few ways to work "fix" this:
1. Contact the upstream for the repository and get them to fix the problem.
2. Reconfigure the baseurl/etc. for the repository, to point to a working
upstream. This is most often useful if you are using a newer
distribution release than is supported by the repository (and the
packages for the previous distribution release still work).
3. Run the command with the repository temporarily disabled
yum --disablerepo=<repoid> ...
4. Disable the repository permanently, so yum won't use it by default. Yum
will then just ignore the repository until you permanently enable it
again or use --enablerepo for temporary usage:
yum-config-manager --disable <repoid>
or
subscription-manager repos --disable=<repoid>
5. Configure the failing repository to be skipped, if it is unavailable.
Note that yum will try to contact the repo. when it runs most commands,
so will have to try and fail each time (and thus. yum will be be much
slower). If it is a very temporary problem though, this is often a nice
compromise:
yum-config-manager --save --setopt=<repoid>.skip_if_unavailable=true
Cannot find a valid baseurl for repo: base/7/x86_64
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=stock error was
14: curl#7 - "Failed to connect to 2604:1580:fe02:2::10: Network is unreachable"
2018-05-30 07:56:47,508 - Failed to install package hdp-select. Executing '/usr/bin/yum clean metadata'
2018-05-30 07:56:47,715 - Retrying to install package hdp-select after 30 seconds
2018-05-30 07:57:46,082 - Skipping stack-select on SMARTSENSE because it does not exist in the stack-select package structure.
Command failed after 1 tries
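The last lines of the log show the real failure: yum aborts because it cannot reach mirrorlist.centos.org for the CentOS base repo ("Network is unreachable"), while the HDP packages come from the local mirror at 192.168.184.64. A minimal remediation sketch along the lines of yum's own suggestion 5, assuming the node is intentionally offline and that the failing repo IDs are the CentOS 7 defaults (base, updates, extras) — adjust to whatever `yum repolist all` actually reports:

# Check whether the CentOS mirrorlist is reachable at all (the log shows an
# IPv6 "Network is unreachable"; -4 forces IPv4 to rule out an IPv6-only issue):
curl -4 'http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=stock'

# Confirm the local HDP mirror itself responds:
curl -I http://192.168.184.64/HDP/centos7/2.x/updates/2.6.4.0/

# If the node has no internet access by design, mark the OS repos as optional
# so yum no longer aborts when they are unreachable (repo IDs assumed):
yum-config-manager --save --setopt=base.skip_if_unavailable=true
yum-config-manager --save --setopt=updates.skip_if_unavailable=true
yum-config-manager --save --setopt=extras.skip_if_unavailable=true

# Clean cached metadata and retry the install that failed:
yum clean all
/usr/bin/yum -d 0 -e 0 -y install hdp-select

If the node is supposed to have internet access, the fix is instead on the network side (default route, proxy, or DNS), and the repo files should be left as they are.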
05-02-2018 09:20 AM
@Geoffrey Thank you. How are ZooKeeper and YARN different in HA? Can you please let me know how exactly each of them works?
05-01-2018 06:12 PM
@Geoffrey Shelton Thanks for the quick response. For now I am planning to enable HA for Hive, Oozie, and HBase. Can you please suggest whether this is the right way to go? I have already set up the Ambari server with 1 master node and 2 data nodes, and it's working fine. This will be my first time setting up HA, so a step-by-step document would be a great help.
04-29-2018 12:09 AM
First time installing a Hortonworks cluster with 2 master nodes and 3 data nodes with High Availability; I need help with this. Is there a step-by-step document to follow?