
Issues with Ambari Cluster Install Wizard > Install, Start and Test > on CentOS 7.3



Explorer

After running the Hortonworks HDP 2.5 Hadoop single-node cluster installation, the Ambari Cluster Install Wizard > Install, Start and Test page displays the following failure messages at "Slider Client Install":

2017-05-24 18:33:41,039 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-05-24 18:33:41,041 - Group['livy'] {}
2017-05-24 18:33:41,043 - Group['spark'] {}
2017-05-24 18:33:41,043 - Group['zeppelin'] {}
2017-05-24 18:33:41,043 - Group['hadoop'] {}
2017-05-24 18:33:41,043 - Group['users'] {}
2017-05-24 18:33:41,044 - Group['knox'] {}
2017-05-24 18:33:41,044 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,045 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,046 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,048 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,049 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-24 18:33:41,050 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,051 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,052 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,053 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,054 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,055 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,056 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,057 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-24 18:33:41,058 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-24 18:33:41,059 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,060 - User['logsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,061 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,062 - User['mahout'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,063 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-24 18:33:41,064 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,065 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,066 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,067 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,068 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,069 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-24 18:33:41,070 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-05-24 18:33:41,073 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-05-24 18:33:41,078 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-05-24 18:33:41,079 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-05-24 18:33:41,080 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-05-24 18:33:41,082 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-05-24 18:33:41,089 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-05-24 18:33:41,090 - Group['hdfs'] {}
2017-05-24 18:33:41,090 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-05-24 18:33:41,091 - FS Type:
2017-05-24 18:33:41,091 - Directory['/etc/hadoop'] {'mode': 0755}
2017-05-24 18:33:41,114 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-05-24 18:33:41,115 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-05-24 18:33:41,134 - Initializing 2 repositories
2017-05-24 18:33:41,135 - Repository['HDP-2.5'] {'base_url': 'http://10.95.70.117/HDP/centos7/', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-05-24 18:33:41,145 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://10.95.70.117/HDP/centos7/\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-05-24 18:33:41,146 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://10.95.70.117/HDP-UTILS-1.1.0.21/repos/centos7/', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-05-24 18:33:41,150 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://10.95.70.117/HDP-UTILS-1.1.0.21/repos/centos7/\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-05-24 18:33:41,150 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-24 18:33:41,272 - Skipping installation of existing package unzip
2017-05-24 18:33:41,272 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-24 18:33:41,300 - Skipping installation of existing package curl
2017-05-24 18:33:41,301 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-24 18:33:41,328 - Skipping installation of existing package hdp-select
2017-05-24 18:33:41,722 - Package['slider_2_5_0_0_1245'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-24 18:33:41,847 - Installing package slider_2_5_0_0_1245 ('/usr/bin/yum -d 0 -e 0 -y install slider_2_5_0_0_1245')

Command failed after 1 tries

Please help me figure out what went wrong and how to fix this problem.

Thanks!

13 REPLIES

Re: Issues with Ambari Cluster Install Wizard > Install, Start and Test > on CentOS 7.3

Super Mentor

@john li

It looks like you do not have internet access from that host. Please check whether you are able to access the repo:

6.x line of RHEL/CentOS/Oracle Linux
wget -nv http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.0.0/hdp.repo


7.x line of RHEL/CentOS/Oracle Linux
wget -nv http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.0.0/hdp.repo


Also, please check whether you are able to manually install the mentioned package (this is just to isolate whether the issue is related to Ambari or to repo unavailability):

/usr/bin/yum -d 0 -e 0 -y install slider_2_5_0_0_1245
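Since the wizard is pulling from a local mirror (10.95.70.117) rather than the public Hortonworks repo, it can also help to probe the configured baseurl directly. A minimal sketch, assuming the repo file layout shown in the log; `baseurl_of` is a helper name invented for this example, demonstrated on a throwaway file rather than the live /etc/yum.repos.d:

```shell
# baseurl_of: print the baseurl= values from a yum .repo file
# (hypothetical helper; the commented curl probe is the actual check).
baseurl_of() { sed -n 's/^baseurl=//p' "$1"; }

# Demo against a throwaway repo file mirroring the one from the log:
repo=$(mktemp)
printf '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://10.95.70.117/HDP/centos7/\nenabled=1\ngpgcheck=0\n' > "$repo"
url=$(baseurl_of "$repo")
echo "$url"
# On the cluster host, probe the repo metadata behind that URL:
# curl -sf -o /dev/null "${url}repodata/repomd.xml" && echo reachable || echo NOT reachable
rm -f "$repo"
```

If the `curl` probe fails, the problem is repo availability rather than Ambari itself.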



Re: Issues with Ambari Cluster Install Wizard > Install, Start and Test > on CentOS 7.3

Expert Contributor

==Error log==

'base_url': 'http://10.95.70.117/HDP/centos7/', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-05-24 18:33:41,145 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://10.95.70.117/HDP/centos7/\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-05-24 18:33:41,146 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://10.95.70.117/HDP-UTILS-1.1.0.21/repos/centos7/', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-05-24 18:33:41,150 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://10.95.70.117/HDP-UTILS-1.1.0.21/repos/centos7/\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-05-24 18:33:41,150 - Package['unzip'] {'retry_on_repo_unavailability': False,

----------------------------

As per the log, Ambari has written the configured local repo files and is trying to install from them. Please configure the repo files for Ambari and the OS in /etc/yum.repos.d/, and also check and set up the Base URL for the HDP and HDP-UTILS repos in the Ambari UI under Versions.

Please also share the output of:

yum repolist
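Since the earlier log shows Ambari writing its own HDP.repo and HDP-UTILS.repo files, duplicate repo IDs across /etc/yum.repos.d are a common culprit. A small sketch for spotting them; `find_duplicate_repo_ids` is a name made up for this example, and it is demonstrated on a temporary directory standing in for the real /etc/yum.repos.d:

```shell
# find_duplicate_repo_ids: print any [repo-id] section that appears in more
# than one .repo file under the given directory (hypothetical helper name).
find_duplicate_repo_ids() {
  grep -h '^\[' "$1"/*.repo 2>/dev/null | tr -d '[]' | sort | uniq -d
}

# Demo on a throwaway directory standing in for /etc/yum.repos.d:
tmp=$(mktemp -d)
printf '[HDP-UTILS-1.1.0.21]\nbaseurl=http://10.95.70.117/HDP-UTILS-1.1.0.21/repos/centos7/\n' > "$tmp/HDP-UTILS.repo"
printf '[HDP-UTILS-1.1.0.21]\nbaseurl=http://10.95.70.117/HDP-UTILS-1.1.0.21/repos/centos7/\n' > "$tmp/local-hdp-utils.repo"
printf '[HDP-2.5]\nbaseurl=http://10.95.70.117/HDP/centos7/\n' > "$tmp/HDP.repo"
dups=$(find_duplicate_repo_ids "$tmp")
echo "$dups"    # HDP-UTILS-1.1.0.21
```

Any repo ID printed here is defined in more than one file, which matches the "listed more than once" warning yum emits.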


Re: Issues with Ambari Cluster Install Wizard > Install, Start and Test > on CentOS 7.3

Explorer

Thanks for Palanivelrajan Chellakutty's suggestion!

After typing the following command at the terminal:

yum repolist

[root@host01 data]# yum repolist
Loaded plugins: fastestmirror, langpacks
Repository HDP-UTILS-1.1.0.21 is listed more than once in the configuration
Loading mirror speeds from cached hostfile
 * base: centos.01link.hk
 * extras: centos.01link.hk
 * updates: centos.01link.hk
repo id               repo name                   status
!Ambari               Ambari                          12
!HDP-2.5              HDP-2.5                        200
!HDP-2.5.0.0          HDP Version - HDP-2.5.0.0      200
!HDP-UTILS-1.1.0.21   HDP-UTILS-1.1.0.21              64
!base/7/x86_64        CentOS-7 - Base              9,363
!extras/7/x86_64      CentOS-7 - Extras              337
!updates/7/x86_64     CentOS-7 - Updates           1,646
repolist: 11,822
[root@host01 data]#

I found that "Repository HDP-UTILS-1.1.0.21 is listed more than once in the configuration".

Is that what causes the problem?

How can I correct this?


Re: Issues with Ambari Cluster Install Wizard > Install, Start and Test > on CentOS 7.3

Super Mentor

@john li

Ideally, the "slider" package does not come from the HDP-UTILS repo. But yes, you should not have more than one conflicting repo configured on the host.

So please check the following directory and then remove the unwanted repo files.

# ls -lart /etc/yum.repos.d/
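If duplicate or unwanted repo files do turn up, a cautious way to remove them is to move them aside rather than delete them outright, so there is a rollback path. A minimal sketch; `move_aside` is a helper invented for this example and demonstrated on a throwaway directory (on the real host `repodir` would be /etc/yum.repos.d):

```shell
# move_aside: move the named repo files out of repodir into a backup dir
# (hypothetical helper; missing files are simply skipped).
move_aside() {
  repodir=$1; backup=$2; shift 2
  mkdir -p "$backup"
  for name in "$@"; do
    if [ -f "$repodir/$name" ]; then mv "$repodir/$name" "$backup/"; fi
  done
}

# Demo on a throwaway directory:
tmp=$(mktemp -d)
mkdir "$tmp/repos"
printf '[HDP-2.5]\n' > "$tmp/repos/HDP.repo"
printf '[Ambari]\n' > "$tmp/repos/ambari.repo"
move_aside "$tmp/repos" "$tmp/backup" HDP.repo HDP-UTILS.repo
ls "$tmp/repos"    # only ambari.repo remains
# Afterwards, on the real host: yum clean all && yum repolist
```

Running `yum clean all` afterwards makes yum rebuild its view of the remaining repos.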


Also, please double-check that the repo files are correct, as described in:

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.0/bk_command-line-installation/content/config-...



Re: Issues with Ambari Cluster Install Wizard > Install, Start and Test > on CentOS 7.3

Explorer

Thanks for Jay SenSharma's advice!

After typing the following command at the terminal:

/usr/bin/yum -d 0 -e 0 -y install slider_2_5_0_0_1245

To my surprise, it says:

Package slider_2_5_0_0_1245-0.91.0.2.5.0.0-1245.el6.noarch already installed and latest version

So what really causes the error?


Re: Issues with Ambari Cluster Install Wizard > Install, Start and Test > on CentOS 7.3

Super Mentor

@john li

Please check the IP address there: "http://10.95.70.117"

Is that your custom offline repo? Is it accessible?

Also, do you have multiple copies of the HDP and HDP-UTILS repo files inside "/etc/yum.repos.d/"? If yes, you should fix that by removing the unwanted repo files.

File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': 
'[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://10.95.70.117/HDP-UTILS-1.1.0.21/repos



Re: Issues with Ambari Cluster Install Wizard > Install, Start and Test > on CentOS 7.3

Explorer

After typing the following command at the terminal:

# ls -lart /etc/yum.repos.d/

I discovered that the file hdp.repo was created by myself and contains the correct info.

However, I did not generate the extra files "HDP.repo" and "HDP-UTILS.repo".

[root@host01 data]# ls -lart /etc/yum.repos.d/
total 64
-rw-r--r--.   1 root root  2893 Nov 30 02:12 CentOS-Vault.repo
-rw-r--r--.   1 root root  1331 Nov 30 02:12 CentOS-Sources.repo
-rw-r--r--.   1 root root   630 Nov 30 02:12 CentOS-Media.repo
-rw-r--r--.   1 root root   314 Nov 30 02:12 CentOS-fasttrack.repo
-rw-r--r--.   1 root root   649 Nov 30 02:12 CentOS-Debuginfo.repo
-rw-r--r--.   1 root root  1309 Nov 30 02:12 CentOS-CR.repo
-rw-r--r--.   1 root root  1664 Nov 30 02:12 CentOS-Base.repo
-rw-r--r--.   1 root root   105 May 24 14:25 ambari.repo
-rw-r--r--.   1 root root   471 May 24 14:44 hdp.repo
-rw-r--r--.   1 root root    92 May 24 17:21 HDP.repo
-rw-r--r--.   1 root root   135 May 24 17:21 HDP-UTILS.repo
drwxr-xr-x. 194 root root 12288 May 24 18:39 ..
drwxr-xr-x.   2 root root  4096 May 25 10:15 .

After deleting both extra files and re-running the Ambari Cluster Install Wizard, this time I managed to pass "Slider Client Install" and received the message below:

Command completed successfully!

However, a new error is encountered at "Zeppelin Notebook Install", and the failure messages are shown below:

stderr: /var/lib/ambari-agent/data/errors-189.txt
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/master.py", line 312, in <module>
    Master().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/ZEPPELIN/0.6.0.2.5/package/scripts/master.py", line 47, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 567, in install_packages
    retry_count=agent_stack_retry_count)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
    self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 83, in checked_call_with_retries
    return self._call_with_retries(cmd, is_checked=True, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 91, in _call_with_retries
    code, out = func(cmd, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install zeppelin_2_5_0_0_1245' returned 1.  One of the configured repositories failed (HDP-UTILS-1.1.0.21),
 and yum doesn't have enough cached data to continue. At this point the only
 safe thing yum can do is fail. There are a few ways to work "fix" this:

     1. Contact the upstream for the repository and get them to fix the problem.

     2. Reconfigure the baseurl/etc. for the repository, to point to a working
        upstream. This is most often useful if you are using a newer
        distribution release than is supported by the repository (and the
        packages for the previous distribution release still work).

     3. Run the command with the repository temporarily disabled
            yum --disablerepo=HDP-UTILS-1.1.0.21 ...

     4. Disable the repository permanently, so yum won't use it by default. Yum
        will then just ignore the repository until you permanently enable it
        again or use --enablerepo for temporary usage:

            yum-config-manager --disable HDP-UTILS-1.1.0.21
        or
            subscription-manager repos --disable=HDP-UTILS-1.1.0.21

     5. Configure the failing repository to be skipped, if it is unavailable.
        Note that yum will try to contact the repo. when it runs most commands,
        so will have to try and fail each time (and thus. yum will be be much
        slower). If it is a very temporary problem though, this is often a nice
        compromise:

            yum-config-manager --save --setopt=HDP-UTILS-1.1.0.21.skip_if_unavailable=true

failure: repodata/484967294295f252b27005cf4d8fe6959494917188577a3795135ae247cff517-filelists.sqlite.bz2 from HDP-UTILS-1.1.0.21: [Errno 256] No more mirrors to try.
http://10.95.70.117/HDP-UTILS-1.1.0.21/repos/centos7/repodata/484967294295f252b27005cf4d8fe695949491... [Errno 14] HTTP Error 404 - Not Found
stdout: /var/lib/ambari-agent/data/output-189.txt
2017-05-25 12:11:18,465 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-05-25 12:11:18,468 - Group['livy'] {}
2017-05-25 12:11:18,469 - Group['spark'] {}
2017-05-25 12:11:18,470 - Group['zeppelin'] {}
2017-05-25 12:11:18,470 - Group['hadoop'] {}
2017-05-25 12:11:18,470 - Group['users'] {}
2017-05-25 12:11:18,470 - Group['knox'] {}
2017-05-25 12:11:18,471 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,472 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,473 - User['atlas'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,474 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,475 - User['falcon'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-25 12:11:18,476 - User['accumulo'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,477 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,478 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,479 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,481 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,482 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,483 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,484 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-25 12:11:18,485 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-25 12:11:18,486 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,487 - User['logsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,488 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,489 - User['mahout'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,490 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-05-25 12:11:18,491 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,492 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,494 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,495 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,496 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,497 - User['knox'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-05-25 12:11:18,498 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-05-25 12:11:18,500 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-05-25 12:11:18,506 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-05-25 12:11:18,507 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-05-25 12:11:18,508 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-05-25 12:11:18,509 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-05-25 12:11:18,515 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-05-25 12:11:18,516 - Group['hdfs'] {}
2017-05-25 12:11:18,517 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-05-25 12:11:18,518 - FS Type: 
2017-05-25 12:11:18,518 - Directory['/etc/hadoop'] {'mode': 0755}
2017-05-25 12:11:18,544 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-05-25 12:11:18,545 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-05-25 12:11:18,567 - Initializing 2 repositories
2017-05-25 12:11:18,568 - Repository['HDP-2.5'] {'base_url': 'http://10.95.70.117/HDP/centos7/', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-05-25 12:11:18,579 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://10.95.70.117/HDP/centos7/\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-05-25 12:11:18,580 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://10.95.70.117/HDP-UTILS-1.1.0.21/repos/centos7/', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-05-25 12:11:18,585 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://10.95.70.117/HDP-UTILS-1.1.0.21/repos/centos7/\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-05-25 12:11:18,585 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-25 12:11:18,714 - Skipping installation of existing package unzip
2017-05-25 12:11:18,714 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-25 12:11:18,747 - Skipping installation of existing package curl
2017-05-25 12:11:18,747 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-25 12:11:18,778 - Skipping installation of existing package hdp-select
2017-05-25 12:11:19,126 - call['ambari-python-wrap /usr/bin/hdp-select status spark-client'] {'timeout': 20}
2017-05-25 12:11:19,156 - call returned (0, 'spark-client - 2.5.0.0-1245')
2017-05-25 12:11:19,173 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-05-25 12:11:19,177 - Package['zeppelin_2_5_0_0_1245'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-05-25 12:11:19,311 - Installing package zeppelin_2_5_0_0_1245 ('/usr/bin/yum -d 0 -e 0 -y install zeppelin_2_5_0_0_1245')

Command failed after 1 tries

May I know what went wrong?


Re: Issues with Ambari Cluster Install Wizard > Install, Start and Test > on CentOS 7.3

Super Mentor

@john li

This is because your Custom Offline Yum repository does not have the following dependency:

http://10.95.70.117/HDP-UTILS-1.1.0.21/repos/centos7/repodata/484967294295f252b27005cf4d8fe695949491... [Errno 14] HTTP Error 404 - Not Found


Hence you are getting the 404 error. It indicates that either your offline repo is not properly configured or it has some missing packages.

Please try fetching a fresh HDP repo (tar.gz) file from: http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-installation/content/hdp_25_reposi...


Also, you may refer to: http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-installation/content/getting_start...


Re: Issues with Ambari Cluster Install Wizard > Install, Start and Test > on CentOS 7.3

Explorer

Hi Jay,

Thanks for your quick reply!

I did follow your URL:

http://docs.hortonworks.com/HDPDocuments/Ambari-2.4.2.0/bk_ambari-installation/content/hdp_25_reposi...

and then downloaded the tarball from the specified section for CentOS 7:

http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7/HDP-UTILS-1.1.0.21-centos7.tar...

I also carefully checked the original file "HDP-UTILS-1.1.0.21-centos7.tar.gz" downloaded from this official Hortonworks URL:

wget -nv http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7/HDP-UTILS-1.1.0.21-centos7.tar...

It only has the following 7 files under the sub-folder "repodata". In other words, the bz2 file below is not available from Hortonworks:

484967294295f252b27005cf4d8fe6959494917188577a3795135ae247cff517-filelists.sqlite.bz2

The only bz2 file with the suffix "-filelists.sqlite.bz2" is:

6653e752ea78963b039e308c7fd6c46b0885b78a4fd0bd7845b9384332ba3507-filelists.sqlite.bz2

If the file "484967294295f252b27005cf4d8fe6959494917188577a3795135ae247cff517-filelists.sqlite.bz2" is a dependency, why is it not available from Hortonworks?
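The mismatch described above (yum asking for the 4849… filelists file while the repo only carries the 6653… one) is a classic signature of stale yum metadata: the cached repomd.xml on the client still points at files the mirror no longer publishes. Running `yum clean all` on the host and retrying usually clears it. To confirm which filelists file the mirror actually advertises, one can parse its repodata/repomd.xml; `filelists_href` is a helper invented for this example, demonstrated on a canned repomd fragment rather than a live fetch:

```shell
# filelists_href: extract the filelists.sqlite.bz2 path advertised by a
# repomd.xml file (hypothetical helper name).
filelists_href() {
  grep -o 'repodata/[0-9a-f]*-filelists\.sqlite\.bz2' "$1" | head -n 1
}

# Demo on a canned fragment; on the host you would first fetch the live file:
# curl -s http://10.95.70.117/HDP-UTILS-1.1.0.21/repos/centos7/repodata/repomd.xml
md=$(mktemp)
cat > "$md" <<'EOF'
<data type="filelists_db">
  <location href="repodata/6653e752ea78963b039e308c7fd6c46b0885b78a4fd0bd7845b9384332ba3507-filelists.sqlite.bz2"/>
</data>
EOF
href=$(filelists_href "$md")
echo "$href"
rm -f "$md"
```

If the advertised href matches what is actually on disk in the mirror's repodata directory, the repo itself is consistent and the stale file reference must be coming from the client's yum cache.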
