Member since: 08-09-2017 · Posts: 11 · Kudos Received: 0 · Solutions: 0
09-04-2017
05:41 PM
Marked the yum.conf issue as the resolution. For the libtirpc-devel issue, the fix was:
yum-config-manager --enable rhel-7-for-power-le-eus-optional-rpms
yum-config-manager --enable rhel-7-rhel-7-for-power-le-optional-rpms
Thanks ever so much to @Jay SenSharma and @Geoffrey Shelton Okot for all your help on this. The cluster install is currently at 19%.
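For anyone hitting the same libtirpc-devel dependency on RHEL 7 for POWER, the fix above can be sketched as a short check sequence (repo ids copied from this post; they vary by subscription, so verify yours with `subscription-manager repos --list`):

```shell
# Enable the optional channels that carry libtirpc-devel on ppc64le
# (repo ids as given in the post above; adjust to your entitlement)
yum-config-manager --enable rhel-7-for-power-le-eus-optional-rpms
yum-config-manager --enable rhel-7-rhel-7-for-power-le-optional-rpms

# Confirm the dependency now resolves before re-running the install
yum info libtirpc-devel
```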
09-04-2017
04:34 PM
I can see you have already answered similar questions to this on other threads. I am reading them now.
09-04-2017
04:33 PM
@Jay SenSharma We are getting closer... The install now gets further, but there's a problem with:
2017-09-04 17:21:49,354 - Execution of '/usr/bin/yum -d 0 -e 0 -y install hadoop_2_6_2_0_205-hdfs' returned 1. Error: Package: hadoop_2_6_2_0_205-hdfs-2.7.3.2.6.2.0-205.ppc64le (HDP-2.6)
Requires: libtirpc-devel
I have this error on most of the nodes. Crucially, I also see this error from the CLI. It looks like this has been seen before (although this is an x86 reference): https://community.hortonworks.com/questions/96763/hdp-26-ambari-install-fails-on-rhel-7-on-libtirpc.html
So I am trying to figure out how I get this package, or which ppc64 repo I am supposed to add from RHEL.
09-04-2017
03:39 PM
@Jay SenSharma I am convinced this is something to do with Ambari not going via the proxy. I can use wget to retrieve files from both repo locations (I have blanked the IP address with x.x.x.x):
[root@hdplab80 tmp]# wget http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.2.0/repodata/repomd.xml
--2017-09-04 16:20:31-- http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.2.0/repodata/repomd.xml
Connecting to x.x.x.x:3128... connected.
Proxy request sent, awaiting response... 200 OK
Length: 2989 (2.9K) [text/xml]
Saving to: ‘repomd.xml’
100%[=========================================================================>] 2,989 --.-K/s in 0s
2017-09-04 16:20:31 (898 MB/s) - ‘repomd.xml’ saved [2989/2989]
[root@hdplab80 tmp]# rm repomd.xml
rm: remove regular file ‘repomd.xml’? y
[root@hdplab80 tmp]# wget http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ppc64le/repodata/repomd.xml
--2017-09-04 16:20:53-- http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ppc64le/repodata/repomd.xml
Connecting to x.x.x.x:3128... connected.
Proxy request sent, awaiting response... 200 OK
Length: 2583 (2.5K) [text/xml]
Saving to: ‘repomd.xml’
100%[=========================================================================>] 2,583 --.-K/s in 0s
2017-09-04 16:20:53 (822 MB/s) - ‘repomd.xml’ saved [2583/2583]
Here's the output of the greps:
[root@hdplab80 ~]# grep 'proxy' /etc/yum.conf
[root@hdplab80 ~]# grep 'proxy' ~/.bash_profile
[root@hdplab80 ~]# grep 'proxy' ~/.profile
grep: /root/.profile: No such file or directory
[root@hdplab80 ~]# grep 'proxy' /var/lib/ambari-server/ambari-env.sh
export AMBARI_JVM_ARGS="$AMBARI_JVM_ARGS -Xms512m -Xmx2048m -XX:MaxPermSize=128m -Djava.security.auth.login.config=$ROOT/etc/ambari-server/conf/krb5JAASLogin.conf -Djava.security.krb5.conf=/etc/krb5.conf -Djavax.security.auth.useSubjectCredsOnly=false -Dhttp.proxyHost=x.x.x.x -Dhttp.proxyPort=3128"
I have my /etc/environment file configured as such:
[root@hdplab80 ~]# cat /etc/environment
export ftp_proxy=http://x.x.x.x:3128/
export http_proxy=http://x.x.x.x:3128/
export ntp_proxy=http://x.x.x.x:3128/
export https_proxy=http://x.x.x.x:3128/
I have also run all those exports from the CLI. As for the suggested export http_proxy=http://localhost:80 : should I run that? Won't it break my outbound proxy connection?
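One thing the grep output above makes visible: /etc/yum.conf contains no proxy setting, so wget (which reads the http_proxy environment variable) gets out, while yum run non-interactively by the Ambari agent may not. A minimal sketch of the likely fix, assuming the same proxy host and port as in the wget transcript:

```shell
# /etc/yum.conf -- add under the [main] section; yum reads this file
# regardless of the calling process's environment, so yum spawned by
# ambari-agent picks up the proxy too (address as used in this thread)
proxy=http://x.x.x.x:3128
```

After adding it, `yum clean metadata && yum repolist` from the CLI is a quick sanity check.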
09-04-2017
03:23 PM
I have checked that I can use wget to get the repomd.xml files for both repos. It works:
[root@hdplab80 tmp]# wget http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.2.0/repodata/repomd.xml
--2017-09-04 16:20:31-- http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.2.0/repodata/repomd.xml
Connecting to x.x.x.x:3128... connected.
Proxy request sent, awaiting response... 200 OK
Length: 2989 (2.9K) [text/xml]
Saving to: ‘repomd.xml’
100%[=========================================================================>] 2,989 --.-K/s in 0s
2017-09-04 16:20:31 (898 MB/s) - ‘repomd.xml’ saved [2989/2989]
[root@hdplab80 tmp]# rm repomd.xml
rm: remove regular file ‘repomd.xml’? y
[root@hdplab80 tmp]# wget http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ppc64le/repodata/repomd.xml
--2017-09-04 16:20:53-- http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ppc64le/repodata/repomd.xml
Connecting to x.x.x.x:3128... connected.
Proxy request sent, awaiting response... 200 OK
Length: 2583 (2.5K) [text/xml]
Saving to: ‘repomd.xml’
100%[=========================================================================>] 2,583 --.-K/s in 0s
2017-09-04 16:20:53 (822 MB/s) - ‘repomd.xml’ saved [2583/2583]
09-04-2017
03:11 PM
Hi @Geoffrey Shelton Okot @Jay SenSharma I am back looking at this. Ambari is now at 2.5.2.0. There is now a redhat-ppc7 public repo suggested by Ambari, which configures HDP.repo and HDP-UTILS.repo in /etc/yum.repos.d. Their contents are:
[root@hdplab80 yum.repos.d]# cat HDP.repo
[HDP-2.6]
name=HDP-2.6
baseurl=http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.2.0
path=/
enabled=1
gpgcheck=0
[root@hdplab80 yum.repos.d]# cat HDP-UTILS.repo
[HDP-UTILS-1.1.0.21]
name=HDP-UTILS-1.1.0.21
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ppc64le
path=/
enabled=1
However, when Ambari attempts to perform the HDP software install, we are back to seeing problems with it being able to use the repos:
2017-09-04 15:35:05,569 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-09-04 15:35:05,590 - Installing package hdp-select ('/usr/bin/yum -d 0 -e 0 -y install hdp-select')
2017-09-04 15:42:11,982 - Execution of '/usr/bin/yum -d 0 -e 0 -y install hdp-select' returned 1. One of the configured repositories failed (HDP-2.6),
and yum doesn't have enough cached data to continue.
## output cut ##
failure: repodata/repomd.xml from HDP-2.6: [Errno 256] No more mirrors to try.
http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.2.0/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to public-repo-1.hortonworks.com:80; Operation now in progress"
http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.2.0/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to public-repo-1.hortonworks.com:80; Operation now in progress"
http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.2.0/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to public-repo-1.hortonworks.com:80; Operation now in progress"
http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.2.0/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to public-repo-1.hortonworks.com:80; Operation now in progress"
http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.2.0/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to public-repo-1.hortonworks.com:80; Operation now in progress"
http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.2.0/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to public-repo-1.hortonworks.com:80; Operation now in progress"
http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.2.0/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to public-repo-1.hortonworks.com:80; Operation now in progress"
http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.2.0/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to public-repo-1.hortonworks.com:80; Operation now in progress"
http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.2.0/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to public-repo-1.hortonworks.com:80; Operation now in progress"
http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.2.0/repodata/repomd.xml: [Errno 14] curl#7 - "Failed connect to public-repo-1.hortonworks.com:80; Operation now in progress"
2017-09-04 15:42:11,982 - Failed to install package hdp-select. Executing '/usr/bin/yum clean metadata'
2017-09-04 15:44:19,568 - Retrying to install package hdp-select after 30 seconds
Command aborted. Reason: 'Server considered task failed and automatically aborted it'
But if I drop to the CLI and try to manually install hdp-select, it works:
[root@hdplab81 yum.repos.d]# yum install hdp-select
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
rhel-7-for-power-le-rpms | 2.3 kB 00:00:00
Resolving Dependencies
--> Running transaction check
---> Package hdp-select.noarch 0:2.6.2.0-205 will be installed
--> Finished Dependency Resolution
Dependencies Resolved
===================================================================================================================
Package Arch Version Repository Size
===================================================================================================================
Installing:
hdp-select noarch 2.6.2.0-205 HDP-2.6 11 k
Transaction Summary
===================================================================================================================
Install 1 Package
Total download size: 11 k
Installed size: 29 k
Is this ok [y/d/N]:
I don't see why Ambari is failing to use a repo which is clearly working. Any help much appreciated (again). I am out of ideas. Thanks, John
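A way to test the "Ambari fails, CLI works" split directly is to run the exact command Ambari runs, but with the login shell's proxy exports stripped. `env -i` starts the command with an empty environment, roughly approximating how the agent spawns yum (a diagnostic sketch, not Ambari's actual spawn mechanism):

```shell
# If this fails like the Ambari log does while a plain 'yum install'
# succeeds, repo access depends on the shell's http_proxy exports
# rather than on a proxy setting inside /etc/yum.conf
env -i /usr/bin/yum -d 0 -e 0 -y install hdp-select
```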
08-22-2017
01:09 PM
@Geoffrey Shelton Okot Thanks for your continued support on this! I did as you said and now I get a very short error:
Registering with the server...
Registration with the server failed.
I even tried changing the /etc/python/cert-verification.cfg file to the below, and cycled httpd, ambari-server, and the ambari-agents, but that made no difference:
[https]
verify=disable
If I try automatic install/registration, I get this error:
Command start time 2017-08-22 13:33:40
('WARNING 2017-08-22 13:33:51,547 NetUtil.py:121 - Server at https://F.Q.D.N:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-08-22 13:33:51,547 HeartbeatHandlers.py:116 - Stop event received
INFO 2017-08-22 13:33:51,547 NetUtil.py:127 - Stop event received
INFO 2017-08-22 13:33:51,547 ExitHelper.py:53 - Performing cleanup before exiting...
INFO 2017-08-22 13:33:51,547 ExitHelper.py:67 - Cleanup finished, exiting with code:0
INFO 2017-08-22 13:33:54,517 main.py:264 - Agent died gracefully, exiting.
INFO 2017-08-22 13:33:54,517 ExitHelper.py:53 - Performing cleanup before exiting...
INFO 2017-08-22 13:33:54,955 main.py:126 - loglevel=logging.INFO
INFO 2017-08-22 13:33:54,955 main.py:126 - loglevel=logging.INFO
INFO 2017-08-22 13:33:54,955 main.py:126 - loglevel=logging.INFO
INFO 2017-08-22 13:33:54,959 DataCleaner.py:39 - Data cleanup thread started
INFO 2017-08-22 13:33:54,961 DataCleaner.py:120 - Data cleanup started
INFO 2017-08-22 13:33:54,975 DataCleaner.py:122 - Data cleanup finished
INFO 2017-08-22 13:33:55,014 PingPortListener.py:50 - Ping port listener started on port: 8670
INFO 2017-08-22 13:33:55,016 main.py:417 - Connecting to Ambari server at https://F.Q.D.N:8440 (x.x.x.x)
INFO 2017-08-22 13:33:55,016 NetUtil.py:67 - Connecting to https://F.Q.D.N:8440/ca
ERROR 2017-08-22 13:33:55,093 NetUtil.py:93 - [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)
ERROR 2017-08-22 13:33:55,093 NetUtil.py:94 - SSLError: Failed to connect. Please check openssl library versions.
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2017-08-22 13:33:55,095 NetUtil.py:121 - Server at https://F.Q.D.N:8440 is not reachable, sleeping for 10 seconds...
', None)
Not sure where to go with this next. I'm tempted to rebuild from scratch, but I can't help thinking I'll hit the same problems.
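One workaround that has been reported for this CERTIFICATE_VERIFY_FAILED loop on RHEL 7 (with the Python/openssl combination flagged in the Bugzilla link in the log above) is pinning the agent's TLS protocol in the agent config. A sketch, assuming the default agent config path; verify the setting against your Ambari version before relying on it:

```shell
# /etc/ambari-agent/conf/ambari-agent.ini -- add under [security]
# (reported workaround for the openssl certificate-verify failure)
[security]
force_https_protocol=PROTOCOL_TLSv1_2
```

Followed by `ambari-agent restart` on each node.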
08-21-2017
04:48 PM
Sorry, I had to go and work on something less interesting... I came back to this today with a fresh head. I managed to configure the local repos for HDP and HDP-UTILS by pulling down the appropriate tar files. However, I am now having a problem I didn't have before: the "Confirm hosts" section is failing. It looks like the ambari-server and ambari-agents are not talking to each other. Here's the error:
==========================
Running setup agent script...
==========================
Command start time 2017-08-21 19:04:11
('WARNING 2017-08-21 19:04:22,849 NetUtil.py:121 - Server at https://x.x.x.x:8440 is not reachable, sleeping for 10 seconds...
INFO 2017-08-21 19:04:22,849 HeartbeatHandlers.py:116 - Stop event received
INFO 2017-08-21 19:04:22,849 NetUtil.py:127 - Stop event received
INFO 2017-08-21 19:04:22,849 ExitHelper.py:53 - Performing cleanup before exiting...
INFO 2017-08-21 19:04:22,849 ExitHelper.py:67 - Cleanup finished, exiting with code:0
INFO 2017-08-21 19:04:23,092 main.py:264 - Agent died gracefully, exiting.
INFO 2017-08-21 19:04:23,092 ExitHelper.py:53 - Performing cleanup before exiting...
INFO 2017-08-21 19:04:23,536 main.py:126 - loglevel=logging.INFO
INFO 2017-08-21 19:04:23,537 main.py:126 - loglevel=logging.INFO
INFO 2017-08-21 19:04:23,537 main.py:126 - loglevel=logging.INFO
INFO 2017-08-21 19:04:23,539 DataCleaner.py:39 - Data cleanup thread started
INFO 2017-08-21 19:04:23,540 DataCleaner.py:120 - Data cleanup started
INFO 2017-08-21 19:04:23,551 DataCleaner.py:122 - Data cleanup finished
INFO 2017-08-21 19:04:23,593 PingPortListener.py:50 - Ping port listener started on port: 8670
INFO 2017-08-21 19:04:23,595 main.py:417 - Connecting to Ambari server at https://x.x.x.x:8440 (x.x.x.x)
INFO 2017-08-21 19:04:23,595 NetUtil.py:67 - Connecting to https://x.x.x.x:8440/ca
ERROR 2017-08-21 19:04:23,673 NetUtil.py:93 - [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:579)
ERROR 2017-08-21 19:04:23,673 NetUtil.py:94 - SSLError: Failed to connect. Please check openssl library versions.
Refer to: https://bugzilla.redhat.com/show_bug.cgi?id=1022468 for more details.
WARNING 2017-08-21 19:04:23,675 NetUtil.py:121 - Server at https://x.x.x.x:8440 is not reachable, sleeping for 10 seconds...
', None)
Not sure what's happened. I did run a 'yum update' on all nodes before retrying this. I've followed a procedure where I emptied the Ambari server keys from the master node (note that I could not find any keys for the ambari-agents, which was interesting). So perhaps a certificate problem of some sort? I saw one thread which suggested it could be related to Python, so here's my Python level:
[root@hdplab80 conf]# rpm -qa | grep ^python-libs
python-libs-2.7.5-58.el7.ppc64le
Passwordless SSH between all the nodes is fine. Any ideas? Thanks.
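To separate a TLS/certificate failure from a plain connectivity failure, it can help to probe the server's registration port directly from an agent node (diagnostic sketch; substitute your Ambari server's FQDN for the placeholder hostname):

```shell
# Prints the certificate chain and the negotiated protocol; an error
# at this layer points at openssl/TLS rather than at Ambari itself
openssl s_client -connect ambari-server.example.com:8440 </dev/null

# And confirm which openssl build the agent's Python is linked against
python -c 'import ssl; print(ssl.OPENSSL_VERSION)'
```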
08-10-2017
03:19 PM
@Jay SenSharma and @Geoffrey Shelton Okot Thank you very much for your replies. I'm now using the repos listed here: https://docs.hortonworks.com/HDPDocuments/Ambari-2.5.1.0/bk_ambari-installation-ppc/content/select_version.html (remembering that I am installing this on ppc64le). The wget works, so access to the repository is not a problem. However, I am still seeing failures when Ambari tries to do the install:
2017-08-10 17:45:40,457 - Initializing 2 repositories
2017-08-10 17:45:40,457 - Repository['HDP-2.6'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.1.0/hdp.repo', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-08-10 17:45:40,466 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.6]\nname=HDP-2.6\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.1.0/hdp.repo\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-08-10 17:45:40,467 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ppc64le', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-08-10 17:45:40,471 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ppc64le\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-08-10 17:45:40,471 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-08-10 17:45:40,631 - Skipping installation of existing package unzip
2017-08-10 17:45:40,631 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-08-10 17:45:40,653 - Skipping installation of existing package curl
2017-08-10 17:45:40,653 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-08-10 17:45:40,675 - Skipping installation of existing package hdp-select
2017-08-10 17:45:41,028 - Package['storm_2_6_0_0_598'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-08-10 17:45:41,274 - Installing package storm_2_6_0_0_598 ('/usr/bin/yum -d 0 -e 0 -y install storm_2_6_0_0_598')
Command aborted. Reason: 'Server considered task failed and automatically aborted it'
Command failed after 1 tries
The repositories look like this:
[root@hdplab81 yum.repos.d]# cat HDP.repo
[HDP-2.6]
name=HDP-2.6
baseurl=http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.1.0/hdp.repo
path=/
enabled=1
gpgcheck=0
[root@hdplab81 yum.repos.d]# cat HDP-UTILS.repo
[HDP-UTILS-1.1.0.21]
name=HDP-UTILS-1.1.0.21
baseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ppc64le
path=/
enabled=1
And, as before, Ambari fails but from the CLI I can install whatever package has failed from Ambari. Thanks for the help on this. Really appreciate it.
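One detail worth a second look in the HDP.repo shown above: its baseurl ends in hdp.repo, which is itself a repo definition file rather than a directory containing repodata/. If that is the cause (an inference from the URLs elsewhere in this thread, where the working 2.6.2.0 baseurl is the bare version directory), the file would look like this instead:

```shell
# /etc/yum.repos.d/HDP.repo -- sketch: baseurl points at the directory
# that holds repodata/, not at the hdp.repo file itself (based on the
# 2.6.2.0 repo layout shown elsewhere in this thread)
[HDP-2.6]
name=HDP-2.6
baseurl=http://public-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.1.0
path=/
enabled=1
gpgcheck=0
```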
08-10-2017
09:56 AM
Hi @Geoffrey Shelton Okot, Well spotted. I switched both URLs to use public, and then switched them both to use private. The result was that one or the other was then 404 Not Found, so I've switched the URLs back to their originals now. I still have the problem that Ambari is not able to use the repositories I have configured, but if I drop to the command line, I can install whatever package the HDP installation is having a problem with. There must be some difference between how Ambari is using the repo and how I am using it from the CLI. Here's a log of one of the errors:
2017-08-10 12:44:06,487 - Initializing 2 repositories
2017-08-10 12:44:06,487 - Repository['HDP-2.6'] {'base_url': 'http://private-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.0.0-598', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-08-10 12:44:06,496 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.6]\nname=HDP-2.6\nbaseurl=http://private-repo-1.hortonworks.com/HDP/centos7-ppc/2.x/updates/2.6.0.0-598\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-08-10 12:44:06,497 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ppc64le', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-08-10 12:44:06,501 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/ppc64le\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-08-10 12:44:06,501 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-08-10 12:44:06,660 - Skipping installation of existing package unzip
2017-08-10 12:44:06,660 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-08-10 12:44:06,682 - Skipping installation of existing package curl
2017-08-10 12:44:06,682 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-08-10 12:44:06,704 - Skipping installation of existing package hdp-select
2017-08-10 12:44:07,069 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-08-10 12:44:07,071 - Stack Feature Version Info: stack_version=2.6, version=None, current_cluster_version=None -> 2.6
2017-08-10 12:44:07,136 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-08-10 12:44:07,170 - checked_call['rpm -q --queryformat '%{version}-%{release}' hdp-select | sed -e 's/\.el[0-9]//g''] {'stderr': -1}
2017-08-10 12:44:07,266 - checked_call returned (0, '2.6.0.0-598', '')
2017-08-10 12:44:07,281 - Package['hadoop_2_6_0_0_598'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-08-10 12:44:07,439 - Skipping installation of existing package hadoop_2_6_0_0_598
2017-08-10 12:44:07,442 - Package['hadoop_2_6_0_0_598-client'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-08-10 12:44:07,464 - Skipping installation of existing package hadoop_2_6_0_0_598-client
2017-08-10 12:44:07,466 - Package['snappy'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-08-10 12:44:07,488 - Skipping installation of existing package snappy
2017-08-10 12:44:07,490 - Package['snappy-devel'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-08-10 12:44:07,513 - Skipping installation of existing package snappy-devel
2017-08-10 12:44:07,515 - Package['hadoop_2_6_0_0_598-libhdfs'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-08-10 12:44:07,538 - Installing package hadoop_2_6_0_0_598-libhdfs ('/usr/bin/yum -d 0 -e 0 -y install hadoop_2_6_0_0_598-libhdfs')
Command aborted. Reason: 'Server considered task failed and automatically aborted it'
Command failed after 1 tries
The packages it is now skipping are ones I have installed manually from the CLI. Obviously this is not a solution to the problem, as there are hundreds of packages to install across all the nodes. Any pointers appreciated.