Support Questions

Why does my Ambari create cluster Fail during Spark Client install?


New Contributor

2016-11-25 09:26:59,930 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-11-25 09:26:59,942 - Skipping installation of existing package hdp-select
2016-11-25 09:27:00,081 - Package['spark_2_5_0_0_1245'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-11-25 09:27:00,164 - Skipping installation of existing package spark_2_5_0_0_1245
2016-11-25 09:27:00,166 - Package['spark_2_5_0_0_1245-python'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-11-25 09:27:00,177 - Skipping installation of existing package spark_2_5_0_0_1245-python
2016-11-25 09:27:00,179 - Package['livy_2_5_0_0_1245'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-11-25 09:27:00,196 - Skipping installation of existing package livy_2_5_0_0_1245
2016-11-25 09:27:00,201 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-25 09:27:00,203 - call['ambari-python-wrap /usr/bin/hdp-select status spark-client'] {'timeout': 20}
2016-11-25 09:27:00,225 - call returned (0, 'spark-client - 2.5.0.0-1245')
2016-11-25 09:27:00,228 - Directory['/var/run/spark'] {'owner': 'spark', 'create_parents': True, 'group': 'hadoop', 'mode': 0775}
2016-11-25 09:27:00,229 - Directory['/var/log/spark'] {'owner': 'spark', 'group': 'hadoop', 'create_parents': True, 'mode': 0775}
2016-11-25 09:27:00,229 - PropertiesFile['/usr/hdp/current/spark-client/conf/spark-defaults.conf'] {'owner': 'spark', 'key_value_delimiter': ' ', 'group': 'spark', 'mode': 0644, 'properties': ...}
2016-11-25 09:27:00,236 - Generating properties file: /usr/hdp/current/spark-client/conf/spark-defaults.conf
2016-11-25 09:27:00,237 - File['/usr/hdp/current/spark-client/conf/spark-defaults.conf'] {'owner': 'spark', 'content': InlineTemplate(...), 'group': 'spark', 'mode': 0644}

Command failed after 1 tries

I have also been getting a lot of those 'retry_on_repo_unavailability' errors; when that happens I just hit 'Rerun'.

Any thoughts?

5 REPLIES

Re: Why does my Ambari create cluster Fail during Spark Client install?

Mentor

You can run these manually to verify there is no issue with the repo; it looks like the package is missing from the repo list.

yum repolist
yum install spark_2_5_0_0_1245 hdp-select
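If both commands come back clean, the truncated "Command failed after 1 tries" message usually hides a longer Python traceback in the agent's per-task output. A minimal sketch for digging that out (the paths are Ambari agent defaults and may differ on your hosts):

```shell
# Each failed task writes its stderr to a numbered errors file under the
# agent data dir (default location; adjust if your layout is customized):
ls -t /var/lib/ambari-agent/data/errors-*.txt | head -1 | xargs cat

# The agent's own log often carries the full stack trace as well:
tail -n 100 /var/log/ambari-agent/ambari-agent.log
```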


Re: Why does my Ambari create cluster Fail during Spark Client install?

New Contributor

Seems like that is not the problem:

[root@hwnode1 conf]# yum repolist
Loaded plugins: fastestmirror, refresh-packagekit, security
Repository HDP-UTILS-1.1.0.21 is listed more than once in the configuration
Loading mirror speeds from cached hostfile
 * base: mirrors.usinternet.com
 * extras: ftpmirror.your.org
 * updates: mirror.netdepot.com
repo id                  repo name                    status
HDP-2.5                  HDP-2.5                         200
HDP-2.5.0.0              HDP Version - HDP-2.5.0.0       200
HDP-UTILS-1.1.0.21       HDP-UTILS-1.1.0.21               51
Updates-ambari-2.4.1.0   ambari-2.4.1.0 - Updates         12
base                     CentOS-6 - Base               6,696
extras                   CentOS-6 - Extras                62
updates                  CentOS-6 - Updates              670
repolist: 7,891

[root@hwnode1 conf]# yum install spark_2_5_0_0_1245 hdp-select
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Install Process
Repository HDP-UTILS-1.1.0.21 is listed more than once in the configuration
Loading mirror speeds from cached hostfile
 * base: mirrors.usinternet.com
 * extras: ftpmirror.your.org
 * updates: mirror.netdepot.com
Package spark_2_5_0_0_1245-1.6.2.2.5.0.0-1245.el6.noarch already installed and latest version
Package hdp-select-2.5.0.0-1245.el6.noarch already installed and latest version
Nothing to do
[root@hwnode1 conf]#
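One thing that does stand out in both outputs is the warning that HDP-UTILS-1.1.0.21 is listed more than once. That warning is usually harmless, but a duplicated repo definition is easy to confirm and clean up. A sketch, assuming the standard yum config directory:

```shell
# Find every .repo file that defines the duplicated repo id; if two files
# show up, remove or rename one, then refresh metadata with
# `yum clean all && yum repolist`:
grep -l '^\[HDP-UTILS-1.1.0.21\]' /etc/yum.repos.d/*.repo
```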

Re: Why does my Ambari create cluster Fail during Spark Client install?

New Contributor

FYI: sometimes it works on node 4, but it always seems to fail on node 2, and they are identical CentOS VMs.

(screenshot attached: 9778-screen-shot-2016-11-26-at-101833-am.png)

Re: Why does my Ambari create cluster Fail during Spark Client install?

New Contributor

Always seems to fail on this:

2016-11-26 10:38:42,650 - PropertiesFile['/usr/hdp/current/spark-client/conf/spark-defaults.conf'] {'owner': 'spark', 'key_value_delimiter': ' ', 'group': 'spark', 'mode': 0644, 'properties': ...}
2016-11-26 10:38:42,656 - Generating properties file: /usr/hdp/current/spark-client/conf/spark-defaults.conf
2016-11-26 10:38:42,657 - File['/usr/hdp/current/spark-client/conf/spark-defaults.conf'] {'owner': 'spark', 'content': InlineTemplate(...), 'group': 'spark', 'mode': 0644}

Command failed after 1 tries
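Since the task dies right at the spark-defaults.conf generation step, one hypothesis worth ruling out is a broken conf symlink or wrong ownership under the Spark client dir (the paths below are taken from the log above; on HDP, /usr/hdp/current/spark-client/conf is normally a symlink managed by conf-select):

```shell
# Verify the conf symlink resolves to a real directory:
ls -ld /usr/hdp/current/spark-client/conf
readlink -f /usr/hdp/current/spark-client/conf

# And check who owns the file Ambari is trying to rewrite:
ls -l /usr/hdp/current/spark-client/conf/spark-defaults.conf
```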

Re: Why does my Ambari create cluster Fail during Spark Client install?

New Contributor

I'm facing a similar issue with Elasticsearch. Any idea how to deal with this?

2017-06-05 14:09:56,270 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.3.0-37
2017-06-05 14:09:56,271 - Checking if need to create versioned conf dir /etc/hadoop/2.5.3.0-37/0
2017-06-05 14:09:56,271 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-06-05 14:09:56,314 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '')
2017-06-05 14:09:56,315 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-06-05 14:09:56,356 - checked_call returned (0, '')
2017-06-05 14:09:56,358 - Ensuring that hadoop has the correct symlink structure
2017-06-05 14:09:56,358 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-06-05 14:09:56,360 - Group['livy'] {}
2017-06-05 14:09:56,363 - Group['elasticsearch'] {}
2017-06-05 14:09:56,363 - Group['spark'] {}
2017-06-05 14:09:56,364 - Group['solr'] {}
2017-06-05 14:09:56,364 - Group['zeppelin'] {}
2017-06-05 14:09:56,364 - Group['hadoop'] {}
2017-06-05 14:09:56,364 - Group['users'] {}
2017-06-05 14:09:56,365 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-05 14:09:56,366 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-05 14:09:56,367 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-05 14:09:56,368 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-06-05 14:09:56,369 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-05 14:09:56,370 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-06-05 14:09:56,371 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-05 14:09:56,372 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-05 14:09:56,373 - User['elasticsearch'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-05 14:09:56,374 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-05 14:09:56,375 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-06-05 14:09:56,376 - User['solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-05 14:09:56,377 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-05 14:09:56,378 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-05 14:09:56,379 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-05 14:09:56,380 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-05 14:09:56,381 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-05 14:09:56,382 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-05 14:09:56,383 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-06-05 14:09:56,384 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-06-05 14:09:56,387 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-06-05 14:09:56,395 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-06-05 14:09:56,396 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-06-05 14:09:56,397 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-06-05 14:09:56,399 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-06-05 14:09:56,407 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-06-05 14:09:56,407 - Group['hdfs'] {}
2017-06-05 14:09:56,408 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2017-06-05 14:09:56,409 - FS Type: 
2017-06-05 14:09:56,409 - Directory['/etc/hadoop'] {'mode': 0755}
2017-06-05 14:09:56,435 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-06-05 14:09:56,436 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-06-05 14:09:56,459 - Initializing 2 repositories
2017-06-05 14:09:56,460 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-06-05 14:09:56,473 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-06-05 14:09:56,475 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-06-05 14:09:56,481 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-06-05 14:09:56,481 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-05 14:09:56,634 - Skipping installation of existing package unzip
2017-06-05 14:09:56,635 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-05 14:09:56,656 - Skipping installation of existing package curl
2017-06-05 14:09:56,656 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-05 14:09:56,674 - Skipping installation of existing package hdp-select
2017-06-05 14:09:56,907 - Install ES Master Node
2017-06-05 14:09:56,908 - Version 2.5.3.0-37 was provided as effective cluster version.  Using package version 2_5_3_0_37
2017-06-05 14:09:56,911 - Package['elasticsearch-2.3.3'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-05 14:09:57,063 - Installing package elasticsearch-2.3.3 ('/usr/bin/yum -d 0 -e 0 -y install elasticsearch-2.3.3')

Command failed after 1 tries
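The last log line shows the exact yum invocation Ambari attempted. Running the same install by hand, without Ambari's quiet flags (-d 0 -e 0), will usually print the real yum error (no repo providing the elasticsearch package, a GPG check failure, etc.) that the truncated message hides. A sketch:

```shell
# Re-run the install Ambari attempted, at normal verbosity:
/usr/bin/yum -y install elasticsearch-2.3.3

# If yum reports "No package ... available", see which repo (if any)
# is expected to provide it:
yum list available 'elasticsearch*'
```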