Spark installation fails in HDP 2.6.4.0 with Ambari 2.6.1.5 on CentOS 7

New Contributor

We are installing HDP 2.6.4.0 with Ambari 2.6.1.5 on CentOS 7 using OpenStack images.

However, during the cluster installation we get the following error regarding the Spark History Server installation:

stderr: 
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.1/package/scripts/job_history_server.py", line 98, in <module>
    JobHistoryServer().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 375, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.1/package/scripts/job_history_server.py", line 42, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 811, in install_packages
    name = self.format_package_name(package['name'])
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 546, in format_package_name
    raise Fail("Cannot match package for regexp name {0}. Available packages: {1}".format(name, self.available_packages_in_repos))
resource_management.core.exceptions.Fail: Cannot match package for regexp name spark_${stack_version}. Available packages: ['accumulo', 'accumulo-conf-standalone', 'accumulo-source', 'accumulo_2_6_4_0_91', 'accumulo_2_6_4_0_91-conf-standalone', 'accumulo_2_6_4_0_91-source', 'atlas-metadata', 'atlas-metadata-falcon-plugin', 'atlas-metadata-hive-plugin', 'atlas-metadata-sqoop-plugin', 'atlas-metadata-storm-plugin', 'atlas-metadata_2_6_4_0_91', 'atlas-metadata_2_6_4_0_91-sqoop-plugin', 'atlas-metadata_2_6_4_0_91-storm-plugin', 'datafu', 'datafu_2_6_4_0_91', 'druid', 'druid_2_6_4_0_91', 'falcon', 'falcon-doc', 'falcon_2_6_4_0_91-doc', 'flume', 'flume-agent', 'flume_2_6_4_0_91-agent', 'hadoop', 'hadoop-client', 'hadoop-conf-pseudo', 'hadoop-doc', 'hadoop-hdfs', 'hadoop-hdfs-datanode', 'hadoop-hdfs-fuse', 'hadoop-hdfs-journalnode', 'hadoop-hdfs-namenode', 'hadoop-hdfs-secondarynamenode', 'hadoop-hdfs-zkfc', 'hadoop-httpfs', 'hadoop-httpfs-server', 'hadoop-libhdfs', 'hadoop-mapreduce', 'hadoop-mapreduce-historyserver', 'hadoop-source', 'hadoop-yarn', 'hadoop-yarn-nodemanager', 'hadoop-yarn-proxyserver', 'hadoop-yarn-resourcemanager', 'hadoop-yarn-timelineserver', 'hadoop_2_6_4_0_91-conf-pseudo', 'hadoop_2_6_4_0_91-doc', 'hadoop_2_6_4_0_91-hdfs-datanode', 'hadoop_2_6_4_0_91-hdfs-fuse', 'hadoop_2_6_4_0_91-hdfs-journalnode', 'hadoop_2_6_4_0_91-hdfs-namenode', 'hadoop_2_6_4_0_91-hdfs-secondarynamenode', 'hadoop_2_6_4_0_91-hdfs-zkfc', 'hadoop_2_6_4_0_91-httpfs', 'hadoop_2_6_4_0_91-httpfs-server', 'hadoop_2_6_4_0_91-mapreduce-historyserver', 'hadoop_2_6_4_0_91-source', 'hadoop_2_6_4_0_91-yarn-nodemanager', 'hadoop_2_6_4_0_91-yarn-proxyserver', 'hadoop_2_6_4_0_91-yarn-resourcemanager', 'hadoop_2_6_4_0_91-yarn-timelineserver', 'hbase', 'hbase-doc', 'hbase-master', 'hbase-regionserver', 'hbase-rest', 'hbase-thrift', 'hbase-thrift2', 'hbase_2_6_4_0_91-doc', 'hbase_2_6_4_0_91-master', 'hbase_2_6_4_0_91-regionserver', 'hbase_2_6_4_0_91-rest', 'hbase_2_6_4_0_91-thrift', 'hbase_2_6_4_0_91-thrift2', 'hive', 'hive-hcatalog', 'hive-hcatalog-server', 'hive-jdbc', 'hive-metastore', 'hive-server', 'hive-server2', 'hive-webhcat', 'hive-webhcat-server', 'hive2', 'hive2-jdbc', 'hive_2_6_4_0_91-hcatalog-server', 'hive_2_6_4_0_91-metastore', 'hive_2_6_4_0_91-server', 'hive_2_6_4_0_91-server2', 'hive_2_6_4_0_91-webhcat-server', 'hue', 'hue-beeswax', 'hue-common', 'hue-hcatalog', 'hue-oozie', 'hue-pig', 'hue-server', 'kafka', 'knox', 'knox_2_6_4_0_91', 'livy', 'livy2', 'livy2_2_6_4_0_91', 'livy_2_6_4_0_91', 'mahout', 'mahout-doc', 'mahout_2_6_4_0_91', 'mahout_2_6_4_0_91-doc', 'oozie', 'oozie-client', 'oozie-common', 'oozie-sharelib', 'oozie-sharelib-distcp', 'oozie-sharelib-hcatalog', 'oozie-sharelib-hive', 'oozie-sharelib-hive2', 'oozie-sharelib-mapreduce-streaming', 'oozie-sharelib-pig', 'oozie-sharelib-spark', 'oozie-sharelib-sqoop', 'oozie-webapp', 'phoenix', 'phoenix_2_6_4_0_91', 'pig', 'ranger-admin', 'ranger-atlas-plugin', 'ranger-hbase-plugin', 'ranger-hdfs-plugin', 'ranger-hive-plugin', 'ranger-kafka-plugin', 'ranger-kms', 'ranger-knox-plugin', 'ranger-solr-plugin', 'ranger-storm-plugin', 'ranger-tagsync', 'ranger-usersync', 'ranger-yarn-plugin', 'ranger_2_6_4_0_91-admin', 'ranger_2_6_4_0_91-atlas-plugin', 'ranger_2_6_4_0_91-kms', 'ranger_2_6_4_0_91-knox-plugin', 'ranger_2_6_4_0_91-solr-plugin', 'ranger_2_6_4_0_91-storm-plugin', 'ranger_2_6_4_0_91-tagsync', 'ranger_2_6_4_0_91-usersync', 'shc', 'shc_2_6_4_0_91', 'slider', 'slider_2_6_4_0_91', 'spark', 'spark-master', 'spark-python', 'spark-worker', 
'spark-yarn-shuffle', 'spark2', 'spark2-master', 'spark2-python', 'spark2-worker', 'spark2-yarn-shuffle', 'spark2_2_6_4_0_91', 'spark2_2_6_4_0_91-master', 'spark2_2_6_4_0_91-python', 'spark2_2_6_4_0_91-worker', 'spark_2_6_4_0_91-master', 'spark_2_6_4_0_91-python', 'spark_2_6_4_0_91-worker', 'spark_llap', 'spark_llap_2_6_4_0_91', 'sqoop', 'sqoop-metastore', 'sqoop_2_6_4_0_91', 'sqoop_2_6_4_0_91-metastore', 'storm', 'storm-slider-client', 'storm_2_6_4_0_91', 'storm_2_6_4_0_91-slider-client', 'superset', 'superset_2_6_4_0_91', 'tez', 'tez_hive2', 'zeppelin', 'zeppelin_2_6_4_0_91', 'zookeeper', 'zookeeper-server', 'openblas', 'openblas-Rblas', 'openblas-devel', 'openblas-openmp', 'openblas-openmp64', 'openblas-openmp64_', 'openblas-serial64', 'openblas-serial64_', 'openblas-static', 'openblas-threads', 'openblas-threads64', 'openblas-threads64_', 'snappy', 'snappy-devel', 'atlas-metadata_2_6_4_0_91-falcon-plugin', 'atlas-metadata_2_6_4_0_91-hive-plugin', 'bigtop-jsvc', 'bigtop-tomcat', 'falcon_2_6_4_0_91', 'flume_2_6_4_0_91', 'hadoop_2_6_4_0_91', 'hadoop_2_6_4_0_91-client', 'hadoop_2_6_4_0_91-hdfs', 'hadoop_2_6_4_0_91-libhdfs', 'hadoop_2_6_4_0_91-mapreduce', 'hadoop_2_6_4_0_91-yarn', 'hbase_2_6_4_0_91', 'hdp-select', 'hive2_2_6_4_0_91', 'hive2_2_6_4_0_91-jdbc', 'hive_2_6_4_0_91', 'hive_2_6_4_0_91-hcatalog', 'hive_2_6_4_0_91-jdbc', 'hive_2_6_4_0_91-webhcat', 'kafka_2_6_4_0_91', 'oozie_2_6_4_0_91', 'oozie_2_6_4_0_91-client', 'oozie_2_6_4_0_91-common', 'oozie_2_6_4_0_91-sharelib', 'oozie_2_6_4_0_91-sharelib-distcp', 'oozie_2_6_4_0_91-sharelib-hcatalog', 'oozie_2_6_4_0_91-sharelib-hive', 'oozie_2_6_4_0_91-sharelib-hive2', 'oozie_2_6_4_0_91-sharelib-mapreduce-streaming', 'oozie_2_6_4_0_91-sharelib-pig', 'oozie_2_6_4_0_91-sharelib-spark', 'oozie_2_6_4_0_91-sharelib-sqoop', 'oozie_2_6_4_0_91-webapp', 'pig_2_6_4_0_91', 'ranger_2_6_4_0_91-hbase-plugin', 'ranger_2_6_4_0_91-hdfs-plugin', 'ranger_2_6_4_0_91-hive-plugin', 'ranger_2_6_4_0_91-kafka-plugin', 'ranger_2_6_4_0_91-yarn-plugin', 'spark2_2_6_4_0_91-yarn-shuffle', 'spark_2_6_4_0_91-yarn-shuffle', 'tez_2_6_4_0_91', 'tez_hive2_2_6_4_0_91', 'zookeeper_2_6_4_0_91', 'zookeeper_2_6_4_0_91-server']
 stdout:
2018-03-26 08:38:33,091 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2018-03-26 08:38:33,103 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2018-03-26 08:38:33,106 - Group['livy'] {}
2018-03-26 08:38:33,107 - Group['spark'] {}
2018-03-26 08:38:33,107 - Group['hdfs'] {}
2018-03-26 08:38:33,108 - Group['hadoop'] {}
2018-03-26 08:38:33,108 - Group['users'] {}
2018-03-26 08:38:33,109 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-26 08:38:33,110 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-26 08:38:33,112 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-03-26 08:38:33,113 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-26 08:38:33,115 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-03-26 08:38:33,116 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-26 08:38:33,118 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-26 08:38:33,119 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users'], 'uid': None}
2018-03-26 08:38:33,120 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-26 08:38:33,122 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-26 08:38:33,123 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs'], 'uid': None}
2018-03-26 08:38:33,125 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-26 08:38:33,126 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-26 08:38:33,128 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-26 08:38:33,129 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-26 08:38:33,130 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop'], 'uid': None}
2018-03-26 08:38:33,131 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-03-26 08:38:33,134 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2018-03-26 08:38:33,142 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2018-03-26 08:38:33,143 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2018-03-26 08:38:33,144 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-03-26 08:38:33,146 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2018-03-26 08:38:33,147 - call['/var/lib/ambari-agent/tmp/changeUid.sh hbase'] {}
2018-03-26 08:38:33,160 - call returned (0, '1015')
2018-03-26 08:38:33,161 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1015'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2018-03-26 08:38:33,169 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase 1015'] due to not_if
2018-03-26 08:38:33,170 - Group['hdfs'] {}
2018-03-26 08:38:33,171 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', u'hdfs']}
2018-03-26 08:38:33,171 - FS Type: 
2018-03-26 08:38:33,172 - Directory['/etc/hadoop'] {'mode': 0755}
2018-03-26 08:38:33,199 - File['/usr/hdp/2.6.4.0-91/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2018-03-26 08:38:33,200 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2018-03-26 08:38:33,223 - Repository['HDP-2.6-repo-1'] {'append_to_file': False, 'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.4.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-03-26 08:38:33,235 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.4.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-03-26 08:38:33,236 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-03-26 08:38:33,236 - Repository with url http://public-repo-1.hortonworks.com/HDP-GPL/centos7/2.x/updates/2.6.4.0 is not created due to its tags: set([u'GPL'])
2018-03-26 08:38:33,236 - Repository['HDP-UTILS-1.1.0.22-repo-1'] {'append_to_file': True, 'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'ambari-hdp-1', 'mirror_list': None}
2018-03-26 08:38:33,241 - File['/etc/yum.repos.d/ambari-hdp-1.repo'] {'content': '[HDP-2.6-repo-1]\nname=HDP-2.6-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.4.0\n\npath=/\nenabled=1\ngpgcheck=0\n[HDP-UTILS-1.1.0.22-repo-1]\nname=HDP-UTILS-1.1.0.22-repo-1\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.22/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2018-03-26 08:38:33,242 - Writing File['/etc/yum.repos.d/ambari-hdp-1.repo'] because contents don't match
2018-03-26 08:38:33,242 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-03-26 08:38:33,364 - Skipping installation of existing package unzip
2018-03-26 08:38:33,365 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-03-26 08:38:33,379 - Skipping installation of existing package curl
2018-03-26 08:38:33,379 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2018-03-26 08:38:33,393 - Skipping installation of existing package hdp-select
2018-03-26 08:38:33,400 - The repository with version 2.6.4.0-91 for this command has been marked as resolved. It will be used to report the version of the component which was installed
2018-03-26 08:38:33,785 - Stack Feature Version Info: Cluster Stack=2.6, Command Stack=None, Command Version=None -> 2.6
2018-03-26 08:38:33,792 - Using hadoop conf dir: /usr/hdp/2.6.4.0-91/hadoop/conf
2018-03-26 08:38:33,809 - call['ambari-python-wrap /usr/bin/hdp-select status spark-client'] {'timeout': 20}
2018-03-26 08:38:33,842 - call returned (0, 'spark-client - 2.6.4.0-91')
2018-03-26 08:38:33,852 - Command repositories: HDP-2.6-repo-1, HDP-2.6-GPL-repo-1, HDP-UTILS-1.1.0.22-repo-1
2018-03-26 08:38:33,852 - Applicable repositories: HDP-2.6-repo-1, HDP-2.6-GPL-repo-1, HDP-UTILS-1.1.0.22-repo-1
2018-03-26 08:38:33,855 - Looking for matching packages in the following repositories: HDP-2.6-repo-1, HDP-2.6-GPL-repo-1, HDP-UTILS-1.1.0.22-repo-1
2018-03-26 08:38:37,919 - No package found for spark_${stack_version}(spark_(\d|_)+$)
2018-03-26 08:38:37,928 - The repository with version 2.6.4.0-91 for this command has been marked as resolved. It will be used to report the version of the component which was installed

Command failed after 1 tries

Does anyone have an idea how to solve this problem?

3 REPLIES

Super Mentor

@Yasemin Timar

This can happen if an incomplete HDP binaries installation occurred on that problematic host.

So please check whether any pending yum transaction files are present on that host:

# ls -lart /var/lib/yum/transaction* 
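
If a pending transaction file does show up, it can usually be cleaned up with yum-complete-transaction (just a suggestion from my side, assuming the yum-utils package is installed on that host):

# yum-complete-transaction --cleanup-only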


Also, please check whether any failed package installations are recorded in the yum log:

# less /var/log/yum.log
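
For example, to quickly see whether any Spark packages were already partially installed or erased there (yum.log only records package transactions, so this is just a quick check):

# grep -i spark /var/log/yum.log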


Also, please check the output of the following command to find out which versions of the HDP packages are installed on the system:

# yum list all 
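
Since the full "yum list all" output is quite long, you can also narrow it down to the HDP / Spark related packages (just a convenience filter, not an official step):

# yum list installed | grep -i -E 'spark|hdp'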



Most probably we will need to uninstall those components and then install them back, as there are similar issues reported that cause the following errors (see https://issues.apache.org/jira/browse/AMBARI-22563):

No package found for xxx_${stack_version}
OR
resource_management.core.exceptions.Fail: Cannot match package for regexp name spark_${stack_version}.


Here xxx_ can be any package, such as storm, spark, hadoop, etc.
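
As a rough sketch of the cleanup (the package name below is only an example taken from the error output above, so please adjust it to whatever "yum list installed" actually shows on your host):

# yum remove "spark_2_6_4_0_91*"

After that, retry the failed Spark History Server install from the Ambari UI so that it pulls the packages again from the HDP repository.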

Super Mentor

@Yasemin Timar

There are some additional fixes for cluster deployment which will be available in a later version of Ambari, for example:

https://issues.apache.org/jira/browse/AMBARI-22888


Explorer

This can be an issue with your repository.

This issue happened for me on RHEL 7.3.

Using the commands below, I was able to solve this:

# yum clean all
# yum-config-manager --enable rhui-REGION-rhel-server-optional
# yum repolist
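
Once the repositories have been refreshed, it may also be worth confirming that the versioned Spark packages are visible from the HDP repo before retrying the install in Ambari (the repo id and package name below are taken from the log output above, so adjust them to your environment):

# yum --disablerepo="*" --enablerepo="HDP-2.6-repo-1" list available "spark_2_6_4_0_91*"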