<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Spark2 Client installation is failing in Ambari upgrade 2.6.5 to 3.1.4 (Support Questions)</title>
    <link>https://community.cloudera.com/t5/Support-Questions/Spark2-Client-installation-is-failing-in-Ambari-upgrade-2-6/m-p/292372#M216060</link>
    <description>&lt;P&gt;Solved this issue by running the commands below on the affected node. You need root/sudo access for this.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1) yum list installed | grep spark2&lt;/P&gt;&lt;P&gt;2) yum-complete-transaction&lt;/P&gt;&lt;P&gt;3) yum remove spark2*&lt;/P&gt;&lt;P&gt;4) Go to Ambari and install the Spark2 client again.&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; If it still fails, refresh the Tez config and retry step 4.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This issue can happen with almost any component when a yum transaction is broken or killed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The &lt;STRONG&gt;yum remove spark2*&lt;/STRONG&gt; output looks like this:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Removed:&lt;BR /&gt;spark2.noarch 0:2.3.2.3.1.0.0-78.el7 spark2_3_0_0_0_1634-yarn-shuffle.noarch 0:2.3.1.3.0.0.0-1634&lt;BR /&gt;spark2_3_1_0_0_78.noarch 0:2.3.2.3.1.0.0-78 spark2_3_1_0_0_78-master.noarch 0:2.3.2.3.1.0.0-78&lt;BR /&gt;spark2_3_1_0_0_78-python.noarch 0:2.3.2.3.1.0.0-78 spark2_3_1_0_0_78-worker.noarch 0:2.3.2.3.1.0.0-78&lt;BR /&gt;spark2_3_1_0_0_78-yarn-shuffle.noarch 0:2.3.2.3.1.0.0-78 spark2_3_1_4_0_315.noarch 0:2.3.2.3.1.4.0-315&lt;BR /&gt;spark2_3_1_4_0_315-python.noarch 0:2.3.2.3.1.4.0-315 spark2_3_1_4_0_315-yarn-shuffle.noarch 0:2.3.2.3.1.4.0-315&lt;/P&gt;&lt;P&gt;Dependency Removed:&lt;BR /&gt;datafu_3_0_0_0_1634.noarch 0:1.3.0.3.0.0.0-1634 hadoop_3_0_0_0_1634.x86_64 0:3.1.0.3.0.0.0-1634&lt;BR /&gt;hadoop_3_0_0_0_1634-client.x86_64 0:3.1.0.3.0.0.0-1634 hadoop_3_0_0_0_1634-hdfs.x86_64 0:3.1.0.3.0.0.0-1634&lt;BR /&gt;hadoop_3_0_0_0_1634-libhdfs.x86_64 0:3.1.0.3.0.0.0-1634 hadoop_3_0_0_0_1634-mapreduce.x86_64 0:3.1.0.3.0.0.0-1634&lt;BR /&gt;hadoop_3_0_0_0_1634-yarn.x86_64 0:3.1.0.3.0.0.0-1634 hadoop_3_1_0_0_78.x86_64 0:3.1.1.3.1.0.0-78&lt;BR /&gt;hadoop_3_1_0_0_78-client.x86_64 0:3.1.1.3.1.0.0-78 hadoop_3_1_0_0_78-hdfs.x86_64 
0:3.1.1.3.1.0.0-78&lt;BR /&gt;hadoop_3_1_0_0_78-libhdfs.x86_64 0:3.1.1.3.1.0.0-78 hadoop_3_1_0_0_78-mapreduce.x86_64 0:3.1.1.3.1.0.0-78&lt;BR /&gt;hadoop_3_1_0_0_78-yarn.x86_64 0:3.1.1.3.1.0.0-78 hadoop_3_1_4_0_315.x86_64 0:3.1.1.3.1.4.0-315&lt;BR /&gt;hadoop_3_1_4_0_315-client.x86_64 0:3.1.1.3.1.4.0-315 hadoop_3_1_4_0_315-hdfs.x86_64 0:3.1.1.3.1.4.0-315&lt;BR /&gt;hadoop_3_1_4_0_315-libhdfs.x86_64 0:3.1.1.3.1.4.0-315 hadoop_3_1_4_0_315-mapreduce.x86_64 0:3.1.1.3.1.4.0-315&lt;BR /&gt;hadoop_3_1_4_0_315-yarn.x86_64 0:3.1.1.3.1.4.0-315 hbase_3_0_0_0_1634.noarch 0:2.0.0.3.0.0.0-1634&lt;BR /&gt;hbase_3_1_0_0_78.noarch 0:2.0.2.3.1.0.0-78 hbase_3_1_4_0_315.noarch 0:2.0.2.3.1.4.0-315&lt;BR /&gt;hive_3_0_0_0_1634.noarch 0:3.1.0.3.0.0.0-1634 hive_3_0_0_0_1634-hcatalog.noarch 0:3.1.0.3.0.0.0-1634&lt;BR /&gt;hive_3_0_0_0_1634-jdbc.noarch 0:3.1.0.3.0.0.0-1634 hive_3_0_0_0_1634-webhcat.noarch 0:3.1.0.3.0.0.0-1634&lt;BR /&gt;hive_3_1_0_0_78.noarch 0:3.1.0.3.1.0.0-78 hive_3_1_0_0_78-hcatalog.noarch 0:3.1.0.3.1.0.0-78&lt;BR /&gt;hive_3_1_0_0_78-jdbc.noarch 0:3.1.0.3.1.0.0-78 hive_3_1_4_0_315.noarch 0:3.1.0.3.1.4.0-315&lt;BR /&gt;hive_3_1_4_0_315-hcatalog.noarch 0:3.1.0.3.1.4.0-315 hive_3_1_4_0_315-jdbc.noarch 0:3.1.0.3.1.4.0-315&lt;BR /&gt;livy2_3_1_0_0_78.noarch 0:0.5.0.3.1.0.0-78 livy2_3_1_4_0_315.noarch 0:0.5.0.3.1.4.0-315&lt;BR /&gt;pig_3_0_0_0_1634.noarch 0:0.16.0.3.0.0.0-1634 tez_3_0_0_0_1634.noarch 0:0.9.1.3.0.0.0-1634&lt;BR /&gt;tez_3_1_0_0_78.noarch 0:0.9.1.3.1.0.0-78 tez_3_1_4_0_315.noarch 0:0.9.1.3.1.4.0-315&lt;/P&gt;&lt;P&gt;Installing package spark2_3_1_0_0_78 ('/usr/bin/yum -y install spark2_3_1_0_0_78')&lt;/P&gt;</description>
    <pubDate>Tue, 24 Mar 2020 06:38:01 GMT</pubDate>
    <dc:creator>rambabuch</dc:creator>
    <dc:date>2020-03-24T06:38:01Z</dc:date>
    <item>
      <title>Spark2 Client installation is failing in Ambari upgrade 2.6.5 to 3.1.4</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Spark2-Client-installation-is-failing-in-Ambari-upgrade-2-6/m-p/292310#M216025</link>
      <description>&lt;P&gt;Restart Spark2 Client&lt;BR /&gt;Task Log&lt;BR /&gt;stderr: /var/lib/ambari-agent/data/errors-8648.txt&lt;/P&gt;
&lt;P&gt;2020-03-20 16:05:33,015 - &lt;STRONG&gt;The 'spark2-client' component did not advertise a version. This may indicate a problem with the component packaging.&lt;/STRONG&gt;&lt;BR /&gt;Traceback (most recent call last):&lt;BR /&gt;File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/spark_client.py", line 55, in &amp;lt;module&amp;gt;&lt;BR /&gt;SparkClient().execute()&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute&lt;BR /&gt;method(env)&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 966, in restart&lt;BR /&gt;self.install(env)&lt;BR /&gt;File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/spark_client.py", line 35, in install&lt;BR /&gt;self.configure(env)&lt;BR /&gt;File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/spark_client.py", line 41, in configure&lt;BR /&gt;setup_spark(env, 'client', upgrade_type=upgrade_type, action = 'config')&lt;BR /&gt;File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/setup_spark.py", line 107, in setup_spark&lt;BR /&gt;mode=0644&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__&lt;BR /&gt;self.env.run()&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run&lt;BR /&gt;self.run_action(resource, action)&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action&lt;BR /&gt;provider_action()&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/properties_file.py", line 55, in action_create&lt;BR /&gt;encoding = self.resource.encoding,&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__&lt;BR /&gt;self.env.run()&lt;BR /&gt;File 
"/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run&lt;BR /&gt;self.run_action(resource, action)&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action&lt;BR /&gt;provider_action()&lt;BR /&gt;File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 120, in action_create&lt;BR /&gt;raise Fail("Applying %s failed, parent directory %s doesn't exist" % (self.resource, dirname))&lt;BR /&gt;resource_management.core.exceptions.Fail: Applying File['/usr/hdp/current/spark2-client/conf/spark-defaults.conf'] failed, parent directory /usr/hdp/current/spark2-client/conf doesn't exist&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;stdout: /var/lib/ambari-agent/data/output-8648.txt&lt;/P&gt;
&lt;P&gt;2020-03-20 16:05:32,298 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -&amp;gt; 3.1.0.0-78&lt;BR /&gt;2020-03-20 16:05:32,313 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf&lt;BR /&gt;2020-03-20 16:05:32,483 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -&amp;gt; 3.1.0.0-78&lt;BR /&gt;2020-03-20 16:05:32,488 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf&lt;BR /&gt;2020-03-20 16:05:32,489 - Group['livy'] {}&lt;BR /&gt;2020-03-20 16:05:32,491 - Group['spark'] {}&lt;BR /&gt;2020-03-20 16:05:32,491 - Group['ranger'] {}&lt;BR /&gt;2020-03-20 16:05:32,491 - Group['hdfs'] {}&lt;BR /&gt;2020-03-20 16:05:32,491 - Group['hadoop'] {}&lt;BR /&gt;2020-03-20 16:05:32,491 - Group['users'] {}&lt;BR /&gt;2020-03-20 16:05:32,492 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}&lt;BR /&gt;2020-03-20 16:05:32,493 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}&lt;BR /&gt;2020-03-20 16:05:32,494 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}&lt;BR /&gt;2020-03-20 16:05:32,495 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}&lt;BR /&gt;2020-03-20 16:05:32,496 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None}&lt;BR /&gt;2020-03-20 16:05:32,497 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}&lt;BR /&gt;2020-03-20 16:05:32,497 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None}&lt;BR /&gt;2020-03-20 16:05:32,498 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None}&lt;BR /&gt;2020-03-20 16:05:32,499 - User['ambari-qa'] {'gid': 
'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}&lt;BR /&gt;2020-03-20 16:05:32,500 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}&lt;BR /&gt;2020-03-20 16:05:32,501 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}&lt;BR /&gt;2020-03-20 16:05:32,502 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}&lt;BR /&gt;2020-03-20 16:05:32,502 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}&lt;BR /&gt;2020-03-20 16:05:32,503 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}&lt;BR /&gt;2020-03-20 16:05:32,505 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}&lt;BR /&gt;2020-03-20 16:05:32,511 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if&lt;BR /&gt;2020-03-20 16:05:32,511 - Group['hdfs'] {}&lt;BR /&gt;2020-03-20 16:05:32,512 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}&lt;BR /&gt;2020-03-20 16:05:32,512 - FS Type: HDFS&lt;BR /&gt;2020-03-20 16:05:32,513 - Directory['/etc/hadoop'] {'mode': 0755}&lt;BR /&gt;2020-03-20 16:05:32,525 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}&lt;BR /&gt;2020-03-20 16:05:32,526 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}&lt;BR /&gt;2020-03-20 16:05:32,541 - Execute[('setenforce', '0')] {'not_if': '(! 
which getenforce ) || (which getenforce &amp;amp;&amp;amp; getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}&lt;BR /&gt;2020-03-20 16:05:32,549 - Skipping Execute[('setenforce', '0')] due to not_if&lt;BR /&gt;2020-03-20 16:05:32,550 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}&lt;BR /&gt;2020-03-20 16:05:32,552 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}&lt;BR /&gt;2020-03-20 16:05:32,552 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'}&lt;BR /&gt;2020-03-20 16:05:32,553 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}&lt;BR /&gt;2020-03-20 16:05:32,556 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}&lt;BR /&gt;2020-03-20 16:05:32,557 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}&lt;BR /&gt;2020-03-20 16:05:32,563 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}&lt;BR /&gt;2020-03-20 16:05:32,571 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}&lt;BR /&gt;2020-03-20 16:05:32,572 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}&lt;BR /&gt;2020-03-20 16:05:32,573 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}&lt;BR /&gt;2020-03-20 16:05:32,576 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}&lt;BR /&gt;2020-03-20 16:05:32,579 - 
File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}&lt;BR /&gt;2020-03-20 16:05:32,583 - Skipping unlimited key JCE policy check and setup since it is not required&lt;BR /&gt;2020-03-20 16:05:32,631 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}&lt;BR /&gt;2020-03-20 16:05:32,650 - call returned (0, '2.6.5.0-292\n2.6.5.1175-1\n3.0.0.0-1634\n3.0.1.0-187\n3.1.0.0-78\n3.1.4.0-315')&lt;BR /&gt;2020-03-20 16:05:32,715 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}&lt;BR /&gt;2020-03-20 16:05:32,735 - call returned (0, '2.6.5.0-292\n2.6.5.1175-1\n3.0.0.0-1634\n3.0.1.0-187\n3.1.0.0-78\n3.1.4.0-315')&lt;BR /&gt;2020-03-20 16:05:32,928 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf&lt;BR /&gt;2020-03-20 16:05:32,938 - Directory['/var/run/spark2'] {'owner': 'spark', 'create_parents': True, 'group': 'hadoop', 'mode': 0775}&lt;BR /&gt;2020-03-20 16:05:32,940 - Directory['/var/log/spark2'] {'owner': 'spark', 'group': 'hadoop', 'create_parents': True, 'mode': 0775}&lt;BR /&gt;2020-03-20 16:05:32,940 - PropertiesFile['/usr/hdp/current/spark2-client/conf/spark-defaults.conf'] {'owner': 'spark', 'key_value_delimiter': ' ', 'group': 'spark', 'mode': 0644, 'properties': ...}&lt;BR /&gt;2020-03-20 16:05:32,944 - Generating properties file: /usr/hdp/current/spark2-client/conf/spark-defaults.conf&lt;BR /&gt;2020-03-20 16:05:32,944 - File['/usr/hdp/current/spark2-client/conf/spark-defaults.conf'] {'owner': 'spark', 'content': InlineTemplate(...), 'group': 'spark', 'mode': 0644, 'encoding': 'UTF-8'}&lt;BR /&gt;2020-03-20 16:05:32,995 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}&lt;BR /&gt;2020-03-20 16:05:33,014 - call returned (0, '2.6.5.0-292\n2.6.5.1175-1\n3.0.0.0-1634\n3.0.1.0-187\n3.1.0.0-78\n3.1.4.0-315')&lt;BR /&gt;2020-03-20 16:05:33,015 - &lt;STRONG&gt;The 'spark2-client' component did not advertise a version. 
This may indicate a problem with the component packaging.&lt;/STRONG&gt;&lt;/P&gt;
&lt;P&gt;Command failed after 1 tries&lt;/P&gt;</description>
      <pubDate>Tue, 24 Mar 2020 10:01:36 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Spark2-Client-installation-is-failing-in-Ambari-upgrade-2-6/m-p/292310#M216025</guid>
      <dc:creator>rambabuch</dc:creator>
      <dc:date>2020-03-24T10:01:36Z</dc:date>
    </item>
    <item>
      <title>Re: Spark2 Client installation is failing in Ambari upgrade 2.6.5 to 3.1.4</title>
      <link>https://community.cloudera.com/t5/Support-Questions/Spark2-Client-installation-is-failing-in-Ambari-upgrade-2-6/m-p/292372#M216060</link>
      <description>&lt;P&gt;Solved this issue by running the commands below on the affected node. You need root/sudo access for this.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1) yum list installed | grep spark2&lt;/P&gt;&lt;P&gt;2) yum-complete-transaction&lt;/P&gt;&lt;P&gt;3) yum remove spark2*&lt;/P&gt;&lt;P&gt;4) Go to Ambari and install the Spark2 client again.&lt;/P&gt;&lt;P&gt;&amp;nbsp; &amp;nbsp; If it still fails, refresh the Tez config and retry step 4.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This issue can happen with almost any component when a yum transaction is broken or killed.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;The &lt;STRONG&gt;yum remove spark2*&lt;/STRONG&gt; output looks like this:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Removed:&lt;BR /&gt;spark2.noarch 0:2.3.2.3.1.0.0-78.el7 spark2_3_0_0_0_1634-yarn-shuffle.noarch 0:2.3.1.3.0.0.0-1634&lt;BR /&gt;spark2_3_1_0_0_78.noarch 0:2.3.2.3.1.0.0-78 spark2_3_1_0_0_78-master.noarch 0:2.3.2.3.1.0.0-78&lt;BR /&gt;spark2_3_1_0_0_78-python.noarch 0:2.3.2.3.1.0.0-78 spark2_3_1_0_0_78-worker.noarch 0:2.3.2.3.1.0.0-78&lt;BR /&gt;spark2_3_1_0_0_78-yarn-shuffle.noarch 0:2.3.2.3.1.0.0-78 spark2_3_1_4_0_315.noarch 0:2.3.2.3.1.4.0-315&lt;BR /&gt;spark2_3_1_4_0_315-python.noarch 0:2.3.2.3.1.4.0-315 spark2_3_1_4_0_315-yarn-shuffle.noarch 0:2.3.2.3.1.4.0-315&lt;/P&gt;&lt;P&gt;Dependency Removed:&lt;BR /&gt;datafu_3_0_0_0_1634.noarch 0:1.3.0.3.0.0.0-1634 hadoop_3_0_0_0_1634.x86_64 0:3.1.0.3.0.0.0-1634&lt;BR /&gt;hadoop_3_0_0_0_1634-client.x86_64 0:3.1.0.3.0.0.0-1634 hadoop_3_0_0_0_1634-hdfs.x86_64 0:3.1.0.3.0.0.0-1634&lt;BR /&gt;hadoop_3_0_0_0_1634-libhdfs.x86_64 0:3.1.0.3.0.0.0-1634 hadoop_3_0_0_0_1634-mapreduce.x86_64 0:3.1.0.3.0.0.0-1634&lt;BR /&gt;hadoop_3_0_0_0_1634-yarn.x86_64 0:3.1.0.3.0.0.0-1634 hadoop_3_1_0_0_78.x86_64 0:3.1.1.3.1.0.0-78&lt;BR /&gt;hadoop_3_1_0_0_78-client.x86_64 0:3.1.1.3.1.0.0-78 hadoop_3_1_0_0_78-hdfs.x86_64 
0:3.1.1.3.1.0.0-78&lt;BR /&gt;hadoop_3_1_0_0_78-libhdfs.x86_64 0:3.1.1.3.1.0.0-78 hadoop_3_1_0_0_78-mapreduce.x86_64 0:3.1.1.3.1.0.0-78&lt;BR /&gt;hadoop_3_1_0_0_78-yarn.x86_64 0:3.1.1.3.1.0.0-78 hadoop_3_1_4_0_315.x86_64 0:3.1.1.3.1.4.0-315&lt;BR /&gt;hadoop_3_1_4_0_315-client.x86_64 0:3.1.1.3.1.4.0-315 hadoop_3_1_4_0_315-hdfs.x86_64 0:3.1.1.3.1.4.0-315&lt;BR /&gt;hadoop_3_1_4_0_315-libhdfs.x86_64 0:3.1.1.3.1.4.0-315 hadoop_3_1_4_0_315-mapreduce.x86_64 0:3.1.1.3.1.4.0-315&lt;BR /&gt;hadoop_3_1_4_0_315-yarn.x86_64 0:3.1.1.3.1.4.0-315 hbase_3_0_0_0_1634.noarch 0:2.0.0.3.0.0.0-1634&lt;BR /&gt;hbase_3_1_0_0_78.noarch 0:2.0.2.3.1.0.0-78 hbase_3_1_4_0_315.noarch 0:2.0.2.3.1.4.0-315&lt;BR /&gt;hive_3_0_0_0_1634.noarch 0:3.1.0.3.0.0.0-1634 hive_3_0_0_0_1634-hcatalog.noarch 0:3.1.0.3.0.0.0-1634&lt;BR /&gt;hive_3_0_0_0_1634-jdbc.noarch 0:3.1.0.3.0.0.0-1634 hive_3_0_0_0_1634-webhcat.noarch 0:3.1.0.3.0.0.0-1634&lt;BR /&gt;hive_3_1_0_0_78.noarch 0:3.1.0.3.1.0.0-78 hive_3_1_0_0_78-hcatalog.noarch 0:3.1.0.3.1.0.0-78&lt;BR /&gt;hive_3_1_0_0_78-jdbc.noarch 0:3.1.0.3.1.0.0-78 hive_3_1_4_0_315.noarch 0:3.1.0.3.1.4.0-315&lt;BR /&gt;hive_3_1_4_0_315-hcatalog.noarch 0:3.1.0.3.1.4.0-315 hive_3_1_4_0_315-jdbc.noarch 0:3.1.0.3.1.4.0-315&lt;BR /&gt;livy2_3_1_0_0_78.noarch 0:0.5.0.3.1.0.0-78 livy2_3_1_4_0_315.noarch 0:0.5.0.3.1.4.0-315&lt;BR /&gt;pig_3_0_0_0_1634.noarch 0:0.16.0.3.0.0.0-1634 tez_3_0_0_0_1634.noarch 0:0.9.1.3.0.0.0-1634&lt;BR /&gt;tez_3_1_0_0_78.noarch 0:0.9.1.3.1.0.0-78 tez_3_1_4_0_315.noarch 0:0.9.1.3.1.4.0-315&lt;/P&gt;&lt;P&gt;Installing package spark2_3_1_0_0_78 ('/usr/bin/yum -y install spark2_3_1_0_0_78')&lt;/P&gt;</description>
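The four recovery steps in this answer can be sketched as a small shell script. This is a dry-run sketch, not the thread author's exact script: by default it only prints each command (DRY_RUN=echo), since actually removing spark2* needs root and, as the output above shows, also removes dependent hadoop/hive/hbase/tez packages. The final install command mirrors the one Ambari logged in the thread.

```shell
#!/bin/sh
# Dry-run sketch of the yum recovery sequence for a node where a yum
# transaction was interrupted during the Ambari 2.6.5 -> 3.1.4 upgrade.
# DRY_RUN=echo prints each command; clear it (DRY_RUN="") and run as
# root to execute for real.
DRY_RUN="${DRY_RUN:-echo}"

# 1) See which spark2 packages are currently installed
$DRY_RUN yum list installed "spark2*"

# 2) Finish any yum transaction that was left half-done
$DRY_RUN yum-complete-transaction

# 3) Remove all spark2 packages (this also drags out dependent
#    hadoop/hive/hbase/tez packages, per the output above)
$DRY_RUN yum -y remove "spark2*"

# 4) Reinstall the Spark2 client from the Ambari UI; under the hood
#    Ambari runs something like: /usr/bin/yum -y install spark2_3_1_0_0_78
```

If the reinstall still fails, the answer suggests refreshing the Tez config in Ambari and retrying step 4.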
      <pubDate>Tue, 24 Mar 2020 06:38:01 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/Spark2-Client-installation-is-failing-in-Ambari-upgrade-2-6/m-p/292372#M216060</guid>
      <dc:creator>rambabuch</dc:creator>
      <dc:date>2020-03-24T06:38:01Z</dc:date>
    </item>
  </channel>
</rss>

