
Spark2 Client installation is failing during Ambari-managed HDP upgrade from 2.6.5 to 3.1.4

Contributor

Restart Spark2 Client
Task Log
stderr: /var/lib/ambari-agent/data/errors-8648.txt

2020-03-20 16:05:33,015 - The 'spark2-client' component did not advertise a version. This may indicate a problem with the component packaging.
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/spark_client.py", line 55, in <module>
SparkClient().execute()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 352, in execute
method(env)
File "/usr/lib/ambari-agent/lib/resource_management/libraries/script/script.py", line 966, in restart
self.install(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/spark_client.py", line 35, in install
self.configure(env)
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/spark_client.py", line 41, in configure
setup_spark(env, 'client', upgrade_type=upgrade_type, action = 'config')
File "/var/lib/ambari-agent/cache/stacks/HDP/3.0/services/SPARK2/package/scripts/setup_spark.py", line 107, in setup_spark
mode=0644
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/libraries/providers/properties_file.py", line 55, in action_create
encoding = self.resource.encoding,
File "/usr/lib/ambari-agent/lib/resource_management/core/base.py", line 166, in __init__
self.env.run()
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/ambari-agent/lib/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/ambari-agent/lib/resource_management/core/providers/system.py", line 120, in action_create
raise Fail("Applying %s failed, parent directory %s doesn't exist" % (self.resource, dirname))
resource_management.core.exceptions.Fail: Applying File['/usr/hdp/current/spark2-client/conf/spark-defaults.conf'] failed, parent directory /usr/hdp/current/spark2-client/conf doesn't exist


stdout: /var/lib/ambari-agent/data/output-8648.txt

2020-03-20 16:05:32,298 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2020-03-20 16:05:32,313 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2020-03-20 16:05:32,483 - Stack Feature Version Info: Cluster Stack=3.1, Command Stack=None, Command Version=3.1.0.0-78 -> 3.1.0.0-78
2020-03-20 16:05:32,488 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2020-03-20 16:05:32,489 - Group['livy'] {}
2020-03-20 16:05:32,491 - Group['spark'] {}
2020-03-20 16:05:32,491 - Group['ranger'] {}
2020-03-20 16:05:32,491 - Group['hdfs'] {}
2020-03-20 16:05:32,491 - Group['hadoop'] {}
2020-03-20 16:05:32,491 - Group['users'] {}
2020-03-20 16:05:32,492 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-03-20 16:05:32,493 - User['yarn-ats'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-03-20 16:05:32,494 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-03-20 16:05:32,495 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-03-20 16:05:32,496 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['ranger', 'hadoop'], 'uid': None}
2020-03-20 16:05:32,497 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2020-03-20 16:05:32,497 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['livy', 'hadoop'], 'uid': None}
2020-03-20 16:05:32,498 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['spark', 'hadoop'], 'uid': None}
2020-03-20 16:05:32,499 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'users'], 'uid': None}
2020-03-20 16:05:32,500 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-03-20 16:05:32,501 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop'], 'uid': None}
2020-03-20 16:05:32,502 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-03-20 16:05:32,502 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop'], 'uid': None}
2020-03-20 16:05:32,503 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2020-03-20 16:05:32,505 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2020-03-20 16:05:32,511 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa 0'] due to not_if
2020-03-20 16:05:32,511 - Group['hdfs'] {}
2020-03-20 16:05:32,512 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hdfs', 'hadoop', u'hdfs']}
2020-03-20 16:05:32,512 - FS Type: HDFS
2020-03-20 16:05:32,513 - Directory['/etc/hadoop'] {'mode': 0755}
2020-03-20 16:05:32,525 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2020-03-20 16:05:32,526 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2020-03-20 16:05:32,541 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2020-03-20 16:05:32,549 - Skipping Execute[('setenforce', '0')] due to not_if
2020-03-20 16:05:32,550 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2020-03-20 16:05:32,552 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2020-03-20 16:05:32,552 - Directory['/var/run/hadoop/hdfs'] {'owner': 'hdfs', 'cd_access': 'a'}
2020-03-20 16:05:32,553 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2020-03-20 16:05:32,556 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'hdfs'}
2020-03-20 16:05:32,557 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'hdfs'}
2020-03-20 16:05:32,563 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/log4j.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2020-03-20 16:05:32,571 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/hadoop-metrics2.properties'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2020-03-20 16:05:32,572 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2020-03-20 16:05:32,573 - File['/usr/hdp/3.1.0.0-78/hadoop/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2020-03-20 16:05:32,576 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop', 'mode': 0644}
2020-03-20 16:05:32,579 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2020-03-20 16:05:32,583 - Skipping unlimited key JCE policy check and setup since it is not required
2020-03-20 16:05:32,631 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2020-03-20 16:05:32,650 - call returned (0, '2.6.5.0-292\n2.6.5.1175-1\n3.0.0.0-1634\n3.0.1.0-187\n3.1.0.0-78\n3.1.4.0-315')
2020-03-20 16:05:32,715 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2020-03-20 16:05:32,735 - call returned (0, '2.6.5.0-292\n2.6.5.1175-1\n3.0.0.0-1634\n3.0.1.0-187\n3.1.0.0-78\n3.1.4.0-315')
2020-03-20 16:05:32,928 - Using hadoop conf dir: /usr/hdp/3.1.0.0-78/hadoop/conf
2020-03-20 16:05:32,938 - Directory['/var/run/spark2'] {'owner': 'spark', 'create_parents': True, 'group': 'hadoop', 'mode': 0775}
2020-03-20 16:05:32,940 - Directory['/var/log/spark2'] {'owner': 'spark', 'group': 'hadoop', 'create_parents': True, 'mode': 0775}
2020-03-20 16:05:32,940 - PropertiesFile['/usr/hdp/current/spark2-client/conf/spark-defaults.conf'] {'owner': 'spark', 'key_value_delimiter': ' ', 'group': 'spark', 'mode': 0644, 'properties': ...}
2020-03-20 16:05:32,944 - Generating properties file: /usr/hdp/current/spark2-client/conf/spark-defaults.conf
2020-03-20 16:05:32,944 - File['/usr/hdp/current/spark2-client/conf/spark-defaults.conf'] {'owner': 'spark', 'content': InlineTemplate(...), 'group': 'spark', 'mode': 0644, 'encoding': 'UTF-8'}
2020-03-20 16:05:32,995 - call[('ambari-python-wrap', u'/usr/bin/hdp-select', 'versions')] {}
2020-03-20 16:05:33,014 - call returned (0, '2.6.5.0-292\n2.6.5.1175-1\n3.0.0.0-1634\n3.0.1.0-187\n3.1.0.0-78\n3.1.4.0-315')
2020-03-20 16:05:33,015 - The 'spark2-client' component did not advertise a version. This may indicate a problem with the component packaging.

Command failed after 1 tries

1 ACCEPTED SOLUTION

Contributor

I solved this issue by running the commands below on the corresponding node; you need root/sudo access for this. (A commented version of the same sequence is sketched right after the list.)

 

1) yum list installed | grep spark2

2) yum-complete-transaction

3) yum remove spark2*

4) Go to Ambari and install the Spark2 client again.

    If it still fails, refresh the Tez configs and then retry step 4.
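
For reference, here is the same sequence with short notes on what each command does. This is just a sketch of the steps above, assuming a RHEL/CentOS 7 host where yum-utils (which provides yum-complete-transaction) is available:

# list spark2 packages that are still (possibly only partially) installed
yum list installed | grep spark2
# finish or roll back any yum transactions that were interrupted earlier (from yum-utils)
yum-complete-transaction
# remove all spark2 packages so Ambari can do a clean install
# (quoting the pattern keeps the shell from expanding it before yum sees it)
yum remove 'spark2*'
# finally, reinstall the Spark2 client on this host from Ambari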

 

This issue can happen with almost any component when yum is interrupted or killed mid-transaction, which can leave packages half-installed.
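
If you want to confirm that an interrupted yum transaction is the culprit before removing anything, the checks below are a reasonable starting point (standard yum commands; yum-complete-transaction is part of yum-utils):

# list recent yum transactions; aborted or incomplete ones usually stand out here
yum history list all
# report problems in the local RPM database, e.g. duplicate or incomplete packages
yum check
# clean up leftover transaction journal files without installing or removing anything
yum-complete-transaction --cleanup-only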

 

The output of yum remove spark2* looks like this:

 

Removed:
spark2.noarch 0:2.3.2.3.1.0.0-78.el7 spark2_3_0_0_0_1634-yarn-shuffle.noarch 0:2.3.1.3.0.0.0-1634
spark2_3_1_0_0_78.noarch 0:2.3.2.3.1.0.0-78 spark2_3_1_0_0_78-master.noarch 0:2.3.2.3.1.0.0-78
spark2_3_1_0_0_78-python.noarch 0:2.3.2.3.1.0.0-78 spark2_3_1_0_0_78-worker.noarch 0:2.3.2.3.1.0.0-78
spark2_3_1_0_0_78-yarn-shuffle.noarch 0:2.3.2.3.1.0.0-78 spark2_3_1_4_0_315.noarch 0:2.3.2.3.1.4.0-315
spark2_3_1_4_0_315-python.noarch 0:2.3.2.3.1.4.0-315 spark2_3_1_4_0_315-yarn-shuffle.noarch 0:2.3.2.3.1.4.0-315

Dependency Removed:
datafu_3_0_0_0_1634.noarch 0:1.3.0.3.0.0.0-1634 hadoop_3_0_0_0_1634.x86_64 0:3.1.0.3.0.0.0-1634
hadoop_3_0_0_0_1634-client.x86_64 0:3.1.0.3.0.0.0-1634 hadoop_3_0_0_0_1634-hdfs.x86_64 0:3.1.0.3.0.0.0-1634
hadoop_3_0_0_0_1634-libhdfs.x86_64 0:3.1.0.3.0.0.0-1634 hadoop_3_0_0_0_1634-mapreduce.x86_64 0:3.1.0.3.0.0.0-1634
hadoop_3_0_0_0_1634-yarn.x86_64 0:3.1.0.3.0.0.0-1634 hadoop_3_1_0_0_78.x86_64 0:3.1.1.3.1.0.0-78
hadoop_3_1_0_0_78-client.x86_64 0:3.1.1.3.1.0.0-78 hadoop_3_1_0_0_78-hdfs.x86_64 0:3.1.1.3.1.0.0-78
hadoop_3_1_0_0_78-libhdfs.x86_64 0:3.1.1.3.1.0.0-78 hadoop_3_1_0_0_78-mapreduce.x86_64 0:3.1.1.3.1.0.0-78
hadoop_3_1_0_0_78-yarn.x86_64 0:3.1.1.3.1.0.0-78 hadoop_3_1_4_0_315.x86_64 0:3.1.1.3.1.4.0-315
hadoop_3_1_4_0_315-client.x86_64 0:3.1.1.3.1.4.0-315 hadoop_3_1_4_0_315-hdfs.x86_64 0:3.1.1.3.1.4.0-315
hadoop_3_1_4_0_315-libhdfs.x86_64 0:3.1.1.3.1.4.0-315 hadoop_3_1_4_0_315-mapreduce.x86_64 0:3.1.1.3.1.4.0-315
hadoop_3_1_4_0_315-yarn.x86_64 0:3.1.1.3.1.4.0-315 hbase_3_0_0_0_1634.noarch 0:2.0.0.3.0.0.0-1634
hbase_3_1_0_0_78.noarch 0:2.0.2.3.1.0.0-78 hbase_3_1_4_0_315.noarch 0:2.0.2.3.1.4.0-315
hive_3_0_0_0_1634.noarch 0:3.1.0.3.0.0.0-1634 hive_3_0_0_0_1634-hcatalog.noarch 0:3.1.0.3.0.0.0-1634
hive_3_0_0_0_1634-jdbc.noarch 0:3.1.0.3.0.0.0-1634 hive_3_0_0_0_1634-webhcat.noarch 0:3.1.0.3.0.0.0-1634
hive_3_1_0_0_78.noarch 0:3.1.0.3.1.0.0-78 hive_3_1_0_0_78-hcatalog.noarch 0:3.1.0.3.1.0.0-78
hive_3_1_0_0_78-jdbc.noarch 0:3.1.0.3.1.0.0-78 hive_3_1_4_0_315.noarch 0:3.1.0.3.1.4.0-315
hive_3_1_4_0_315-hcatalog.noarch 0:3.1.0.3.1.4.0-315 hive_3_1_4_0_315-jdbc.noarch 0:3.1.0.3.1.4.0-315
livy2_3_1_0_0_78.noarch 0:0.5.0.3.1.0.0-78 livy2_3_1_4_0_315.noarch 0:0.5.0.3.1.4.0-315
pig_3_0_0_0_1634.noarch 0:0.16.0.3.0.0.0-1634 tez_3_0_0_0_1634.noarch 0:0.9.1.3.0.0.0-1634
tez_3_1_0_0_78.noarch 0:0.9.1.3.1.0.0-78 tez_3_1_4_0_315.noarch 0:0.9.1.3.1.4.0-315

Installing package spark2_3_1_0_0_78 ('/usr/bin/yum -y install spark2_3_1_0_0_78')
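
Once the reinstall succeeds, a quick sanity check (assuming the standard HDP layout seen in the logs above) is to confirm that the directory the original traceback complained about now exists and that hdp-select can see the component:

# should report the active version for the spark2-client component
hdp-select status spark2-client
# the directory the traceback complained about should now exist
ls -ld /usr/hdp/current/spark2-client/conf
# and Ambari should have regenerated spark-defaults.conf inside it
ls -l /usr/hdp/current/spark2-client/conf/spark-defaults.conf

After that, the Restart Spark2 Client task in Ambari should no longer fail with the "parent directory doesn't exist" error.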

