
Why does HDP version 2.4.2.0 require the Spark Livy component at version 2.5.3.0-37, when it should take version 2_5_0_0_1245?

Explorer

During the installation of Spark through Ambari for HDP version 2.4.2.0, the installation fails because it wants a version of Livy that is not in the HDP 2.4.x repository. We have mirrored the repository because the installation is air-gapped and our HTTP proxy cannot simply reach arbitrary external hosts.
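For context, the mirror was built roughly along these lines (a sketch only; the repo id and local path here are illustrative, not our exact setup):

# On a host that can reach the proxy: pull down the HDP repo contents
reposync -r HDP-2.5 -p /srv/mirror

# Generate the repodata so yum clients can consume the directory
createrepo /srv/mirror/HDP-2.5

The resulting directory is then served over HTTP inside the air gap, and the Ambari repository base URLs point at it.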

Does anyone know the reason for, or a fix for, an issue of this nature? Thanks in advance! Here is the error generated while installing the Spark service role:

stderr: /var/lib/ambari-agent/data/errors-1116.txt
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.1/package/scripts/spark_client.py", line 88, in <module>
    SparkClient().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/SPARK/1.2.1/package/scripts/spark_client.py", line 37, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 567, in install_packages
    retry_count=agent_stack_retry_count)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 51, in install_package
    self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 86, in checked_call_with_retries
    return self._call_with_retries(cmd, is_checked=True, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 98, in _call_with_retries
    code, out = func(cmd, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 70, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 92, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 140, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 293, in _call
    raise ExecutionFailed(err_msg, code, out, err)
resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/yum -d 0 -e 0 -y install livy_2_5_3_0_37' returned 1. Error: Nothing to do
stdout: /var/lib/ambari-agent/data/output-1116.txt
2017-04-27 13:08:57,196 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.3.0-37
2017-04-27 13:08:57,196 - Checking if need to create versioned conf dir /etc/hadoop/2.5.3.0-37/0
2017-04-27 13:08:57,197 - call[('ambari-python-wrap', '/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2017-04-27 13:08:57,220 - call returned (1, '/etc/hadoop/2.5.3.0-37/0 exist already', '')
2017-04-27 13:08:57,221 - checked_call[('ambari-python-wrap', '/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.3.0-37', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2017-04-27 13:08:57,243 - checked_call returned (0, '')
2017-04-27 13:08:57,243 - Ensuring that hadoop has the correct symlink structure
2017-04-27 13:08:57,243 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-04-27 13:08:57,245 - Group['hadoop'] {}
2017-04-27 13:08:57,246 - Group['users'] {}
2017-04-27 13:08:57,246 - Group['spark'] {}
2017-04-27 13:08:57,246 - Group['livy'] {}
2017-04-27 13:08:57,246 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-04-27 13:08:57,247 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-04-27 13:08:57,247 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-04-27 13:08:57,248 - User['flume'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-04-27 13:08:57,248 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-04-27 13:08:57,249 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-04-27 13:08:57,250 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-04-27 13:08:57,250 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-04-27 13:08:57,251 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-04-27 13:08:57,251 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-04-27 13:08:57,252 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['users']}
2017-04-27 13:08:57,252 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-04-27 13:08:57,253 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-04-27 13:08:57,253 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-04-27 13:08:57,254 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-04-27 13:08:57,255 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-04-27 13:08:57,255 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': ['hadoop']}
2017-04-27 13:08:57,256 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-04-27 13:08:57,258 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-04-27 13:08:57,265 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-04-27 13:08:57,265 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-04-27 13:08:57,266 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-04-27 13:08:57,267 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-04-27 13:08:57,274 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-04-27 13:08:57,275 - Group['hdfs'] {}
2017-04-27 13:08:57,275 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': ['hadoop', 'hdfs']}
2017-04-27 13:08:57,276 - FS Type: 
2017-04-27 13:08:57,276 - Directory['/etc/hadoop'] {'mode': 0755}
2017-04-27 13:08:57,292 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-04-27 13:08:57,292 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-04-27 13:08:57,305 - Initializing 2 repositories
2017-04-27 13:08:57,306 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.0.0', 'action': ['create'], 'components': ['HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-04-27 13:08:57,313 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.0.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-04-27 13:08:57,314 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6', 'action': ['create'], 'components': ['HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-04-27 13:08:57,316 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-04-27 13:08:57,317 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-04-27 13:08:57,392 - Skipping installation of existing package unzip
2017-04-27 13:08:57,392 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-04-27 13:08:57,402 - Skipping installation of existing package curl
2017-04-27 13:08:57,402 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-04-27 13:08:57,412 - Skipping installation of existing package hdp-select
2017-04-27 13:08:57,574 - Version 2.5.3.0-37 was provided as effective cluster version.  Using package version 2_5_3_0_37
2017-04-27 13:08:57,575 - Package['spark_2_5_3_0_37'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-04-27 13:08:57,646 - Skipping installation of existing package spark_2_5_3_0_37
2017-04-27 13:08:57,646 - Version 2.5.3.0-37 was provided as effective cluster version.  Using package version 2_5_3_0_37
2017-04-27 13:08:57,648 - Package['spark_2_5_3_0_37-python'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-04-27 13:08:57,657 - Skipping installation of existing package spark_2_5_3_0_37-python
2017-04-27 13:08:57,657 - Version 2.5.3.0-37 was provided as effective cluster version.  Using package version 2_5_3_0_37
2017-04-27 13:08:57,658 - Package['livy_2_5_3_0_37'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-04-27 13:08:57,668 - Installing package livy_2_5_3_0_37 ('/usr/bin/yum -d 0 -e 0 -y install livy_2_5_3_0_37')
2017-04-27 13:09:02,514 - Execution of '/usr/bin/yum -d 0 -e 0 -y install livy_2_5_3_0_37' returned 1. Error: Nothing to do
2017-04-27 13:09:02,514 - Failed to install package livy_2_5_3_0_37. Executing '/usr/bin/yum clean metadata'
2017-04-27 13:09:02,659 - Retrying to install package livy_2_5_3_0_37 after 30 seconds

Command failed after 1 tries
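For reference, I can check which Livy builds the configured repos actually provide with the commands below (the repo id matches the HDP.repo file shown in the log above; repoquery comes from yum-utils):

# Show every livy package visible to yum, all versions
yum --showduplicates list available 'livy*'

# Query only the HDP-2.5 repo that Ambari wrote out
repoquery --repoid=HDP-2.5 'livy*'

If livy_2_5_3_0_37 does not appear in that output, the mirror simply does not carry the package the installer is asking for.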

Regards,

Freemon

1 ACCEPTED SOLUTION

Explorer

In terms of attribution, I am not certain whether the repository owner is Bluedata or Hortonworks. The issue is that the RHEL6 repo URL is not accurate in the default configuration. One caveat: I am using Ambari within Bluedata. The URL in the rhel6 section must be corrected, and voilà! See the steps below. Once you do this, you can install Spark as a service role within your cluster with no issues.

1) Go to the Ambari UI.
2) Click the "Admin" menu on the top horizontal bar.
3) Select the "Stack and Versions" item.
4) Click the "Versions" tab. The version "HDP 2.5.3.0" will be displayed.
5) Click the highlighted "Show Details" text.
6) Click the edit icon on the right-hand side of the pop-up window.
7) Scroll down to the redhat6 portion of the window.
8) Edit the "HDP" textbox. It contains http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.5.0; change it to http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.5.3.0
9) Click the Save button at the bottom of the window.
10) Exit the menu with the "X" button at the top of the window.
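After saving, it is worth a quick sanity check on a cluster node that the repo file was rewritten and the Livy package now resolves (the repo id and package name below come from the log output earlier in this thread):

# The baseurl should now end in 2.5.3.0
grep baseurl /etc/yum.repos.d/HDP.repo

# Drop cached metadata so yum re-reads the corrected repo
sudo yum clean metadata

# This previously failed with "Nothing to do"; it should now list the package
yum list available livy_2_5_3_0_37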

