Ambari Fails to install Oozie

Rising Star

Hi, I have 5 OpenStack nodes: one is the ambari-server and the other four are my agents.

In the deployment step of creating the cluster, all services and slaves install on 3 of the nodes, but one node fails at "Installing Oozie". I checked the logs and the failure is caused by Falcon.

I tried to install it manually with "yum install falcon", but the same error occurs.

Here is the stderr:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_client.py", line 76, in <module>
    OozieClient().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_client.py", line 37, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 567, in install_packages
    retry_count=agent_stack_retry_count)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
    self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 83, in checked_call_with_retries
    return self._call_with_retries(cmd, is_checked=True, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 91, in _call_with_retries
    code, out = func(cmd, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install falcon_2_5_3_0_37' returned 1. There are unfinished transactions remaining. You might consider running yum-complete-transaction, or "yum-complete-transaction --cleanup-only" and "yum history redo last", first to finish them. If those don't work you'll have to try removing/installing packages by hand (maybe package-cleanup can help).
No Presto metadata available for HDP-2.5
/usr/bin/install: invalid user 'falcon'
/usr/bin/install: invalid user 'falcon'
error: %pre(falcon_2_5_3_0_37-0.10.0.2.5.3.0-37.el6.noarch) scriptlet failed, exit status 1
Error in PREIN scriptlet in rpm package falcon_2_5_3_0_37-0.10.0.2.5.3.0-37.el6.noarch

And here is the stdout:

2017-06-06 14:40:50,770 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-06-06 14:40:50,771 - Group['livy'] {}
2017-06-06 14:40:50,772 - Group['spark'] {}
2017-06-06 14:40:50,772 - Group['zeppelin'] {}
2017-06-06 14:40:50,773 - Group['hadoop'] {}
2017-06-06 14:40:50,773 - Group['users'] {}
2017-06-06 14:40:50,773 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,774 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,774 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,775 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,776 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-06-06 14:40:50,776 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,777 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-06-06 14:40:50,778 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,778 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,779 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,780 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-06-06 14:40:50,780 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,781 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,781 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,782 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,783 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,784 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,785 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,785 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-06-06 14:40:50,787 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-06-06 14:40:50,792 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-06-06 14:40:50,793 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-06-06 14:40:50,794 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-06-06 14:40:50,795 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-06-06 14:40:50,799 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-06-06 14:40:50,800 - Group['hdfs'] {}
2017-06-06 14:40:50,800 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-06-06 14:40:50,801 - FS Type: 
2017-06-06 14:40:50,801 - Directory['/etc/hadoop'] {'mode': 0755}
2017-06-06 14:40:50,813 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-06-06 14:40:50,814 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-06-06 14:40:50,835 - Initializing 2 repositories
2017-06-06 14:40:50,836 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-06-06 14:40:50,842 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-06-06 14:40:50,843 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-06-06 14:40:50,845 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-06-06 14:40:50,845 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:50,929 - Skipping installation of existing package unzip
2017-06-06 14:40:50,930 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:50,945 - Skipping installation of existing package curl
2017-06-06 14:40:50,945 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:50,959 - Skipping installation of existing package hdp-select
2017-06-06 14:40:51,735 - Package['zip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:51,811 - Skipping installation of existing package zip
2017-06-06 14:40:51,812 - Package['extjs'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:51,825 - Skipping installation of existing package extjs
2017-06-06 14:40:51,826 - Package['oozie_2_5_3_0_37'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:51,839 - Skipping installation of existing package oozie_2_5_3_0_37
2017-06-06 14:40:51,840 - Package['falcon_2_5_3_0_37'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:51,853 - Installing package falcon_2_5_3_0_37 ('/usr/bin/yum -d 0 -e 0 -y install falcon_2_5_3_0_37')

Command failed after 1 tries

Any ideas?

I also ran:

yum-complete-transaction --cleanup-only

yum erase falcon

yum install falcon

but the same error happened again.

Then I downloaded Falcon from Git and built it with Maven, but when I type "falcon" on the command line, the command is not found.

Now the Ambari retry gives me a timeout:

Python script has been killed due to timeout after waiting 1800 secs
1 ACCEPTED SOLUTION

Rising Star

Since the error message contains "invalid user 'falcon'", I tried to create the falcon user manually:

adduser -g falcon falcon

but it failed with an error about /etc/gshadow.lock.

I figured out that an earlier, incomplete attempt to create the falcon user had left /etc/gshadow.lock behind; normally this lock file is removed once the user has been created. So:

rm /etc/gshadow.lock

yum install falcon

And the problem is gone!
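For anyone hitting the same thing, here is roughly the sequence that worked, as a sketch only (the extra *.lock names are just the usual shadow-utils lock files worth checking, and make sure no useradd/groupadd process is still running before removing anything):

# confirm no user/group creation is actually in progress
ps aux | grep -E 'useradd|groupadd' | grep -v grep

# look for leftover lock files from an interrupted user creation
ls -l /etc/passwd.lock /etc/group.lock /etc/shadow.lock /etc/gshadow.lock 2>/dev/null

# remove the stale lock (only gshadow.lock was left behind in my case)
rm /etc/gshadow.lock

# reinstall; with the lock gone, the package install (and falcon user creation) succeeds
yum install falcon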


2 REPLIES

Master Mentor

@Sara Alizadeh

Are you able to access the repository from the host where you are trying to install Falcon?

wget http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0/hdp.repo
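A couple of extra checks along the same lines can confirm that yum itself sees the HDP repos; a quick sketch (adjust the repo IDs if yours differ):

# make sure yum resolves the HDP repos from this host
yum clean all
yum repolist enabled | grep -i HDP

# verify the base URL answers over HTTP
curl -I http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0/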

.

- I suspect the "There are unfinished transactions remaining." message is appearing because an old yum transaction was left incomplete. Are you getting the same message every time you try to install Falcon? If so, the checks sketched below may show what yum thinks is still pending.
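This is only a sketch; yum-complete-transaction and package-cleanup come from the yum-utils package:

# list recent transactions, including any that did not finish
yum history list all | head -20

# try to finish, or clean up, the interrupted transaction
yum-complete-transaction
yum-complete-transaction --cleanup-only

# report leftover duplicate or problem packages
package-cleanup --problems
package-cleanup --dupes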

- Regarding the timeout message, I suspect it may be due to a slow network, or a proxy configured on your host or at the yum level, which would slow the installation down and ultimately cause the agent timeout. By default the Ambari agent allows 1800 seconds to complete the task; see agent.package.install.task.timeout=1800 in "/etc/ambari-server/conf/ambari.properties". That is usually enough, but a network issue (slow internet) or a proxy-server issue can push past it.
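If the installation is just slow rather than broken, one option is to raise that timeout on the Ambari server. A minimal sketch, using the property mentioned above and 3600 seconds purely as an example value:

# check the current value
grep agent.package.install.task.timeout /etc/ambari-server/conf/ambari.properties

# raise it (3600 is only an example)
sed -i 's/^agent.package.install.task.timeout=.*/agent.package.install.task.timeout=3600/' /etc/ambari-server/conf/ambari.properties

# restart ambari-server so the new value is picked up
ambari-server restart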
