<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Ambari Fails to install Oozie - Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Ambari-Fails-to-install-Oozie/m-p/201534#M62541</link>
    <description>Ambari fails to install Oozie on one node because the falcon package's %pre scriptlet cannot create the "falcon" user; question and replies archived from the Cloudera Community.</description>
    <pubDate>Thu, 08 Jun 2017 04:15:12 GMT</pubDate>
    <dc:creator>alizadeh_uut1</dc:creator>
    <dc:date>2017-06-08T04:15:12Z</dc:date>
    <item>
      <title>Ambari Fails to install Oozie</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Ambari-Fails-to-install-Oozie/m-p/201534#M62541</link>
      <description>&lt;P&gt;Hi, I have 5 OpenStack nodes: one is the ambari-server and the other four are my agents.&lt;/P&gt;&lt;P&gt;During the deployment step of cluster creation, all services and slaves install successfully on three of the nodes, except one.&lt;/P&gt;&lt;P&gt;That node fails at "Installing Oozie". I checked the logs and the failure is caused by Falcon.&lt;/P&gt;&lt;P&gt;I tried to install it manually with "yum install falcon", but the same error occurs.&lt;/P&gt;&lt;P&gt;Here is stderr:&lt;/P&gt;&lt;PRE&gt;Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_client.py", line 76, in &amp;lt;module&amp;gt;
    OozieClient().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/var/lib/ambari-agent/cache/common-services/OOZIE/4.0.0.2.0/package/scripts/oozie_client.py", line 37, in install
    self.install_packages(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 567, in install_packages
    retry_count=agent_stack_retry_count)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
    self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 83, in checked_call_with_retries
    return self._call_with_retries(cmd, is_checked=True, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 91, in _call_with_retries
    code, out = func(cmd, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install falcon_2_5_3_0_37' returned 1. There are unfinished transactions remaining. You might consider running yum-complete-transaction, or "yum-complete-transaction --cleanup-only" and "yum history redo last", first to finish them. If those don't work you'll have to try removing/installing packages by hand (maybe package-cleanup can help).
No Presto metadata available for HDP-2.5
/usr/bin/install: invalid user 'falcon'
/usr/bin/install: invalid user 'falcon'
error: %pre(falcon_2_5_3_0_37-0.10.0.2.5.3.0-37.el6.noarch) scriptlet failed, exit status 1
Error in PREIN scriptlet in rpm package falcon_2_5_3_0_37-0.10.0.2.5.3.0-37.el6.noarch&lt;/PRE&gt;&lt;P&gt;and stdout:&lt;/P&gt;&lt;PRE&gt;2017-06-06 14:40:50,770 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2017-06-06 14:40:50,771 - Group['livy'] {}
2017-06-06 14:40:50,772 - Group['spark'] {}
2017-06-06 14:40:50,772 - Group['zeppelin'] {}
2017-06-06 14:40:50,773 - Group['hadoop'] {}
2017-06-06 14:40:50,773 - Group['users'] {}
2017-06-06 14:40:50,773 - User['hive'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,774 - User['storm'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,774 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,775 - User['infra-solr'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,776 - User['oozie'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-06-06 14:40:50,776 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,777 - User['tez'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-06-06 14:40:50,778 - User['zeppelin'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,778 - User['livy'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,779 - User['spark'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,780 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2017-06-06 14:40:50,780 - User['kafka'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,781 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,781 - User['sqoop'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,782 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,783 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,784 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,785 - User['hcat'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2017-06-06 14:40:50,785 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-06-06 14:40:50,787 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2017-06-06 14:40:50,792 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2017-06-06 14:40:50,793 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2017-06-06 14:40:50,794 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2017-06-06 14:40:50,795 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2017-06-06 14:40:50,799 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2017-06-06 14:40:50,800 - Group['hdfs'] {}
2017-06-06 14:40:50,800 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', u'hdfs']}
2017-06-06 14:40:50,801 - FS Type: 
2017-06-06 14:40:50,801 - Directory['/etc/hadoop'] {'mode': 0755}
2017-06-06 14:40:50,813 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'hdfs', 'group': 'hadoop'}
2017-06-06 14:40:50,814 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2017-06-06 14:40:50,835 - Initializing 2 repositories
2017-06-06 14:40:50,836 - Repository['HDP-2.5'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0', 'action': ['create'], 'components': [u'HDP', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP', 'mirror_list': None}
2017-06-06 14:40:50,842 - File['/etc/yum.repos.d/HDP.repo'] {'content': '[HDP-2.5]\nname=HDP-2.5\nbaseurl=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-06-06 14:40:50,843 - Repository['HDP-UTILS-1.1.0.21'] {'base_url': 'http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7', 'action': ['create'], 'components': [u'HDP-UTILS', 'main'], 'repo_template': '[{{repo_id}}]\nname={{repo_id}}\n{% if mirror_list %}mirrorlist={{mirror_list}}{% else %}baseurl={{base_url}}{% endif %}\n\npath=/\nenabled=1\ngpgcheck=0', 'repo_file_name': 'HDP-UTILS', 'mirror_list': None}
2017-06-06 14:40:50,845 - File['/etc/yum.repos.d/HDP-UTILS.repo'] {'content': '[HDP-UTILS-1.1.0.21]\nname=HDP-UTILS-1.1.0.21\nbaseurl=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7\n\npath=/\nenabled=1\ngpgcheck=0'}
2017-06-06 14:40:50,845 - Package['unzip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:50,929 - Skipping installation of existing package unzip
2017-06-06 14:40:50,930 - Package['curl'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:50,945 - Skipping installation of existing package curl
2017-06-06 14:40:50,945 - Package['hdp-select'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:50,959 - Skipping installation of existing package hdp-select
2017-06-06 14:40:51,735 - Package['zip'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:51,811 - Skipping installation of existing package zip
2017-06-06 14:40:51,812 - Package['extjs'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:51,825 - Skipping installation of existing package extjs
2017-06-06 14:40:51,826 - Package['oozie_2_5_3_0_37'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:51,839 - Skipping installation of existing package oozie_2_5_3_0_37
2017-06-06 14:40:51,840 - Package['falcon_2_5_3_0_37'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2017-06-06 14:40:51,853 - Installing package falcon_2_5_3_0_37 ('/usr/bin/yum -d 0 -e 0 -y install falcon_2_5_3_0_37')

Command failed after 1 tries&lt;/PRE&gt;&lt;P&gt;Any ideas?&lt;/P&gt;&lt;P&gt;I also ran:&lt;/P&gt;&lt;PRE&gt;yum-complete-transaction --cleanup-only

yum erase falcon

yum install falcon&lt;/PRE&gt;&lt;P&gt;but the same error happened again.&lt;/P&gt;&lt;P&gt;I then downloaded Falcon from Git and built it with Maven, but when I type "falcon" on the command line, it is not recognized.&lt;/P&gt;&lt;P&gt;Now the Ambari retry times out:&lt;/P&gt;&lt;PRE&gt;Python script has been killed due to timeout after waiting 1800 secs&lt;/PRE&gt;</description>
      <pubDate>Thu, 08 Jun 2017 04:15:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Ambari-Fails-to-install-Oozie/m-p/201534#M62541</guid>
      <dc:creator>alizadeh_uut1</dc:creator>
      <dc:date>2017-06-08T04:15:12Z</dc:date>
    </item>
    <item>
      <title>Re: Ambari Fails to install Oozie</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Ambari-Fails-to-install-Oozie/m-p/201535#M62542</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/13184/alizadehuut1.html" nodeid="13184"&gt;@Sara Alizadeh&lt;/A&gt;&lt;/P&gt;&lt;P&gt;Are you able to access the repository from the host where you are trying to install Falcon?&lt;/P&gt;&lt;PRE&gt;wget &lt;A href="http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0/hdp.repo" target="_blank"&gt;http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.5.3.0/hdp.repo&lt;/A&gt;&lt;/PRE&gt;&lt;P&gt;- I suspect the "There are unfinished transactions remaining." message appears because an old yum transaction was left unfinished. Do you get the same message every time you try to install Falcon?&lt;/P&gt;&lt;P&gt;- Regarding the timeout message: it might be caused by a slow network, or by a proxy configured on the host or at the yum level, either of which can slow the installation until the agent times out. By default the Ambari agent allows 1800 seconds to complete the task; see &lt;STRONG&gt;agent.package.install.task.timeout=1800&lt;/STRONG&gt; in &lt;STRONG&gt;"/etc/ambari-server/conf/ambari.properties"&lt;/STRONG&gt;. That is usually enough, but a network issue (internet slowness) or a proxy server issue can still cause the timeout.&lt;/P&gt;</description>
      <pubDate>Thu, 08 Jun 2017 08:02:55 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Ambari-Fails-to-install-Oozie/m-p/201535#M62542</guid>
      <dc:creator>jsensharma</dc:creator>
      <dc:date>2017-06-08T08:02:55Z</dc:date>
    </item>
    <item>
      <title>Re: Ambari Fails to install Oozie</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Ambari-Fails-to-install-Oozie/m-p/201536#M62543</link>
      <description>&lt;P&gt;Since the error message contains "invalid user: falcon", I tried to create the falcon user manually:&lt;/P&gt;&lt;PRE&gt;adduser -g falcon falcon&lt;/PRE&gt;&lt;P&gt;but it failed with an error about /etc/gshadow.lock.&lt;/P&gt;&lt;P&gt;It turned out that an earlier, incomplete attempt to create the falcon user had left /etc/gshadow.lock behind (normally the lock file is deleted once the user has been created). So:&lt;/P&gt;&lt;PRE&gt;rm /etc/gshadow.lock
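# Annotation, not part of the original reply: an interrupted useradd/groupadd
# can leave other account-database lock files behind besides gshadow.lock.
# A hedged check, assuming the standard shadow-utils lock paths, that lists
# any locks still remaining before retrying the install:
```shell
# report any stale account-database lock files left by an interrupted useradd
for f in /etc/passwd.lock /etc/shadow.lock /etc/group.lock /etc/gshadow.lock; do
    if test -e "$f"; then echo "stale lock: $f"; fi
done
echo "lock check done"
```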

yum install falcon&lt;/PRE&gt;&lt;P&gt;And the problem is gone!&lt;/P&gt;</description>
      <pubDate>Thu, 08 Jun 2017 16:11:57 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Ambari-Fails-to-install-Oozie/m-p/201536#M62543</guid>
      <dc:creator>alizadeh_uut1</dc:creator>
      <dc:date>2017-06-08T16:11:57Z</dc:date>
    </item>
  </channel>
</rss>

