Issue enabling Phoenix support with Custom Ambari Stack Version

Contributor

We have an HDP 2.5 cluster installed with Ambari 2.4.1.0, including HBase, which works fine. However, when I try to enable Phoenix support through Ambari, once the change is made and the nodes with the HBase Client are restarted, they all error while trying to install the Phoenix package. It appears that Ambari is inserting our custom stack version into the yum install command, which is causing the failure.

The custom stack version is 2.5.PROJECT, and as the logs below show, it is inserted into the 'yum install' command.
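For context, the pattern seems to come from how Ambari builds the package name out of the stack version. A minimal sketch (based on the params_linux.py snippet quoted in the replies below; the derivation of underscored_version is my assumption, and plain str.format stands in for Ambari's format() helper):

# Sketch of how the Phoenix package wildcard appears to be built.
# A custom stack version name leaks straight into the yum package pattern.
stack_version_unformatted = "2.5.PROJECT"  # our custom stack version
underscored_version = stack_version_unformatted.replace('.', '_')  # assumed: dots -> underscores

phoenix_package = "phoenix_{0}_*".format(underscored_version)
print(phoenix_package)  # phoenix_2_5_PROJECT_* -- the pattern yum fails on below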

Any help would be greatly appreciated.

Stderr:

Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_client.py", line 82, in <module>
    HbaseClient().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
    method(env)
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 680, in restart
    self.install(env)
  File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_client.py", line 37, in install
    self.configure(env)
  File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_client.py", line 42, in configure
    hbase(name='client')
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase.py", line 219, in hbase
    retry_count=params.agent_stack_retry_count)
  File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
    self.env.run()
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
    self.run_action(resource, action)
  File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
    provider_action()
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
    self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
    self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 83, in checked_call_with_retries
    return self._call_with_retries(cmd, is_checked=True, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 91, in _call_with_retries
    code, out = func(cmd, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
    result = function(command, **kwargs)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
    tries=tries, try_sleep=try_sleep)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
    result = _call(command, **kwargs_copy)
  File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
    raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install 'phoenix_2_5_PROJECT_*'' returned 1. Error: Nothing to do

Stdout:

2016-11-21 12:34:36,007 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-21 12:34:36,009 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-21 12:34:36,011 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-21 12:34:36,042 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-21 12:34:36,042 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-21 12:34:36,079 - checked_call returned (0, '')
2016-11-21 12:34:36,079 - Ensuring that hadoop has the correct symlink structure
2016-11-21 12:34:36,079 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-21 12:34:36,294 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-21 12:34:36,297 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-21 12:34:36,299 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-21 12:34:36,333 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-21 12:34:36,333 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-21 12:34:36,360 - checked_call returned (0, '')
2016-11-21 12:34:36,361 - Ensuring that hadoop has the correct symlink structure
2016-11-21 12:34:36,361 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-21 12:34:36,362 - Group['ranger'] {}
2016-11-21 12:34:36,364 - Group['hadoop'] {}
2016-11-21 12:34:36,365 - Group['users'] {}
2016-11-21 12:34:36,365 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-21 12:34:36,366 - Modifying user zookeeper
2016-11-21 12:34:36,379 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-21 12:34:36,381 - Modifying user ams
2016-11-21 12:34:36,390 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-11-21 12:34:36,391 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'ranger']}
2016-11-21 12:34:36,392 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-21 12:34:36,392 - Modifying user hdfs
2016-11-21 12:34:36,402 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-21 12:34:36,403 - Modifying user yarn
2016-11-21 12:34:36,411 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-21 12:34:36,412 - Modifying user mapred
2016-11-21 12:34:36,423 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-21 12:34:36,423 - Modifying user hbase
2016-11-21 12:34:36,431 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-11-21 12:34:36,434 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-11-21 12:34:36,444 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-11-21 12:34:36,444 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2016-11-21 12:34:36,445 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-11-21 12:34:36,446 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-11-21 12:34:36,451 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-11-21 12:34:36,451 - Group['hdfs'] {}
2016-11-21 12:34:36,452 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', 'hdfs', u'hdfs']}
2016-11-21 12:34:36,452 - Modifying user hdfs
2016-11-21 12:34:36,464 - FS Type: 
2016-11-21 12:34:36,464 - Directory['/etc/hadoop'] {'mode': 0755}
2016-11-21 12:34:36,482 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
2016-11-21 12:34:36,484 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2016-11-21 12:34:36,498 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-11-21 12:34:36,505 - Skipping Execute[('setenforce', '0')] due to only_if
2016-11-21 12:34:36,506 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2016-11-21 12:34:36,508 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2016-11-21 12:34:36,508 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2016-11-21 12:34:36,512 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'root'}
2016-11-21 12:34:36,513 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'root'}
2016-11-21 12:34:36,514 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-11-21 12:34:36,524 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-11-21 12:34:36,525 - Writing File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] because contents don't match
2016-11-21 12:34:36,525 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-11-21 12:34:36,526 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-11-21 12:34:36,530 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-11-21 12:34:36,533 - Writing File['/etc/hadoop/conf/topology_mappings.data'] because contents don't match
2016-11-21 12:34:36,534 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-11-21 12:34:36,799 - Stack Feature Version Info: stack_version=2.5.PROJECT, version=2.5.0.0-1245, current_cluster_version=2.5.0.0-1245 -> 2.5.0.0-1245
2016-11-21 12:34:36,815 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-21 12:34:36,818 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-21 12:34:36,820 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-21 12:34:36,846 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-21 12:34:36,846 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-21 12:34:36,875 - checked_call returned (0, '')
2016-11-21 12:34:36,875 - Ensuring that hadoop has the correct symlink structure
2016-11-21 12:34:36,875 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-21 12:34:36,881 - checked_call['hostid'] {}
2016-11-21 12:34:36,888 - checked_call returned (0, 'a8c00867')
2016-11-21 12:34:36,897 - Directory['/etc/hbase'] {'mode': 0755}
2016-11-21 12:34:36,898 - Directory['/usr/hdp/current/hbase-client/conf'] {'owner': 'hbase', 'group': 'hadoop', 'create_parents': True}
2016-11-21 12:34:36,899 - Directory['/tmp'] {'create_parents': True, 'mode': 0777}
2016-11-21 12:34:36,899 - Changing permission for /tmp from 1777 to 777
2016-11-21 12:34:36,899 - Directory['/tmp'] {'create_parents': True, 'cd_access': 'a'}
2016-11-21 12:34:36,900 - Execute[('chmod', '1777', u'/tmp')] {'sudo': True}
2016-11-21 12:34:36,906 - XmlConfig['hbase-site.xml'] {'owner': 'hbase', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hbase-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-11-21 12:34:36,926 - Generating config: /usr/hdp/current/hbase-client/conf/hbase-site.xml
2016-11-21 12:34:36,927 - File['/usr/hdp/current/hbase-client/conf/hbase-site.xml'] {'owner': 'hbase', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-21 12:34:36,972 - Writing File['/usr/hdp/current/hbase-client/conf/hbase-site.xml'] because contents don't match
2016-11-21 12:34:36,973 - XmlConfig['core-site.xml'] {'owner': 'hbase', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hbase-client/conf', 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'configurations': ...}
2016-11-21 12:34:36,987 - Generating config: /usr/hdp/current/hbase-client/conf/core-site.xml
2016-11-21 12:34:36,987 - File['/usr/hdp/current/hbase-client/conf/core-site.xml'] {'owner': 'hbase', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-21 12:34:37,026 - XmlConfig['hdfs-site.xml'] {'owner': 'hbase', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hbase-client/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2016-11-21 12:34:37,046 - Generating config: /usr/hdp/current/hbase-client/conf/hdfs-site.xml
2016-11-21 12:34:37,046 - File['/usr/hdp/current/hbase-client/conf/hdfs-site.xml'] {'owner': 'hbase', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-21 12:34:37,105 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2016-11-21 12:34:37,116 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2016-11-21 12:34:37,116 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-21 12:34:37,170 - XmlConfig['hbase-policy.xml'] {'owner': 'hbase', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hbase-client/conf', 'configuration_attributes': {}, 'configurations': {u'security.masterregion.protocol.acl': u'*', u'security.admin.protocol.acl': u'*', u'security.client.protocol.acl': u'*'}}
2016-11-21 12:34:37,177 - Generating config: /usr/hdp/current/hbase-client/conf/hbase-policy.xml
2016-11-21 12:34:37,178 - File['/usr/hdp/current/hbase-client/conf/hbase-policy.xml'] {'owner': 'hbase', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-21 12:34:37,187 - File['/usr/hdp/current/hbase-client/conf/hbase-env.sh'] {'content': InlineTemplate(...), 'owner': 'hbase', 'group': 'hadoop'}
2016-11-21 12:34:37,188 - Writing File['/usr/hdp/current/hbase-client/conf/hbase-env.sh'] because contents don't match
2016-11-21 12:34:37,188 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2016-11-21 12:34:37,191 - File['/etc/security/limits.d/hbase.conf'] {'content': Template('hbase.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2016-11-21 12:34:37,191 - TemplateConfig['/usr/hdp/current/hbase-client/conf/hadoop-metrics2-hbase.properties'] {'owner': 'hbase', 'template_tag': 'GANGLIA-RS'}
2016-11-21 12:34:37,197 - File['/usr/hdp/current/hbase-client/conf/hadoop-metrics2-hbase.properties'] {'content': Template('hadoop-metrics2-hbase.properties-GANGLIA-RS.j2'), 'owner': 'hbase', 'group': None, 'mode': None}
2016-11-21 12:34:37,198 - Writing File['/usr/hdp/current/hbase-client/conf/hadoop-metrics2-hbase.properties'] because contents don't match
2016-11-21 12:34:37,198 - TemplateConfig['/usr/hdp/current/hbase-client/conf/regionservers'] {'owner': 'hbase', 'template_tag': None}
2016-11-21 12:34:37,200 - File['/usr/hdp/current/hbase-client/conf/regionservers'] {'content': Template('regionservers.j2'), 'owner': 'hbase', 'group': None, 'mode': None}
2016-11-21 12:34:37,201 - TemplateConfig['/usr/hdp/current/hbase-client/conf/hbase_client_jaas.conf'] {'owner': 'hbase', 'template_tag': None}
2016-11-21 12:34:37,202 - File['/usr/hdp/current/hbase-client/conf/hbase_client_jaas.conf'] {'content': Template('hbase_client_jaas.conf.j2'), 'owner': 'hbase', 'group': None, 'mode': None}
2016-11-21 12:34:37,203 - File['/usr/hdp/current/hbase-client/conf/log4j.properties'] {'content': ..., 'owner': 'hbase', 'group': 'hadoop', 'mode': 0644}
2016-11-21 12:34:37,203 - Package['phoenix_2_5_PROJECT_*'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-11-21 12:34:37,315 - Installing package phoenix_2_5_PROJECT_* ('/usr/bin/yum -d 0 -e 0 -y install 'phoenix_2_5_PROJECT_*'')


Command failed after 1 tries
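For reference, yum's 'Error: Nothing to do' simply means that no available package matched the wildcard. A rough illustration of the mismatch, using glob matching and a hypothetical repo package named after the HDP build version (2.5.0.0-1245 per the log above) rather than the custom stack name:

from fnmatch import fnmatch

repo_packages = ["phoenix_2_5_0_0_1245"]  # hypothetical repo contents

print(any(fnmatch(p, "phoenix_2_5_PROJECT_*") for p in repo_packages))  # False -> "Nothing to do"
print(any(fnmatch(p, "phoenix_2_5_*") for p in repo_packages))          # True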
ACCEPTED SOLUTION

Contributor

To resolve this temporarily, on each node I edited /var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py

Replacing:

phoenix_package = format("phoenix_{underscored_version}_*")

With:

phoenix_package = format("phoenix_*")

After this, the deployment through Ambari works and I can run the examples posted here: Phoenix Test Examples
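One caveat with this change (picked up in the follow-up reply below): the bare wildcard no longer pins any stack version, so yum will consider any Phoenix package in the enabled repos. A quick sketch with hypothetical package names:

from fnmatch import fnmatch

candidates = ["phoenix_2_5_0_0_1245", "phoenix_2_4_2_0_258"]  # hypothetical

print([p for p in candidates if fnmatch(p, "phoenix_*")])  # both match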



Contributor

In some cases there is an issue with the above fix. Firstly, Ambari can overwrite the cached copy of params_linux.py, held under:

/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py

with the copy held on the Ambari server itself. Secondly, the fix is a little flaky in that yum tries to install packages named phoenix_* instead of phoenix_2_5_*, which was the original intention. This can be solved by instead replacing:

stack_version_unformatted = status_params.stack_version_unformatted

With:

stack_version_unformatted = status_params.stack_version_unformatted.replace('.PROJECT','')

Replace .PROJECT with whatever custom version suffix you are using.
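Putting it together, stripping the custom suffix before the version is underscored yields a pattern that matches the real HDP package. A minimal sketch under the same assumptions as above:

stack_version_unformatted = "2.5.PROJECT".replace('.PROJECT', '')  # -> "2.5"
underscored_version = stack_version_unformatted.replace('.', '_')  # -> "2_5"

phoenix_package = "phoenix_{0}_*".format(underscored_version)
print(phoenix_package)  # phoenix_2_5_* -- matches e.g. phoenix_2_5_0_0_1245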