Member since: 10-11-2016
Posts: 29
Kudos Received: 1
Solutions: 4
My Accepted Solutions
Views | Posted
---|---
1131 | 11-21-2016 02:29 PM
1802 | 11-04-2016 01:12 PM
1781 | 10-18-2016 09:21 AM
731 | 10-14-2016 01:05 PM
08-07-2017 12:09 PM
For anyone who comes to this from Google, as I did: you may also need to remove or back up the /hadoop/storm/supervisor/localstate directory. It seems this directory keeps track of what the supervisor is currently running, so if the stormdist folder no longer exists but there is still a record in localstate, the supervisor will keep trying to access that file on startup.
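A minimal sketch of that cleanup, assuming the paths above and that the supervisor has already been stopped (e.g. through Ambari); the backup location is just an example:

```bash
# Back up the supervisor's local state before removing it, so it can be
# restored if needed, then clear it so the stale topology record is forgotten.
BACKUP_DIR=/tmp/storm-localstate-$(date +%Y%m%d)   # example backup location
mkdir -p "$BACKUP_DIR"

cp -a /hadoop/storm/supervisor/localstate "$BACKUP_DIR/" \
  && rm -rf /hadoop/storm/supervisor/localstate

# Restart the supervisor through Ambari afterwards; it will recreate localstate.
```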
04-26-2017 10:47 AM
Thanks for the response, Frank. I guess my question really was how to easily move these files into the correct folder structure without it being a manual process of running "hdfs dfs" commands. Including all the data in a single Hive table and then letting Hive control what can be selected/seen is an interesting concept, and might be a way of doing what we are after without having to adapt the underlying structure of the data in HDFS. We could then create views on top of this single Hive table to split the data, and always insert into Hive internal tables if needed.
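A rough sketch of that view idea, with the table, column, and view names invented purely for illustration; it relies on Hive's INPUT__FILE__NAME virtual column to tell the source files apart, and on the recursive-directory settings so the external table can see files nested under /data/yyyy/mm/dd:

```bash
# Illustrative only: the schema and names are assumptions, not from this thread.
hive -e "
  SET hive.mapred.supports.subdirectories=true;
  SET mapreduce.input.fileinputformat.input.dir.recursive=true;

  CREATE EXTERNAL TABLE IF NOT EXISTS raw_data (col1 STRING, col2 STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  LOCATION '/data';

  -- One view per source file, filtered on the originating file name.
  CREATE VIEW IF NOT EXISTS file1_data AS
    SELECT * FROM raw_data WHERE INPUT__FILE__NAME LIKE '%/file1.csv';

  CREATE VIEW IF NOT EXISTS file2_data AS
    SELECT * FROM raw_data WHERE INPUT__FILE__NAME LIKE '%/file2.csv';
"
```

Note the two SET commands are per-session, so any later queries against the views would need the same settings (or have them set globally) to pick up the nested files.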
04-25-2017 02:18 PM
I have data being dropped into our HDFS file system on a daily basis into a single folder that contains multiple CSV files, such as:

/data/yyyy/mm/dd/file1.csv
/data/yyyy/mm/dd/file2.csv

I want to create a Hive external table over all the file1.csv files across all the folders under /data, but it doesn't seem to be currently possible to use a regex in the Hive external table command. My next thought was to copy the files into separate structures so Hive can parse these files individually, such as:

/data/file1/yyyy/mm/dd/file1.csv
/data/file2/yyyy/mm/dd/file2.csv

But I am not sure what the best way of doing this would be; whatever I choose would initially need to copy the bulk data between these folder structures and then be able to be scheduled to copy files over on a daily basis as new folders are created. Any help would be greatly appreciated; please let me know if any of the above is unclear.
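One hedged sketch of the copy itself, assuming the layout above and fixed file names; it mirrors each fileN.csv into the per-file structure and could be run once for the backfill and then on a schedule (cron, Oozie, etc.) for new daily folders:

```bash
#!/usr/bin/env bash
# Sketch only: paths follow the layout described in the question.
SRC_ROOT=/data

# Find every file1.csv/file2.csv under /data, skipping anything already copied
# into the per-file trees, and mirror it to /data/fileN/yyyy/mm/dd/fileN.csv.
hdfs dfs -ls -R "$SRC_ROOT" | awk '{print $NF}' \
  | grep -E '/file[12]\.csv$' \
  | grep -vE "^$SRC_ROOT/file[12]/" \
  | while read -r path; do
      name=$(basename "$path" .csv)              # file1 or file2
      rel=${path#"$SRC_ROOT"/}                   # yyyy/mm/dd/fileN.csv
      dest="$SRC_ROOT/$name/$(dirname "$rel")"
      hdfs dfs -mkdir -p "$dest"
      hdfs dfs -cp -f "$path" "$dest/"           # or 'hdfs dfs -mv' to move
    done
```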
Labels:
- Apache Hive
12-14-2016 11:19 AM
In some cases there is an issue with the above fix. Firstly, Ambari can overwrite the cached copy of params_linux.py, held under /var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py, with the copy held on the Ambari server itself. Secondly, the fix is a little flaky in that yum tries to install packages named phoenix_* instead of phoenix_2_5_*, which was the original intention. This can be solved by instead replacing:

stack_version_unformatted = status_params.stack_version_unformatted

with:

stack_version_unformatted = status_params.stack_version_unformatted.replace('.PROJECT','')

Replace .PROJECT with whatever custom version suffix is used.
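A hedged way to script this edit (the .PROJECT suffix is the one used in this thread; the Ambari server path below is an assumption based on a standard install, so verify it before touching it):

```bash
# Apply the .replace('.PROJECT','') fix in place, keeping a .bak backup.
# Run on each node carrying the HBase scripts (agent cache copy):
sed -i.bak \
  "s|stack_version_unformatted = status_params.stack_version_unformatted$|stack_version_unformatted = status_params.stack_version_unformatted.replace('.PROJECT','')|" \
  /var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py

# Optionally patch the server-side copy too (assumed standard path on the
# Ambari server), so the agent cache is not refreshed with the unpatched file:
# sed -i.bak "s|...same substitution as above...|" \
#   /var/lib/ambari-server/resources/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py
```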
11-21-2016 02:29 PM
To resolve this temporarily, on each node I edited /var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py, replacing:

phoenix_package = format("phoenix_{underscored_version}_*")

with:

phoenix_package = format("phoenix_*")

After this the deployment through Ambari works and I can run the examples posted here: Phoenix Test Examples
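A minimal sketch of the same edit as a one-liner, so it can be pushed to each node over ssh rather than edited by hand (same path as above; it leaves a .bak backup, since Ambari may refresh this cache):

```bash
# Swap the versioned Phoenix package pattern for a plain wildcard, in place.
sed -i.bak \
  's|phoenix_package = format("phoenix_{underscored_version}_\*")|phoenix_package = format("phoenix_*")|' \
  /var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py
```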
11-21-2016 02:26 PM
We have an HDP 2.5 cluster installed with Ambari 2.4.1.0 that includes deploying HBase, which works fine. However, when I try to enable Phoenix support through Ambari, once the change is made and the nodes with the HBase Client are restarted, they all error when trying to install the Phoenix package. It seems that Ambari is adding our custom stack version into the yum install command, which is causing the issue. The custom stack version is 2.5.PROJECT and, as can be seen from the logs below, this is inserted into the 'yum install' command (a quick yum check of what the repository actually provides is shown after the logs). Any help would be greatly appreciated.

Stderr:

Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_client.py", line 82, in <module>
HbaseClient().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 680, in restart
self.install(env)
File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_client.py", line 37, in install
self.configure(env)
File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_client.py", line 42, in configure
hbase(name='client')
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase.py", line 219, in hbase
retry_count=params.agent_stack_retry_count)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 83, in checked_call_with_retries
return self._call_with_retries(cmd, is_checked=True, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 91, in _call_with_retries
code, out = func(cmd, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install 'phoenix_2_5_PROJECT_*'' returned 1. Error: Nothing to do
Stdout:
2016-11-21 12:34:36,007 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-21 12:34:36,009 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-21 12:34:36,011 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-21 12:34:36,042 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-21 12:34:36,042 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-21 12:34:36,079 - checked_call returned (0, '')
2016-11-21 12:34:36,079 - Ensuring that hadoop has the correct symlink structure
2016-11-21 12:34:36,079 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-21 12:34:36,294 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-21 12:34:36,297 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-21 12:34:36,299 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-21 12:34:36,333 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-21 12:34:36,333 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-21 12:34:36,360 - checked_call returned (0, '')
2016-11-21 12:34:36,361 - Ensuring that hadoop has the correct symlink structure
2016-11-21 12:34:36,361 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-21 12:34:36,362 - Group['ranger'] {}
2016-11-21 12:34:36,364 - Group['hadoop'] {}
2016-11-21 12:34:36,365 - Group['users'] {}
2016-11-21 12:34:36,365 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-21 12:34:36,366 - Modifying user zookeeper
2016-11-21 12:34:36,379 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-21 12:34:36,381 - Modifying user ams
2016-11-21 12:34:36,390 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-11-21 12:34:36,391 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'ranger']}
2016-11-21 12:34:36,392 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-21 12:34:36,392 - Modifying user hdfs
2016-11-21 12:34:36,402 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-21 12:34:36,403 - Modifying user yarn
2016-11-21 12:34:36,411 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-21 12:34:36,412 - Modifying user mapred
2016-11-21 12:34:36,423 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-21 12:34:36,423 - Modifying user hbase
2016-11-21 12:34:36,431 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-11-21 12:34:36,434 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-11-21 12:34:36,444 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-11-21 12:34:36,444 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2016-11-21 12:34:36,445 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-11-21 12:34:36,446 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-11-21 12:34:36,451 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-11-21 12:34:36,451 - Group['hdfs'] {}
2016-11-21 12:34:36,452 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', 'hdfs', u'hdfs']}
2016-11-21 12:34:36,452 - Modifying user hdfs
2016-11-21 12:34:36,464 - FS Type:
2016-11-21 12:34:36,464 - Directory['/etc/hadoop'] {'mode': 0755}
2016-11-21 12:34:36,482 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
2016-11-21 12:34:36,484 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2016-11-21 12:34:36,498 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-11-21 12:34:36,505 - Skipping Execute[('setenforce', '0')] due to only_if
2016-11-21 12:34:36,506 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2016-11-21 12:34:36,508 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2016-11-21 12:34:36,508 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2016-11-21 12:34:36,512 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'root'}
2016-11-21 12:34:36,513 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'root'}
2016-11-21 12:34:36,514 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-11-21 12:34:36,524 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-11-21 12:34:36,525 - Writing File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] because contents don't match
2016-11-21 12:34:36,525 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-11-21 12:34:36,526 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-11-21 12:34:36,530 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-11-21 12:34:36,533 - Writing File['/etc/hadoop/conf/topology_mappings.data'] because contents don't match
2016-11-21 12:34:36,534 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-11-21 12:34:36,799 - Stack Feature Version Info: stack_version=2.5.PROJECT, version=2.5.0.0-1245, current_cluster_version=2.5.0.0-1245 -> 2.5.0.0-1245
2016-11-21 12:34:36,815 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-21 12:34:36,818 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-21 12:34:36,820 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-21 12:34:36,846 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-21 12:34:36,846 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-21 12:34:36,875 - checked_call returned (0, '')
2016-11-21 12:34:36,875 - Ensuring that hadoop has the correct symlink structure
2016-11-21 12:34:36,875 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-21 12:34:36,881 - checked_call['hostid'] {}
2016-11-21 12:34:36,888 - checked_call returned (0, 'a8c00867')
2016-11-21 12:34:36,897 - Directory['/etc/hbase'] {'mode': 0755}
2016-11-21 12:34:36,898 - Directory['/usr/hdp/current/hbase-client/conf'] {'owner': 'hbase', 'group': 'hadoop', 'create_parents': True}
2016-11-21 12:34:36,899 - Directory['/tmp'] {'create_parents': True, 'mode': 0777}
2016-11-21 12:34:36,899 - Changing permission for /tmp from 1777 to 777
2016-11-21 12:34:36,899 - Directory['/tmp'] {'create_parents': True, 'cd_access': 'a'}
2016-11-21 12:34:36,900 - Execute[('chmod', '1777', u'/tmp')] {'sudo': True}
2016-11-21 12:34:36,906 - XmlConfig['hbase-site.xml'] {'owner': 'hbase', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hbase-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-11-21 12:34:36,926 - Generating config: /usr/hdp/current/hbase-client/conf/hbase-site.xml
2016-11-21 12:34:36,927 - File['/usr/hdp/current/hbase-client/conf/hbase-site.xml'] {'owner': 'hbase', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-21 12:34:36,972 - Writing File['/usr/hdp/current/hbase-client/conf/hbase-site.xml'] because contents don't match
2016-11-21 12:34:36,973 - XmlConfig['core-site.xml'] {'owner': 'hbase', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hbase-client/conf', 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'configurations': ...}
2016-11-21 12:34:36,987 - Generating config: /usr/hdp/current/hbase-client/conf/core-site.xml
2016-11-21 12:34:36,987 - File['/usr/hdp/current/hbase-client/conf/core-site.xml'] {'owner': 'hbase', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-21 12:34:37,026 - XmlConfig['hdfs-site.xml'] {'owner': 'hbase', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hbase-client/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2016-11-21 12:34:37,046 - Generating config: /usr/hdp/current/hbase-client/conf/hdfs-site.xml
2016-11-21 12:34:37,046 - File['/usr/hdp/current/hbase-client/conf/hdfs-site.xml'] {'owner': 'hbase', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-21 12:34:37,105 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2016-11-21 12:34:37,116 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2016-11-21 12:34:37,116 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-21 12:34:37,170 - XmlConfig['hbase-policy.xml'] {'owner': 'hbase', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hbase-client/conf', 'configuration_attributes': {}, 'configurations': {u'security.masterregion.protocol.acl': u'*', u'security.admin.protocol.acl': u'*', u'security.client.protocol.acl': u'*'}}
2016-11-21 12:34:37,177 - Generating config: /usr/hdp/current/hbase-client/conf/hbase-policy.xml
2016-11-21 12:34:37,178 - File['/usr/hdp/current/hbase-client/conf/hbase-policy.xml'] {'owner': 'hbase', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-21 12:34:37,187 - File['/usr/hdp/current/hbase-client/conf/hbase-env.sh'] {'content': InlineTemplate(...), 'owner': 'hbase', 'group': 'hadoop'}
2016-11-21 12:34:37,188 - Writing File['/usr/hdp/current/hbase-client/conf/hbase-env.sh'] because contents don't match
2016-11-21 12:34:37,188 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2016-11-21 12:34:37,191 - File['/etc/security/limits.d/hbase.conf'] {'content': Template('hbase.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2016-11-21 12:34:37,191 - TemplateConfig['/usr/hdp/current/hbase-client/conf/hadoop-metrics2-hbase.properties'] {'owner': 'hbase', 'template_tag': 'GANGLIA-RS'}
2016-11-21 12:34:37,197 - File['/usr/hdp/current/hbase-client/conf/hadoop-metrics2-hbase.properties'] {'content': Template('hadoop-metrics2-hbase.properties-GANGLIA-RS.j2'), 'owner': 'hbase', 'group': None, 'mode': None}
2016-11-21 12:34:37,198 - Writing File['/usr/hdp/current/hbase-client/conf/hadoop-metrics2-hbase.properties'] because contents don't match
2016-11-21 12:34:37,198 - TemplateConfig['/usr/hdp/current/hbase-client/conf/regionservers'] {'owner': 'hbase', 'template_tag': None}
2016-11-21 12:34:37,200 - File['/usr/hdp/current/hbase-client/conf/regionservers'] {'content': Template('regionservers.j2'), 'owner': 'hbase', 'group': None, 'mode': None}
2016-11-21 12:34:37,201 - TemplateConfig['/usr/hdp/current/hbase-client/conf/hbase_client_jaas.conf'] {'owner': 'hbase', 'template_tag': None}
2016-11-21 12:34:37,202 - File['/usr/hdp/current/hbase-client/conf/hbase_client_jaas.conf'] {'content': Template('hbase_client_jaas.conf.j2'), 'owner': 'hbase', 'group': None, 'mode': None}
2016-11-21 12:34:37,203 - File['/usr/hdp/current/hbase-client/conf/log4j.properties'] {'content': ..., 'owner': 'hbase', 'group': 'hadoop', 'mode': 0644}
2016-11-21 12:34:37,203 - Package['phoenix_2_5_PROJECT_*'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-11-21 12:34:37,315 - Installing package phoenix_2_5_PROJECT_* ('/usr/bin/yum -d 0 -e 0 -y install 'phoenix_2_5_PROJECT_*'')
Command failed after 1 tries
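One quick, hedged way to see the mismatch described above (the output depends entirely on which repositories are configured on the node):

```bash
# The glob Ambari generates, including the custom stack suffix - matches nothing:
yum list available 'phoenix_2_5_PROJECT_*'

# What the HDP 2.5 repository actually provides - should show the real
# Phoenix package name(s), e.g. something matching phoenix_2_5_*:
yum list available 'phoenix*'
```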
Labels:
- Apache Ambari
- Apache Phoenix
11-04-2016 01:12 PM
Finally spotted my mistake: the SPNEGO Kerberos configuration in Ambari was incorrect. I had the principal set to HTTP/auth-001@PROJECT1 instead of HTTP/_HOST@PROJECT1.
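A quick way to double-check which principals the SPNEGO keytab actually contains (assuming the standard HDP keytab location; adjust the path if yours differs):

```bash
# The entries should be HTTP/<fqdn>@REALM, which is why Ambari should be given
# HTTP/_HOST@REALM rather than a single hard-coded host name.
klist -kt /etc/security/keytabs/spnego.service.keytab
```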
11-04-2016 09:35 AM (1 Kudo)
I am creating a new cluster through Ambari using blueprints, with Ambari 2.4.1.0 and HDP 2.5.0. The cluster is running FreeIPA and is Kerberised. Ranger deploys fine and no errors are logged in the Ambari logs for the deployment of the Ranger Admin or Ranger Usersync services; however, when starting the NameNode there are errors logged during startup and the HDFS service is not created in the Ranger Web UI. I have pasted the relevant logs below, along with some of the manual commands I have run on the nodes to troubleshoot. Any help would be greatly appreciated.

Namenode startup log stderr in Ambari:
2016-11-04 08:45:26,899 - Error in call for getting Ranger service:
No JSON object could be decoded
2016-11-04 08:54:10,812 - Error in call for creating Ranger service:
No JSON object could be decoded
2016-11-04 08:54:10,813 - Hdfs Repository creation failed in Ranger admin

Namenode startup log stdout in Ambari:
2016-11-04 08:44:53,766 - checked_call['/usr/bin/kinit -c /var/lib/ambari-agent/tmp/curl_krb_cache/ranger_admin_calls_hdfs_cc_7b6e79b8fdca257bc6249b42083c151b -kt /etc/security/keytabs/nn.service.keytab nn/-nn-001.project1@PROJECT1 > /dev/null'] {'user': 'hdfs'}
2016-11-04 08:44:53,855 - checked_call returned (0, '')
2016-11-04 08:44:53,856 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -L -k --negotiate -u : -b /var/lib/ambari-agent/tmp/cookies/710d18ea-f3ae-44d0-804f-b7111ab429e6 -c /var/lib/ambari-agent/tmp/cookies/710d18ea-f3ae-44d0-804f-b7111ab429e6 -w '"'"'%{http_code}'"'"' http://auth-001.project1:6080/login.jsp --connect-timeout 10 --max-time 12 -o /dev/null 1>/tmp/tmppqhoiG 2>/tmp/tmpLIiD5C''] {'quiet': False, 'env': {'KRB5CCNAME': '/var/lib/ambari-agent/tmp/curl_krb_cache/ranger_admin_calls_hdfs_cc_7b6e79b8fdca257bc6249b42083c151b'}}
2016-11-04 08:44:53,924 - call returned (0, '')
2016-11-04 08:44:53,925 - call['/usr/bin/klist -s /var/lib/ambari-agent/tmp/curl_krb_cache/ranger_admin_calls_hdfs_cc_7b6e79b8fdca257bc6249b42083c151b'] {'user': 'hdfs'}
2016-11-04 08:44:53,980 - call returned (0, '')
2016-11-04 08:44:53,980 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -L -k --negotiate -u : -b /var/lib/ambari-agent/tmp/cookies/3dbe7f89-811d-4dc5-be44-1dac2a6ac2aa -c /var/lib/ambari-agent/tmp/cookies/3dbe7f89-811d-4dc5-be44-1dac2a6ac2aa '"'"'http://auth-001.project1:6080/service/public/v2/api/service?serviceName=PROJECT1_Cluster_hadoop&serviceType=hdfs&isEnabled=true'"'"' --connect-timeout 10 --max-time 12 -X GET 1>/tmp/tmpAMnDmH 2>/tmp/tmp6PLCo5''] {'quiet': False, 'env': {'KRB5CCNAME': '/var/lib/ambari-agent/tmp/curl_krb_cache/ranger_admin_calls_hdfs_cc_7b6e79b8fdca257bc6249b42083c151b'}}
2016-11-04 08:44:54,054 - call returned (0, '')
2016-11-04 08:44:54,055 - Will retry 4 time(s), caught exception: Error in call for getting Ranger service:
No JSON object could be decoded. Sleeping for 8 sec(s)
xa_portal.log from the Ranger admin machine auth-001:
2016-11-04 08:54:10,828 [http-bio-6080-exec-5] WARN apache.ranger.security.web.filter.RangerKrbFilter (RangerKrbFilter.java:494) - Authentication exception: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos credentails)
org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos credentails)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:400)
at org.apache.ranger.security.web.filter.RangerKrbFilter.doFilter(RangerKrbFilter.java:449)
at org.apache.ranger.security.web.filter.RangerKRBAuthenticationFilter.doFilter(RangerKRBAuthenticationFilter.java:285)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:54)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:45)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.apache.ranger.security.web.filter.RangerSSOAuthenticationFilter.doFilter(RangerSSOAuthenticationFilter.java:211)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilter(BasicAuthenticationFilter.java:150)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:183)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:105)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:259)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:505)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:169)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:956)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:436)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1078)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:625)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos credentails)
at sun.security.jgss.krb5.Krb5AcceptCredential.getInstance(Krb5AcceptCredential.java:87)
at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:127)
at sun.security.jgss.GSSManagerImpl.getCredentialElement(GSSManagerImpl.java:193)
at sun.security.jgss.spnego.SpNegoMechFactory.getCredentialElement(SpNegoMechFactory.java:142)
at sun.security.jgss.GSSManagerImpl.getCredentialElement(GSSManagerImpl.java:193)
at sun.security.jgss.GSSCredentialImpl.add(GSSCredentialImpl.java:427)
at sun.security.jgss.GSSCredentialImpl.<init>(GSSCredentialImpl.java:77)
at sun.security.jgss.GSSManagerImpl.createCredential(GSSManagerImpl.java:160)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:357)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:349)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:349)
... 38 more
Manual klist of the Kerberos ticket cache used by Ambari on nn-001:
/usr/bin/klist /var/lib/ambari-agent/tmp/curl_krb_cache/ranger_admin_calls_hdfs_cc_7b6e79b8fdca257bc6249b42083c151b
Ticket cache: FILE:/var/lib/ambari-agent/tmp/curl_krb_cache/ranger_admin_calls_hdfs_cc_7b6e79b8fdca257bc6249b42083c151b
Default principal: nn/nn-001.project1@PROJECT1
Valid starting Expires Service principal
04/11/16 08:54:10 05/11/16 08:54:10 krbtgt/PROJECT1@PROJECT1
04/11/16 08:54:10 05/11/16 08:54:10 HTTP/auth-001.project1@PROJECT1
Manual run of the curl command used by Ambari to query Ranger services on nn-001:
curl -L -k --negotiate -u : -b /var/lib/ambari-agent/tmp/cookies/3dbe7f89-811d-4dc5-be44-1dac2a6ac2aa -c /var/lib/ambari-agent/tmp/cookies/3dbe7f89-811d-4dc5-be44-1dac2a6ac2aa 'http://auth-001.project1:6080/service/public/v2/api/service?serviceName=PROJECT1_Cluster_hadoop&serviceType=hdfs&isEnabled=true' --connect-timeout 10 --max-time 12 -X GET
<html><head><title>Apache Tomcat/7.0.68 - Error report</title><style><!--H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}A.name {color : black;}HR {color : #525D76;}--></style> </head><body><h1>HTTP Status 403 - GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos credentails)</h1><HR size="1" noshade="noshade"><p><b>type</b> Status report</p><p><b>message</b> <u>GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos credentails)</u></p><p><b>description</b> <u>Access to the specified resource has been forbidden.</u></p><HR size="1" noshade="noshade"><h3>Apache Tomcat/7.0.68</h3></body></html>
The blueprint is configured to set xasecure.audit.jaas.Client.option.keyTab to /etc/security/keytabs/rangeradmin.service.keytab and the principal to rangeradmin/_HOST@PROJECT1.

klist -kt /etc/security/keytabs/rangeradmin.service.keytab
Keytab name: FILE:/etc/security/keytabs/rangeradmin.service.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 03/11/16 16:47:02 rangeradmin/auth-001.project1@PROJECT1
1 03/11/16 16:47:02 rangeradmin/auth-001.project1@PROJECT1
1 03/11/16 16:47:02 rangeradmin/auth-001.project1@PROJECT1
1 03/11/16 16:47:02 rangeradmin/auth-001.project1@PROJECT1
10-18-2016 09:21 AM
It seems the version of Metron I was using had some inconsistencies with the fieldTransformation used in the enrichment config; it didn't recognise STELLAR as the transformation language. I downloaded the latest version of the source code (0.2.1BETA instead of 0.2.0BETA), followed the original process for building the full cluster and configuring Metron to add a telemetry source, and after this I could follow the steps to add the threat intel configuration; it is now enriching the data correctly. Thanks for all your help along the way, @cduby, it has been very much appreciated.
10-18-2016 07:42 AM
@cduby Apologies, I wasn't very clear in my description of the problem. The threat intelligence has loaded into HBase fine; I can scan the 'threatintel' table in HBase and it returns the CSV threat data that was uploaded. However, when I ingest Squid logs, they still look identical to how they did before I added the threat enrichment to the Squid enrichment config.