Member since: 10-11-2016
Posts: 29
Kudos Received: 1
Solutions: 4
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| | 421 | 11-21-2016 02:29 PM |
| | 838 | 11-04-2016 01:12 PM |
| | 824 | 10-18-2016 09:21 AM |
| | 266 | 10-14-2016 01:05 PM |
08-07-2017
12:09 PM
For anyone who comes to this from Google, as I did: you may also need to remove or back up the /hadoop/storm/supervisor/localstate directory. This directory seems to keep track of what the supervisor is currently running, so if the stormdist folder doesn't exist but there is still a record in localstate, the supervisor will keep trying to access the missing file on startup.
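A minimal sketch of that clean-up, assuming storm.local.dir is /hadoop/storm as above and that the supervisor is stopped (e.g. through Ambari) first; the backup location is illustrative:
# Back up rather than delete, so the state can be restored if something else breaks.
BACKUP=/tmp/storm-supervisor-backup-$(date +%Y%m%d)
sudo mkdir -p "$BACKUP"
sudo mv /hadoop/storm/supervisor/localstate "$BACKUP/"
sudo mv /hadoop/storm/supervisor/stormdist "$BACKUP/" 2>/dev/null   # if not already removed
# Restart the supervisor; it recreates both directories and re-downloads topology code from Nimbus.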
08-07-2017
11:34 AM
I am having issues with changing Kerberos settings in Ambari 2.5.1.0 and hope someone can help. The Kerberos principal for the Ambari Smoke User is set incorrectly in the Ambari Web UI to ${cluster-env/smokeuser}${principal_suffix}@${realm}. In the keytab the actual principal is named ${cluster-env/smokeuser}@${realm}. I have tried both removing the ${principal_suffix} section from the Ambari Smoke User principal and setting principal_suffix to blank. However, once I save the edits and then refresh the page, the old configuration is back. Looking in Chrome's developer tools, when the save button is pressed I get a 404 response from the following URL, but this isn't surfaced in Ambari, so to the user it looks like the save worked correctly. http://ambari-server:8082/api/v1/clusters/<<cluster_name>>/artifacts/kerberos_descriptor
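One way to dig into that 404 is to query the artifact endpoint directly; if the artifact does not exist at all, the PUT the UI issues has nothing to update. A hedged sketch, with the credentials, port and cluster name all as placeholders:
# Does the kerberos_descriptor artifact exist for this cluster?
curl -u admin:admin -H "X-Requested-By: ambari" "http://ambari-server:8082/api/v1/clusters/<<cluster_name>>/artifacts/kerberos_descriptor"
# If this also returns 404, creating the artifact by POSTing the desired descriptor JSON
# (rather than the PUT the UI sends) may be what is missing:
# curl -u admin:admin -H "X-Requested-By: ambari" -X POST -d @kerberos_descriptor.json \
#   "http://ambari-server:8082/api/v1/clusters/<<cluster_name>>/artifacts/kerberos_descriptor"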
Labels:
- Apache Ambari
04-26-2017
10:47 AM
Thanks for the response Frank. I guess my question really was how to easily move these files into the correct folder structure without it being a manual process of running "hdfs dfs" commands. Including all the data in a single Hive table and then letting Hive control what can be selected/seen is an interesting concept; that might be a way of doing what we are after without having to adapt the underlying structure of the data in HDFS. We could then create views on top of this single Hive table to split the data, and always insert into Hive internal tables if needed.
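Purely to illustrate that single-table idea (this is not something from the thread), a sketch using Hive's INPUT__FILE__NAME virtual column to separate rows by their source file; the table name, column names and delimiter are invented, and the two SET options are there so Hive reads the nested yyyy/mm/dd subdirectories:
# Hypothetical sketch: one external table over /data, filtered by source file name.
hive -e "
SET hive.mapred.supports.subdirectories=true;
SET mapreduce.input.fileinputformat.input.dir.recursive=true;
CREATE EXTERNAL TABLE IF NOT EXISTS raw_data (col1 STRING, col2 STRING)
  ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
  LOCATION '/data';
-- Only rows that came from a file1.csv; a view could wrap this WHERE clause.
SELECT * FROM raw_data WHERE INPUT__FILE__NAME LIKE '%/file1.csv' LIMIT 10;
"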
04-25-2017
02:18 PM
I have some data being dropped into our HDFS file system on a daily basis, into a single folder containing multiple CSV files, such as:
/data/yyyy/mm/dd/file1.csv
/data/yyyy/mm/dd/file2.csv
Now I want to create a Hive external table over all the file1.csv files across all the folders under /data, and it doesn't seem to be currently possible to use a regex in the Hive external table command. My next thought was to copy the files into separate structures so Hive can parse these files individually, such as:
/data/file1/yyyy/mm/dd/file1.csv
/data/file2/yyyy/mm/dd/file2.csv
But I am not sure what the best way of doing this would be. Whatever I choose would initially need to copy bulk data between these folder structures, and then be able to be scheduled to copy files over on a daily basis when new folders are created. Any help would be greatly appreciated; please let me know if any of the above is unclear.
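As a starting point for the bulk copy, something along these lines with plain hdfs dfs commands is one option; it is only a sketch, and the globs, target layout and scheduling note all assume the paths described above:
# Illustrative one-off bulk copy: mirror every file1.csv under /data/yyyy/mm/dd/
# into /data/file1/yyyy/mm/dd/ (repeat with file2 for the second structure).
for src in $(hdfs dfs -ls '/data/*/*/*/file1.csv' | awk '$NF ~ /file1\.csv$/ {print $NF}'); do
  dest_dir="/data/file1/$(echo "$src" | cut -d'/' -f3-5)"   # the yyyy/mm/dd part of the path
  hdfs dfs -mkdir -p "$dest_dir"
  hdfs dfs -cp "$src" "$dest_dir/"
done
# The same loop, restricted to the latest day's folder, could then be scheduled daily
# (cron, Oozie, or similar) once new folders start arriving.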
Tags:
- Data Processing
- Hive
Labels:
- Apache Hive
12-14-2016
11:19 AM
In some cases there is an issue with the above fix. Firstly, Ambari can overwrite the cached copy of params_linux.py (held on each node under /var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py) with the copy held on the Ambari server itself. Secondly, the fix is a little flaky in that yum tries to install packages named phoenix_* instead of phoenix_2_5_*, which was the original intention. This can be solved by instead replacing:
stack_version_unformatted = status_params.stack_version_unformatted
with:
stack_version_unformatted = status_params.stack_version_unformatted.replace('.PROJECT','')
Replace .PROJECT with whatever custom version suffix is in use.
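If it helps anyone scripting this, a hedged way to make that edit in place; the agent-cache path is the one above, while the server-side location is an assumption and may differ between Ambari versions:
# Apply the replacement in the cached script; keep a .bak copy in case Ambari re-syncs it.
# Swap .PROJECT for whatever custom suffix your stack uses.
sudo sed -i.bak \
  "s/^stack_version_unformatted = status_params.stack_version_unformatted$/stack_version_unformatted = status_params.stack_version_unformatted.replace('.PROJECT','')/" \
  /var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py
# If Ambari keeps pushing the old file back out, the same edit probably also needs to be made
# in the server's copy (commonly under /var/lib/ambari-server/resources/common-services/).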
11-21-2016
02:29 PM
To resolve this temporarily, on each node I edited (with vi) /var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py, replacing:
phoenix_package = format("phoenix_{underscored_version}_*")
with:
phoenix_package = format("phoenix_*")
After this the deployment through Ambari works and I can run the examples posted here: Phoenix Test Examples
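For doing the same edit across many nodes, a hedged sed one-liner (back the file up, since Ambari may re-sync it from the server):
# Replace the versioned Phoenix package pattern with a plain wildcard in the cached script.
sudo sed -i.bak \
  's|phoenix_package = format("phoenix_{underscored_version}_\*")|phoenix_package = format("phoenix_*")|' \
  /var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/params_linux.py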
11-21-2016
02:26 PM
We have an HDP 2.5 cluster installed with Ambari 2.4.1.0, which includes deploying HBase, and that works fine. However, when I try to enable Phoenix support through Ambari, once the change is made and the nodes with the HBase Client are restarted, they all error when trying to install the Phoenix package. It seems Ambari is adding our custom stack version into the yum install command, which is causing the issue. The custom stack version is 2.5.PROJECT and, as can be seen from the logs below, this is inserted into the 'yum install' command. Any help would be greatly appreciated.
Stderr:
Traceback (most recent call last):
File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_client.py", line 82, in <module>
HbaseClient().execute()
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 280, in execute
method(env)
File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 680, in restart
self.install(env)
File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_client.py", line 37, in install
self.configure(env)
File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase_client.py", line 42, in configure
hbase(name='client')
File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
return fn(*args, **kwargs)
File "/var/lib/ambari-agent/cache/common-services/HBASE/0.96.0.2.0/package/scripts/hbase.py", line 219, in hbase
retry_count=params.agent_stack_retry_count)
File "/usr/lib/python2.6/site-packages/resource_management/core/base.py", line 155, in __init__
self.env.run()
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 160, in run
self.run_action(resource, action)
File "/usr/lib/python2.6/site-packages/resource_management/core/environment.py", line 124, in run_action
provider_action()
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 54, in action_install
self.install_package(package_name, self.resource.use_repos, self.resource.skip_repos)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/yumrpm.py", line 49, in install_package
self.checked_call_with_retries(cmd, sudo=True, logoutput=self.get_logoutput())
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 83, in checked_call_with_retries
return self._call_with_retries(cmd, is_checked=True, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/providers/package/__init__.py", line 91, in _call_with_retries
code, out = func(cmd, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 71, in inner
result = function(command, **kwargs)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 93, in checked_call
tries=tries, try_sleep=try_sleep)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 141, in _call_wrapper
result = _call(command, **kwargs_copy)
File "/usr/lib/python2.6/site-packages/resource_management/core/shell.py", line 294, in _call
raise Fail(err_msg)
resource_management.core.exceptions.Fail: Execution of '/usr/bin/yum -d 0 -e 0 -y install 'phoenix_2_5_PROJECT_*'' returned 1. Error: Nothing to do
Stdout:
2016-11-21 12:34:36,007 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-21 12:34:36,009 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-21 12:34:36,011 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-21 12:34:36,042 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-21 12:34:36,042 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-21 12:34:36,079 - checked_call returned (0, '')
2016-11-21 12:34:36,079 - Ensuring that hadoop has the correct symlink structure
2016-11-21 12:34:36,079 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-21 12:34:36,294 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-21 12:34:36,297 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-21 12:34:36,299 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-21 12:34:36,333 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-21 12:34:36,333 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-21 12:34:36,360 - checked_call returned (0, '')
2016-11-21 12:34:36,361 - Ensuring that hadoop has the correct symlink structure
2016-11-21 12:34:36,361 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-21 12:34:36,362 - Group['ranger'] {}
2016-11-21 12:34:36,364 - Group['hadoop'] {}
2016-11-21 12:34:36,365 - Group['users'] {}
2016-11-21 12:34:36,365 - User['zookeeper'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-21 12:34:36,366 - Modifying user zookeeper
2016-11-21 12:34:36,379 - User['ams'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-21 12:34:36,381 - Modifying user ams
2016-11-21 12:34:36,390 - User['ambari-qa'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'users']}
2016-11-21 12:34:36,391 - User['ranger'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'ranger']}
2016-11-21 12:34:36,392 - User['hdfs'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-21 12:34:36,392 - Modifying user hdfs
2016-11-21 12:34:36,402 - User['yarn'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-21 12:34:36,403 - Modifying user yarn
2016-11-21 12:34:36,411 - User['mapred'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-21 12:34:36,412 - Modifying user mapred
2016-11-21 12:34:36,423 - User['hbase'] {'gid': 'hadoop', 'fetch_nonlocal_groups': True, 'groups': [u'hadoop']}
2016-11-21 12:34:36,423 - Modifying user hbase
2016-11-21 12:34:36,431 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-11-21 12:34:36,434 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] {'not_if': '(test $(id -u ambari-qa) -gt 1000) || (false)'}
2016-11-21 12:34:36,444 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh ambari-qa /tmp/hadoop-ambari-qa,/tmp/hsperfdata_ambari-qa,/home/ambari-qa,/tmp/ambari-qa,/tmp/sqoop-ambari-qa'] due to not_if
2016-11-21 12:34:36,444 - Directory['/tmp/hbase-hbase'] {'owner': 'hbase', 'create_parents': True, 'mode': 0775, 'cd_access': 'a'}
2016-11-21 12:34:36,445 - File['/var/lib/ambari-agent/tmp/changeUid.sh'] {'content': StaticFile('changeToSecureUid.sh'), 'mode': 0555}
2016-11-21 12:34:36,446 - Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] {'not_if': '(test $(id -u hbase) -gt 1000) || (false)'}
2016-11-21 12:34:36,451 - Skipping Execute['/var/lib/ambari-agent/tmp/changeUid.sh hbase /home/hbase,/tmp/hbase,/usr/bin/hbase,/var/log/hbase,/tmp/hbase-hbase'] due to not_if
2016-11-21 12:34:36,451 - Group['hdfs'] {}
2016-11-21 12:34:36,452 - User['hdfs'] {'fetch_nonlocal_groups': True, 'groups': [u'hadoop', 'hdfs', u'hdfs']}
2016-11-21 12:34:36,452 - Modifying user hdfs
2016-11-21 12:34:36,464 - FS Type:
2016-11-21 12:34:36,464 - Directory['/etc/hadoop'] {'mode': 0755}
2016-11-21 12:34:36,482 - File['/usr/hdp/current/hadoop-client/conf/hadoop-env.sh'] {'content': InlineTemplate(...), 'owner': 'root', 'group': 'hadoop'}
2016-11-21 12:34:36,484 - Directory['/var/lib/ambari-agent/tmp/hadoop_java_io_tmpdir'] {'owner': 'hdfs', 'group': 'hadoop', 'mode': 01777}
2016-11-21 12:34:36,498 - Execute[('setenforce', '0')] {'not_if': '(! which getenforce ) || (which getenforce && getenforce | grep -q Disabled)', 'sudo': True, 'only_if': 'test -f /selinux/enforce'}
2016-11-21 12:34:36,505 - Skipping Execute[('setenforce', '0')] due to only_if
2016-11-21 12:34:36,506 - Directory['/var/log/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'hadoop', 'mode': 0775, 'cd_access': 'a'}
2016-11-21 12:34:36,508 - Directory['/var/run/hadoop'] {'owner': 'root', 'create_parents': True, 'group': 'root', 'cd_access': 'a'}
2016-11-21 12:34:36,508 - Directory['/tmp/hadoop-hdfs'] {'owner': 'hdfs', 'create_parents': True, 'cd_access': 'a'}
2016-11-21 12:34:36,512 - File['/usr/hdp/current/hadoop-client/conf/commons-logging.properties'] {'content': Template('commons-logging.properties.j2'), 'owner': 'root'}
2016-11-21 12:34:36,513 - File['/usr/hdp/current/hadoop-client/conf/health_check'] {'content': Template('health_check.j2'), 'owner': 'root'}
2016-11-21 12:34:36,514 - File['/usr/hdp/current/hadoop-client/conf/log4j.properties'] {'content': ..., 'owner': 'hdfs', 'group': 'hadoop', 'mode': 0644}
2016-11-21 12:34:36,524 - File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] {'content': Template('hadoop-metrics2.properties.j2'), 'owner': 'hdfs', 'group': 'hadoop'}
2016-11-21 12:34:36,525 - Writing File['/usr/hdp/current/hadoop-client/conf/hadoop-metrics2.properties'] because contents don't match
2016-11-21 12:34:36,525 - File['/usr/hdp/current/hadoop-client/conf/task-log4j.properties'] {'content': StaticFile('task-log4j.properties'), 'mode': 0755}
2016-11-21 12:34:36,526 - File['/usr/hdp/current/hadoop-client/conf/configuration.xsl'] {'owner': 'hdfs', 'group': 'hadoop'}
2016-11-21 12:34:36,530 - File['/etc/hadoop/conf/topology_mappings.data'] {'owner': 'hdfs', 'content': Template('topology_mappings.data.j2'), 'only_if': 'test -d /etc/hadoop/conf', 'group': 'hadoop'}
2016-11-21 12:34:36,533 - Writing File['/etc/hadoop/conf/topology_mappings.data'] because contents don't match
2016-11-21 12:34:36,534 - File['/etc/hadoop/conf/topology_script.py'] {'content': StaticFile('topology_script.py'), 'only_if': 'test -d /etc/hadoop/conf', 'mode': 0755}
2016-11-21 12:34:36,799 - Stack Feature Version Info: stack_version=2.5.PROJECT, version=2.5.0.0-1245, current_cluster_version=2.5.0.0-1245 -> 2.5.0.0-1245
2016-11-21 12:34:36,815 - The hadoop conf dir /usr/hdp/current/hadoop-client/conf exists, will call conf-select on it for version 2.5.0.0-1245
2016-11-21 12:34:36,818 - Checking if need to create versioned conf dir /etc/hadoop/2.5.0.0-1245/0
2016-11-21 12:34:36,820 - call[('ambari-python-wrap', u'/usr/bin/conf-select', 'create-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False, 'stderr': -1}
2016-11-21 12:34:36,846 - call returned (1, '/etc/hadoop/2.5.0.0-1245/0 exist already', '')
2016-11-21 12:34:36,846 - checked_call[('ambari-python-wrap', u'/usr/bin/conf-select', 'set-conf-dir', '--package', 'hadoop', '--stack-version', '2.5.0.0-1245', '--conf-version', '0')] {'logoutput': False, 'sudo': True, 'quiet': False}
2016-11-21 12:34:36,875 - checked_call returned (0, '')
2016-11-21 12:34:36,875 - Ensuring that hadoop has the correct symlink structure
2016-11-21 12:34:36,875 - Using hadoop conf dir: /usr/hdp/current/hadoop-client/conf
2016-11-21 12:34:36,881 - checked_call['hostid'] {}
2016-11-21 12:34:36,888 - checked_call returned (0, 'a8c00867')
2016-11-21 12:34:36,897 - Directory['/etc/hbase'] {'mode': 0755}
2016-11-21 12:34:36,898 - Directory['/usr/hdp/current/hbase-client/conf'] {'owner': 'hbase', 'group': 'hadoop', 'create_parents': True}
2016-11-21 12:34:36,899 - Directory['/tmp'] {'create_parents': True, 'mode': 0777}
2016-11-21 12:34:36,899 - Changing permission for /tmp from 1777 to 777
2016-11-21 12:34:36,899 - Directory['/tmp'] {'create_parents': True, 'cd_access': 'a'}
2016-11-21 12:34:36,900 - Execute[('chmod', '1777', u'/tmp')] {'sudo': True}
2016-11-21 12:34:36,906 - XmlConfig['hbase-site.xml'] {'owner': 'hbase', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hbase-client/conf', 'configuration_attributes': {}, 'configurations': ...}
2016-11-21 12:34:36,926 - Generating config: /usr/hdp/current/hbase-client/conf/hbase-site.xml
2016-11-21 12:34:36,927 - File['/usr/hdp/current/hbase-client/conf/hbase-site.xml'] {'owner': 'hbase', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-21 12:34:36,972 - Writing File['/usr/hdp/current/hbase-client/conf/hbase-site.xml'] because contents don't match
2016-11-21 12:34:36,973 - XmlConfig['core-site.xml'] {'owner': 'hbase', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hbase-client/conf', 'configuration_attributes': {u'final': {u'fs.defaultFS': u'true'}}, 'configurations': ...}
2016-11-21 12:34:36,987 - Generating config: /usr/hdp/current/hbase-client/conf/core-site.xml
2016-11-21 12:34:36,987 - File['/usr/hdp/current/hbase-client/conf/core-site.xml'] {'owner': 'hbase', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-21 12:34:37,026 - XmlConfig['hdfs-site.xml'] {'owner': 'hbase', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hbase-client/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2016-11-21 12:34:37,046 - Generating config: /usr/hdp/current/hbase-client/conf/hdfs-site.xml
2016-11-21 12:34:37,046 - File['/usr/hdp/current/hbase-client/conf/hdfs-site.xml'] {'owner': 'hbase', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-21 12:34:37,105 - XmlConfig['hdfs-site.xml'] {'owner': 'hdfs', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hadoop-client/conf', 'configuration_attributes': {u'final': {u'dfs.support.append': u'true', u'dfs.datanode.data.dir': u'true', u'dfs.namenode.http-address': u'true', u'dfs.namenode.name.dir': u'true', u'dfs.webhdfs.enabled': u'true', u'dfs.datanode.failed.volumes.tolerated': u'true'}}, 'configurations': ...}
2016-11-21 12:34:37,116 - Generating config: /usr/hdp/current/hadoop-client/conf/hdfs-site.xml
2016-11-21 12:34:37,116 - File['/usr/hdp/current/hadoop-client/conf/hdfs-site.xml'] {'owner': 'hdfs', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-21 12:34:37,170 - XmlConfig['hbase-policy.xml'] {'owner': 'hbase', 'group': 'hadoop', 'conf_dir': '/usr/hdp/current/hbase-client/conf', 'configuration_attributes': {}, 'configurations': {u'security.masterregion.protocol.acl': u'*', u'security.admin.protocol.acl': u'*', u'security.client.protocol.acl': u'*'}}
2016-11-21 12:34:37,177 - Generating config: /usr/hdp/current/hbase-client/conf/hbase-policy.xml
2016-11-21 12:34:37,178 - File['/usr/hdp/current/hbase-client/conf/hbase-policy.xml'] {'owner': 'hbase', 'content': InlineTemplate(...), 'group': 'hadoop', 'mode': None, 'encoding': 'UTF-8'}
2016-11-21 12:34:37,187 - File['/usr/hdp/current/hbase-client/conf/hbase-env.sh'] {'content': InlineTemplate(...), 'owner': 'hbase', 'group': 'hadoop'}
2016-11-21 12:34:37,188 - Writing File['/usr/hdp/current/hbase-client/conf/hbase-env.sh'] because contents don't match
2016-11-21 12:34:37,188 - Directory['/etc/security/limits.d'] {'owner': 'root', 'create_parents': True, 'group': 'root'}
2016-11-21 12:34:37,191 - File['/etc/security/limits.d/hbase.conf'] {'content': Template('hbase.conf.j2'), 'owner': 'root', 'group': 'root', 'mode': 0644}
2016-11-21 12:34:37,191 - TemplateConfig['/usr/hdp/current/hbase-client/conf/hadoop-metrics2-hbase.properties'] {'owner': 'hbase', 'template_tag': 'GANGLIA-RS'}
2016-11-21 12:34:37,197 - File['/usr/hdp/current/hbase-client/conf/hadoop-metrics2-hbase.properties'] {'content': Template('hadoop-metrics2-hbase.properties-GANGLIA-RS.j2'), 'owner': 'hbase', 'group': None, 'mode': None}
2016-11-21 12:34:37,198 - Writing File['/usr/hdp/current/hbase-client/conf/hadoop-metrics2-hbase.properties'] because contents don't match
2016-11-21 12:34:37,198 - TemplateConfig['/usr/hdp/current/hbase-client/conf/regionservers'] {'owner': 'hbase', 'template_tag': None}
2016-11-21 12:34:37,200 - File['/usr/hdp/current/hbase-client/conf/regionservers'] {'content': Template('regionservers.j2'), 'owner': 'hbase', 'group': None, 'mode': None}
2016-11-21 12:34:37,201 - TemplateConfig['/usr/hdp/current/hbase-client/conf/hbase_client_jaas.conf'] {'owner': 'hbase', 'template_tag': None}
2016-11-21 12:34:37,202 - File['/usr/hdp/current/hbase-client/conf/hbase_client_jaas.conf'] {'content': Template('hbase_client_jaas.conf.j2'), 'owner': 'hbase', 'group': None, 'mode': None}
2016-11-21 12:34:37,203 - File['/usr/hdp/current/hbase-client/conf/log4j.properties'] {'content': ..., 'owner': 'hbase', 'group': 'hadoop', 'mode': 0644}
2016-11-21 12:34:37,203 - Package['phoenix_2_5_PROJECT_*'] {'retry_on_repo_unavailability': False, 'retry_count': 5}
2016-11-21 12:34:37,315 - Installing package phoenix_2_5_PROJECT_* ('/usr/bin/yum -d 0 -e 0 -y install 'phoenix_2_5_PROJECT_*'')
Command failed after 1 tries
Labels:
- Apache Ambari
- Apache Phoenix
11-04-2016
01:12 PM
Finally spotted my mistake: the SPNEGO Kerberos configuration in Ambari was incorrect. I had the principal set to HTTP/auth-001@PROJECT1 instead of HTTP/_HOST@PROJECT1.
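A quick way to sanity-check this on a node, assuming the standard HDP location for the SPNEGO keytab (adjust the path if yours differs):
# Each principal listed should be HTTP/<fqdn-of-this-host>@REALM, which is what _HOST expands to.
klist -kt /etc/security/keytabs/spnego.service.keytab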
11-04-2016
09:35 AM
1 Kudo
I am creating a new cluster through Ambari using blueprints, with Ambari 2.4.1.0 and HDP 2.5.0. The cluster is running FreeIPA and is Kerberised. Ranger deploys fine and no errors are logged in the Ambari logs for the deployment of the Ranger Admin or Ranger Usersync services; however, when starting the NameNode there are errors logged during startup and the HDFS service is not created in the Ranger Web UI. I have pasted the relevant logs below, along with some of the manual commands I have run on the nodes to troubleshoot. Any help would be greatly appreciated.
Namenode startup log stderr in Ambari:
2016-11-04 08:45:26,899 - Error in call for getting Ranger service:
No JSON object could be decoded
2016-11-04 08:54:10,812 - Error in call for creating Ranger service:
No JSON object could be decoded
2016-11-04 08:54:10,813 - Hdfs Repository creation failed in Ranger admin
Namenode startup log stdout in Ambari:
2016-11-04 08:44:53,766 - checked_call['/usr/bin/kinit -c /var/lib/ambari-agent/tmp/curl_krb_cache/ranger_admin_calls_hdfs_cc_7b6e79b8fdca257bc6249b42083c151b -kt /etc/security/keytabs/nn.service.keytab nn/-nn-001.project1@PROJECT1 > /dev/null'] {'user': 'hdfs'}
2016-11-04 08:44:53,855 - checked_call returned (0, '')
2016-11-04 08:44:53,856 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -L -k --negotiate -u : -b /var/lib/ambari-agent/tmp/cookies/710d18ea-f3ae-44d0-804f-b7111ab429e6 -c /var/lib/ambari-agent/tmp/cookies/710d18ea-f3ae-44d0-804f-b7111ab429e6 -w '"'"'%{http_code}'"'"' http://auth-001.project1:6080/login.jsp --connect-timeout 10 --max-time 12 -o /dev/null 1>/tmp/tmppqhoiG 2>/tmp/tmpLIiD5C''] {'quiet': False, 'env': {'KRB5CCNAME': '/var/lib/ambari-agent/tmp/curl_krb_cache/ranger_admin_calls_hdfs_cc_7b6e79b8fdca257bc6249b42083c151b'}}
2016-11-04 08:44:53,924 - call returned (0, '')
2016-11-04 08:44:53,925 - call['/usr/bin/klist -s /var/lib/ambari-agent/tmp/curl_krb_cache/ranger_admin_calls_hdfs_cc_7b6e79b8fdca257bc6249b42083c151b'] {'user': 'hdfs'}
2016-11-04 08:44:53,980 - call returned (0, '')
2016-11-04 08:44:53,980 - call['ambari-sudo.sh su hdfs -l -s /bin/bash -c 'curl -L -k --negotiate -u : -b /var/lib/ambari-agent/tmp/cookies/3dbe7f89-811d-4dc5-be44-1dac2a6ac2aa -c /var/lib/ambari-agent/tmp/cookies/3dbe7f89-811d-4dc5-be44-1dac2a6ac2aa '"'"'http://auth-001.project1:6080/service/public/v2/api/service?serviceName=PROJECT1_Cluster_hadoop&serviceType=hdfs&isEnabled=true'"'"' --connect-timeout 10 --max-time 12 -X GET 1>/tmp/tmpAMnDmH 2>/tmp/tmp6PLCo5''] {'quiet': False, 'env': {'KRB5CCNAME': '/var/lib/ambari-agent/tmp/curl_krb_cache/ranger_admin_calls_hdfs_cc_7b6e79b8fdca257bc6249b42083c151b'}}
2016-11-04 08:44:54,054 - call returned (0, '')
2016-11-04 08:44:54,055 - Will retry 4 time(s), caught exception: Error in call for getting Ranger service:
No JSON object could be decoded. Sleeping for 8 sec(s)
xa_portal.log from Ranger admin machine auth-001:
2016-11-04 08:54:10,828 [http-bio-6080-exec-5] WARN apache.ranger.security.web.filter.RangerKrbFilter (RangerKrbFilter.java:494) - Authentication exception: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos credentails)
org.apache.hadoop.security.authentication.client.AuthenticationException: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos credentails)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:400)
at org.apache.ranger.security.web.filter.RangerKrbFilter.doFilter(RangerKrbFilter.java:449)
at org.apache.ranger.security.web.filter.RangerKRBAuthenticationFilter.doFilter(RangerKRBAuthenticationFilter.java:285)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.servletapi.SecurityContextHolderAwareRequestFilter.doFilter(SecurityContextHolderAwareRequestFilter.java:54)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.savedrequest.RequestCacheAwareFilter.doFilter(RequestCacheAwareFilter.java:45)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.apache.ranger.security.web.filter.RangerSSOAuthenticationFilter.doFilter(RangerSSOAuthenticationFilter.java:211)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.www.BasicAuthenticationFilter.doFilter(BasicAuthenticationFilter.java:150)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.AbstractAuthenticationProcessingFilter.doFilter(AbstractAuthenticationProcessingFilter.java:183)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.authentication.logout.LogoutFilter.doFilter(LogoutFilter.java:105)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.context.SecurityContextPersistenceFilter.doFilter(SecurityContextPersistenceFilter.java:87)
at org.springframework.security.web.FilterChainProxy$VirtualFilterChain.doFilter(FilterChainProxy.java:342)
at org.springframework.security.web.FilterChainProxy.doFilterInternal(FilterChainProxy.java:192)
at org.springframework.security.web.FilterChainProxy.doFilter(FilterChainProxy.java:160)
at org.springframework.web.filter.DelegatingFilterProxy.invokeDelegate(DelegatingFilterProxy.java:346)
at org.springframework.web.filter.DelegatingFilterProxy.doFilter(DelegatingFilterProxy.java:259)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:241)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:208)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:220)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:122)
at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:505)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:169)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:103)
at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:956)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:116)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:436)
at org.apache.coyote.http11.AbstractHttp11Processor.process(AbstractHttp11Processor.java:1078)
at org.apache.coyote.AbstractProtocol$AbstractConnectionHandler.process(AbstractProtocol.java:625)
at org.apache.tomcat.util.net.JIoEndpoint$SocketProcessor.run(JIoEndpoint.java:316)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at org.apache.tomcat.util.threads.TaskThread$WrappingRunnable.run(TaskThread.java:61)
at java.lang.Thread.run(Thread.java:745)
Caused by: GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos credentails)
at sun.security.jgss.krb5.Krb5AcceptCredential.getInstance(Krb5AcceptCredential.java:87)
at sun.security.jgss.krb5.Krb5MechFactory.getCredentialElement(Krb5MechFactory.java:127)
at sun.security.jgss.GSSManagerImpl.getCredentialElement(GSSManagerImpl.java:193)
at sun.security.jgss.spnego.SpNegoMechFactory.getCredentialElement(SpNegoMechFactory.java:142)
at sun.security.jgss.GSSManagerImpl.getCredentialElement(GSSManagerImpl.java:193)
at sun.security.jgss.GSSCredentialImpl.add(GSSCredentialImpl.java:427)
at sun.security.jgss.GSSCredentialImpl.<init>(GSSCredentialImpl.java:77)
at sun.security.jgss.GSSManagerImpl.createCredential(GSSManagerImpl.java:160)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:357)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler$2.run(KerberosAuthenticationHandler.java:349)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.authentication.server.KerberosAuthenticationHandler.authenticate(KerberosAuthenticationHandler.java:349)
... 38 more
Manual klist of the Kerberos ticket cache used by Ambari on nn-001:
/usr/bin/klist /var/lib/ambari-agent/tmp/curl_krb_cache/ranger_admin_calls_hdfs_cc_7b6e79b8fdca257bc6249b42083c151b
Ticket cache: FILE:/var/lib/ambari-agent/tmp/curl_krb_cache/ranger_admin_calls_hdfs_cc_7b6e79b8fdca257bc6249b42083c151b
Default principal: nn/nn-001.project1@PROJECT1
Valid starting Expires Service principal
04/11/16 08:54:10 05/11/16 08:54:10 krbtgt/PROJECT1@PROJECT1
04/11/16 08:54:10 05/11/16 08:54:10 HTTP/auth-001.project1@PROJECT1
Manual run of the curl command used by Ambari to query Ranger services on nn-001:
curl -L -k --negotiate -u : -b /var/lib/ambari-agent/tmp/cookies/3dbe7f89-811d-4dc5-be44-1dac2a6ac2aa -c /var/lib/ambari-agent/tmp/cookies/3dbe7f89-811d-4dc5-be44-1dac2a6ac2aa 'http://auth-001.project1:6080/service/public/v2/api/service?serviceName=PROJECT1_Cluster_hadoop&serviceType=hdfs&isEnabled=true' --connect-timeout 10 --max-time 12 -X GET
<html><head><title>Apache Tomcat/7.0.68 - Error report</title><style><!--H1 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:22px;} H2 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:16px;} H3 {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;font-size:14px;} BODY {font-family:Tahoma,Arial,sans-serif;color:black;background-color:white;} B {font-family:Tahoma,Arial,sans-serif;color:white;background-color:#525D76;} P {font-family:Tahoma,Arial,sans-serif;background:white;color:black;font-size:12px;}A {color : black;}A.name {color : black;}HR {color : #525D76;}--></style> </head><body><h1>HTTP Status 403 - GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos credentails)</h1><HR size="1" noshade="noshade"><p><b>type</b> Status report</p><p><b>message</b> <u>GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos credentails)</u></p><p><b>description</b> <u>Access to the specified resource has been forbidden.</u></p><HR size="1" noshade="noshade"><h3>Apache Tomcat/7.0.68</h3></body></html>
The blueprint is configured to set xasecure.audit.jaas.Client.option.keyTab to /etc/security/keytabs/rangeradmin.service.keytab and the principal to rangeradmin/_HOST@PROJECT1.
klist -kt /etc/security/keytabs/rangeradmin.service.keytab
Keytab name: FILE:/etc/security/keytabs/rangeradmin.service.keytab
KVNO Timestamp Principal
---- ----------------- --------------------------------------------------------
1 03/11/16 16:47:02 rangeradmin/auth-001.project1@PROJECT1
1 03/11/16 16:47:02 rangeradmin/auth-001.project1@PROJECT1
1 03/11/16 16:47:02 rangeradmin/auth-001.project1@PROJECT1
1 03/11/16 16:47:02 rangeradmin/auth-001.project1@PROJECT1
10-18-2016
09:21 AM
It seems the version of Metron I was using had some inconsistencies with the fieldTransformation used in the enrichment config; it didn't recognise STELLAR as the transformation language. I downloaded the latest version of the source code (0.2.1BETA instead of 0.2.0BETA), followed the original process for building the full cluster and configuring Metron to add a telemetry source, and after this I could follow the steps to add the threat intel configuration. It is now enriching the data correctly. Thanks for all your help along the way @cduby, it has been very much appreciated.
10-18-2016
07:42 AM
@cduby Apologies, I wasn't very clear in my description of the problem. The threat intelligence has loaded into HBase fine; I can scan the 'threatintel' table in HBase and it returns the CSV threat data that was uploaded. However, when I ingest squid logs they still look identical to how they did before I added the threat enrichment to the squid enrichment config.
10-17-2016
02:08 PM
I am running the full-dev-platform of Metron version 0.2.0BETA and have added squid log data as per the wiki guide, with some help from @cduby on configuration issues. I am now trying to extend this to add threat intelligence alerting based on the wiki guide, but am having issues: no enrichment data is being added from the HBase table containing the CSV data, although the geo enrichments are being added. Also, the url in the example Elasticsearch index is shown as "atmape.ru", whereas in my index it shows as "http://www.atmape.ru". The enrichment config for squid in ZooKeeper is as below:
ENRICHMENT Config: squid
{
"index" : "squid",
"batchSize" : 5,
"enrichment" : {
"fieldMap" : {
"geo" : [ "ip_dst_addr", "ip_src_addr" ],
"host" : [ "host" ]
},
"fieldToTypeMap" : { },
"config" : { }
},
"threatIntel" : {
"fieldMap" : {
"hbaseThreatIntel" : [ "ip_src_addr", "ip_dst_addr", "url" ]
},
"fieldToTypeMap" : {
"ip_src_addr" : [ "malicious_ip" ],
"ip_dst_addr" : [ "malicious_ip" ],
"url" : [ "zeusList" ]
},
"config" : { },
"triageConfig" : {
"riskLevelRules" : { },
"aggregator" : "MAX",
"aggregationConfig" : { }
}
},
"configuration" : { }
}
Labels:
- Apache Metron
10-17-2016
10:36 AM
@cduby Thanks for all your help along the way; I think I am finally up and running now. I found the issue with the enrichments: the squid logs I had generated were missing the destination IP address. Once I regenerated these, cleared the Kafka queues and restarted the topologies, the data started flowing through into the Elasticsearch index. Then, to get around the timestamp issue, I had to curl a template into Elasticsearch for the squid data with the timestamp field specified as a date, as below:
curl -XPUT http://node1:9200/_template/squid -d '{"template":"squid*","mappings": {"squid*": {"properties": {"timestamp": { "type": "date" }}}}}'
10-14-2016
01:05 PM
In case anyone has the same issue, I resolved it by manually starting Flume via shell access to the node:
/usr/hdp/current/flume-server/bin/flume-ng agent -n snort -c /usr/hdp/current/flume-server/conf -f /usr/hdp/current/flume-server/conf/flume-snort.conf
10-14-2016
09:24 AM
@cduby Thanks for that. I have removed the historical index from Elasticsearch so I now have no squid indexes; however, I am back to the previous problem with the enrichmentJoinBolt. I have checked that MySQL is running, no errors are showing in the geoEnrichmentBolt, and I am also getting data from bro and yaf showing up in Elasticsearch. I have worked through the troubleshooting article and cannot see any problems; the only thing I can think of is that the enrichment config I have provided has something incorrect in it, although I cannot see what. Pasted below is the error in the Storm geo enrichment bolt and a few of the logs either side; there are no other errors in the Storm UI.
2016-10-14 09:11:47 b.s.d.executor [INFO] Prepared bolt simpleHBaseEnrichmentBolt:(8)
2016-10-14 09:12:07 o.a.m.e.b.JoinBolt [ERROR] [Metron] Unable to join messages: {"enrichments.geo.ip_dst_addr":"","adapter.geoadapter.end.ts":"1476436327576","enrichments.geo.ip_src_addr":"","adapter.geoadapter.begin.ts":"1476436327576","source.type":"squid"}
java.lang.NullPointerException: null
at org.apache.metron.enrichment.bolt.EnrichmentJoinBolt.joinMessages(EnrichmentJoinBolt.java:76) ~[stormjar.jar:na]
at org.apache.metron.enrichment.bolt.EnrichmentJoinBolt.joinMessages(EnrichmentJoinBolt.java:33) ~[stormjar.jar:na]
at org.apache.metron.enrichment.bolt.JoinBolt.execute(JoinBolt.java:111) ~[stormjar.jar:na]
at backtype.storm.daemon.executor$fn__7014$tuple_action_fn__7016.invoke(executor.clj:670) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.daemon.executor$mk_task_receiver$fn__6937.invoke(executor.clj:426) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.disruptor$clojure_handler$reify__6513.onEvent(disruptor.clj:58) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:125) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.daemon.executor$fn__7014$fn__7027$fn__7078.invoke(executor.clj:808) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.util$async_loop$fn__545.invoke(util.clj:475) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
2016-10-14 09:12:07 b.s.d.executor [ERROR]
java.lang.NullPointerException: null
at org.apache.metron.enrichment.bolt.EnrichmentJoinBolt.joinMessages(EnrichmentJoinBolt.java:76) ~[stormjar.jar:na]
at org.apache.metron.enrichment.bolt.EnrichmentJoinBolt.joinMessages(EnrichmentJoinBolt.java:33) ~[stormjar.jar:na]
at org.apache.metron.enrichment.bolt.JoinBolt.execute(JoinBolt.java:111) ~[stormjar.jar:na]
at backtype.storm.daemon.executor$fn__7014$tuple_action_fn__7016.invoke(executor.clj:670) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.daemon.executor$mk_task_receiver$fn__6937.invoke(executor.clj:426) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.disruptor$clojure_handler$reify__6513.onEvent(disruptor.clj:58) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:125) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.daemon.executor$fn__7014$fn__7027$fn__7078.invoke(executor.clj:808) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.util$async_loop$fn__545.invoke(util.clj:475) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
2016-10-14 09:12:42 b.s.m.n.Server [INFO] Getting metrics for server on port 6703
2016-10-14 09:12:45 s.k.ZkCoordinator [INFO] Task [1/1] Refreshing partition manager connections
2016-10-14 09:12:45 s.k.DynamicBrokersReader [INFO] Read partition info from zookeeper: GlobalPartitionInformation{partitionMap={0=node1:6667}}
10-13-2016
02:27 PM
I have set up an instance of Metron in a single-instance VM; bro and yaf data is flowing through into Elasticsearch indexes, however there seems to be an error with Flume starting up to ingest the snort logs. I am getting the below error in the Flume logs, but I cannot see a reference to a /snort folder in the flume-snort.conf file.
13 Oct 2016 14:06:33,238 ERROR [main] (org.apache.flume.node.Application.main:307) - A fatal error occurred while running. Exception follows.
org.apache.commons.cli.ParseException: The specified configuration file does not exist: /snort
Any help would be greatly appreciated.
Labels:
- Apache Flume
- Apache Metron
10-13-2016
08:40 AM
Fixed the Storm issue; it was to do with a backup I took of the Storm local data directory when I was having problems starting the Storm supervisor. I restored the nimbus/stormdist data from my backup and it has started up correctly now. So the next step is to look at the indexing in Elasticsearch.
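For anyone hitting the same thing, a rough sketch of that restore, assuming storm.local.dir is /hadoop/storm and the earlier backup was taken to /tmp/storm-backup (both paths are placeholders):
# Stop Nimbus (e.g. via Ambari), copy the topology code back, fix ownership, then restart Nimbus.
sudo cp -r /tmp/storm-backup/nimbus/stormdist /hadoop/storm/nimbus/
sudo chown -R storm:hadoop /hadoop/storm/nimbus/stormdist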
10-13-2016
08:36 AM
Fixed now, thanks. The issue was that I had backed up my Storm local data previously because of an issue with the supervisor starting. Once I copied the nimbus/stormdist folder back into the Storm local data folder and started Nimbus again, it all came up correctly.
10-13-2016
08:12 AM
Actually, I have found a section further up in the log that might be the actual error:
2016-10-13T08:08:54.289+0000 b.s.zookeeper [INFO] node1 gained leadership, checking if it has all the topology code locally.
2016-10-13T08:08:54.297+0000 b.s.zookeeper [INFO] active-topology-ids [yaf-1-1476261289,bro-11-1476195353,squid-15-1476196296,enrichment-17-1476215724,snort-13-1476195443] local-topology-ids [enrichment-10-1476302158,bro-8-1476302148,yaf-7-1476302143,snort-9-1476302153] diff-topology [yaf-1-1476261289,bro-11-1476195353,squid-15-1476196296,enrichment-17-1476215724,snort-13-1476195443]
2016-10-13T08:08:54.299+0000 b.s.zookeeper [INFO] code for all active topologies not available locally, giving up leadership.
10-13-2016
08:04 AM
@Santhosh B Gowda Thanks for your response. Nimbus is showing as running without errors in Ambari and the service looks to be up correctly. I am getting an error in nimbus.log as below:
2016-10-13T08:00:58.228+0000 o.a.t.s.AbstractNonblockingServer$FrameBuffer [ERROR] Unexpected throwable while invoking!
java.lang.RuntimeException: No nimbus leader participant host found, have you started your nimbus hosts?
at backtype.storm.zookeeper$to_NimbusInfo.invoke(zookeeper.clj:233) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.zookeeper$zk_leader_elector$reify__1009.getLeader(zookeeper.clj:305) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at sun.reflect.GeneratedMethodAccessor22.invoke(Unknown Source) ~[na:na]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[na:1.8.0_40]
at java.lang.reflect.Method.invoke(Method.java:497) ~[na:1.8.0_40]
at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93) ~[clojure-1.6.0.jar:na]
at clojure.lang.Reflector.invokeNoArgInstanceMember(Reflector.java:313) ~[clojure-1.6.0.jar:na]
at backtype.storm.daemon.nimbus$fn__6231$exec_fn__1296__auto__$reify__6250.getClusterInfo(nimbus.clj:1349) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.generated.Nimbus$Processor$getClusterInfo.getResult(Nimbus.java:1812) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.generated.Nimbus$Processor$getClusterInfo.getResult(Nimbus.java:1796) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at org.apache.thrift7.ProcessFunction.process(ProcessFunction.java:39) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at org.apache.thrift7.TBaseProcessor.process(TBaseProcessor.java:39) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.security.auth.SimpleTransportPlugin$SimpleWrapProcessor.process(SimpleTransportPlugin.java:159) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at org.apache.thrift7.server.AbstractNonblockingServer$FrameBuffer.invoke(AbstractNonblockingServer.java:518) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at org.apache.thrift7.server.Invocation.run(Invocation.java:18) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_40]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_40]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
Then the below is occurring in the ui.log:
2016-10-13T08:02:08.239+0000 b.s.u.NimbusClient [WARN] Ignoring exception while trying to get leader nimbus info from node1. will retry with a different seed host.
org.apache.thrift7.transport.TTransportException: null
at org.apache.thrift7.transport.TIOStreamTransport.read(TIOStreamTransport.java:132) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at org.apache.thrift7.transport.TTransport.readAll(TTransport.java:86) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at org.apache.thrift7.transport.TFramedTransport.readFrame(TFramedTransport.java:129) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at org.apache.thrift7.transport.TFramedTransport.read(TFramedTransport.java:101) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at org.apache.thrift7.transport.TTransport.readAll(TTransport.java:86) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at org.apache.thrift7.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:429) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at org.apache.thrift7.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:318) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at org.apache.thrift7.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:219) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at org.apache.thrift7.TServiceClient.receiveBase(TServiceClient.java:69) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.generated.Nimbus$Client.recv_getClusterInfo(Nimbus.java:559) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.generated.Nimbus$Client.getClusterInfo(Nimbus.java:547) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.utils.NimbusClient.getConfiguredClientAs(NimbusClient.java:68) ~[storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.ui.core$nimbus_summary.invoke(core.clj:580) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.ui.core$fn__10249.invoke(core.clj:982) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at compojure.core$make_route$fn__1889.invoke(core.clj:93) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at compojure.core$if_route$fn__1877.invoke(core.clj:39) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at compojure.core$if_method$fn__1870.invoke(core.clj:24) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at compojure.core$routing$fn__1895.invoke(core.clj:106) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
10-13-2016
07:57 AM
@cduby Yes, that is the tutorial I am following; however, I have another problem now with the Storm UI throwing an error, even though Ambari is showing that all the Storm components are running fine:
java.lang.RuntimeException: Could not find leader nimbus from seed hosts ["node1"]. Did you specify a valid list of nimbus hosts for config nimbus.seeds?
I have posted it in a separate topic, as I think the enrichment issue is now resolved since I can see index data in Elasticsearch. How did you delete the index? Was that through Elasticsearch, and was it just a case of pushing the logs to the Kafka topic that created the new index and resolved the issue? Thanks again for all your help, I really appreciate it.
10-12-2016
09:39 AM
@cduby Thanks for the pointers. I restarted the Metron VM and started the services again, and it seems to be up and running now; I can see an index in Elasticsearch for the squid data. My next problem, however, is that when I try to create an index for the squid data it can't resolve any Time-field names and the dropdown is blank. I have inspected the data in the head plugin and there is definitely a "timestamp" field held under the "_source" field in the JSON data.
10-12-2016
09:32 AM
I am running the full-dev environment single-node VM for Metron. After restarting the node and starting the services through Ambari there were some issues with Storm. I cleared out the storm.local.dir and restarted, which seems to have allowed all the Storm services to start through Ambari, but when I access the Storm web UI there is an internal server error:
java.lang.RuntimeException: Could not find leader nimbus from seed hosts ["node1"]. Did you specify a valid list of nimbus hosts for config nimbus.seeds?
I have tried restarting ZooKeeper but am not sure how to clear out the ZooKeeper configuration in the Metron environment. Any help would be greatly appreciated.
Labels:
- Apache Metron
- Apache Storm
10-11-2016
08:26 PM
@cduby
Thanks, the missing enrichment config was definitely the issue. I copied the yaf.json, amended the index JSON field to squid instead of yaf, and then uploaded it as per your instructions. Below is the squid.json file I created:
{
"index":"squid",
"batchSize": 5,
"enrichment" : {
"fieldMap":
{
"geo": ["ip_dst_addr", "ip_src_addr"]
}
},
"threatIntel": {
"fieldMap":
{
"hbaseThreatIntel": ["ip_src_addr", "ip_dst_addr"]
},
"fieldToTypeMap":
{
"ip_src_addr" : ["malicious_ip"],
"ip_dst_addr" : ["malicious_ip"]
}
}
}
However, I am now getting another error in the enrichment join bolt, as below:
2016-10-11 19:59:09 o.a.m.e.b.JoinBolt [ERROR] [Metron] Unable to join messages: {"enrichments.geo.ip_dst_addr":"","adapter.geoadapter.end.ts":"1476215988122","enrichments.geo.ip_src_addr":"","adapter.geoadapter.begin.ts":"1476215988122","source.type":"squid"}
10-11-2016
03:00 PM
I am running through the tutorial to add a new telemetry source into Metron and have encountered a problem with the enrichmentJoinBolt in Storm; it is failing to process any of the messages that the Squid topology has processed, with the below error:
2016-10-11 14:32:09 o.a.m.e.b.EnrichmentSplitterBolt [ERROR] Unable to retrieve a sensor enrichment config of squid
2016-10-11 14:32:09 o.a.m.e.b.EnrichmentJoinBolt [ERROR] Unable to retrieve a sensor enrichment config of squid
2016-10-11 14:32:09 o.a.m.e.b.JoinBolt [ERROR] [Metron] Unable to join messages: {"code":0,"method":"GET","enrichmentsplitterbolt.splitter.end.ts":"1476196329341","enrichmentsplitterbolt.splitter.begin.ts":"1476196329341","url":"https:\/\/tfl.gov.uk\/plan-a-journey\/","source.type":"squid","elapsed":31271,"ip_dst_addr":null,"original_string":"1476113538.772 31271 127.0.0.1 TCP_MISS\/000 0 GET https:\/\/tfl.gov.uk\/plan-a-journey\/ - DIRECT\/tfl.gov.uk -","bytes":0,"action":"TCP_MISS","ip_src_addr":"127.0.0.1","timestamp":1476113538772}
java.lang.NullPointerException: null
at org.apache.metron.enrichment.bolt.EnrichmentJoinBolt.joinMessages(EnrichmentJoinBolt.java:76) ~[stormjar.jar:na]
at org.apache.metron.enrichment.bolt.EnrichmentJoinBolt.joinMessages(EnrichmentJoinBolt.java:33) ~[stormjar.jar:na]
at org.apache.metron.enrichment.bolt.JoinBolt.execute(JoinBolt.java:111) ~[stormjar.jar:na]
at backtype.storm.daemon.executor$fn__7014$tuple_action_fn__7016.invoke(executor.clj:670) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.daemon.executor$mk_task_receiver$fn__6937.invoke(executor.clj:426) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.disruptor$clojure_handler$reify__6513.onEvent(disruptor.clj:58) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.utils.DisruptorQueue.consumeBatchToCursor(DisruptorQueue.java:125) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.utils.DisruptorQueue.consumeBatchWhenAvailable(DisruptorQueue.java:99) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.disruptor$consume_batch_when_available.invoke(disruptor.clj:80) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.daemon.executor$fn__7014$fn__7027$fn__7078.invoke(executor.clj:808) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at backtype.storm.util$async_loop$fn__545.invoke(util.clj:475) [storm-core-0.10.0.2.3.0.0-2557.jar:0.10.0.2.3.0.0-2557]
at clojure.lang.AFn.run(AFn.java:22) [clojure-1.6.0.jar:na]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_40]
I am using the full-dev environment with Metron 0.2.0BETA and the guide at https://cwiki.apache.org/confluence/display/METRON/2016/04/25/Metron+Tutorial+-+Fundamentals+Part+1%3A+Creating+a+New+Telemetry. I can see data in the Kibana dashboard from Bro and Yaf, which both also have indexes created in Elasticsearch; however, there is no index for the squid data. I tried killing the Storm topologies, re-running ./run_enrichment_role.sh, and after this restarting the squid parser topology. Any help would be greatly appreciated.
Labels:
- Apache Metron
- Apache Storm