Member since
08-31-2015
81
Posts
115
Kudos Received
17
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2902 | 03-22-2017 03:51 PM
 | 1783 | 05-04-2016 09:34 AM
 | 1394 | 03-24-2016 03:07 PM
 | 1546 | 03-24-2016 02:54 PM
 | 1476 | 03-24-2016 02:47 PM
03-22-2017
07:21 PM
Looks like openscoring also offers jpmml under a BSD license for a fee, see below. Unfortunately, it appears there's a gray area between "we just want to use the software" and "we want to redistribute proprietary software based on this code." The wording of the attached blurb from openscoring suggests they think "use" of AGPL code is fine, even though the FSF's stance seems to be that the GNU AGPL is only compatible with the GPL: https://www.gnu.org/licenses/why-affero-gpl.en.html
05-05-2016
11:48 PM
2 Kudos
In the previous article of the series, Enriching Telemetry Events, we walked through how to enrich the domain element of a given telemetry event with WhoIs data such as the home country and the company associated with the domain. In this article, we will enrich events with a special type of data: threat intel feeds. When a given telemetry event matches data in a threat intel feed, an alert is generated. Again, the customer's requirements are the following:
1. The proxy events from Squid logs need to be ingested in real time.
2. The proxy logs must be parsed into a standardized JSON structure that Metron can understand.
3. In real time, the Squid proxy events need to be enriched so that the domain names are enriched with IP information.
4. In real time, the IP within the proxy event must be checked against threat intel feeds. If there is a threat intel hit, an alert needs to be raised.
5. The end user must be able to see the new telemetry events and the alerts from the new data source.

All of these requirements must be implemented easily, without writing any new Java code. In this article, we will walk you through how to do 4 and 5.

Threat Intel Framework Explained

Metron currently provides an extensible framework to plug in threat intel sources. Each threat intel source has two components: an enrichment data source and an enrichment bolt. The threat intelligence feeds are bulk loaded and streamed into a threat intelligence store, similarly to how the enrichment feeds are loaded. The data is loaded in a key-value format: the key is the indicator and the value is a JSON-formatted description of what the indicator is. It is recommended to use a threat feed aggregator such as Soltra to dedup and normalize the feeds via STIX/TAXII. Metron provides an adapter that can read Soltra-produced STIX/TAXII feeds and stream them into HBase, the data store Metron uses to back high-speed threat intel lookups. Metron additionally provides flat-file and STIX bulk loaders that can normalize, dedup, and bulk load or stream threat intel data into HBase even without a threat feed aggregator. The diagram below illustrates the architecture:

Step 1: Threat Intel Feed Source

Metron is designed to work with STIX/TAXII threat feeds, but it can also be bulk loaded with threat data from a CSV file. In this example we will explore the CSV option.
The same loader framework that is used for enrichment is used for threat intelligence. As with enrichments, we need to set up a data CSV file, the extractor config JSON, and the enrichment config JSON. For this example we will be using the Zeus malware tracker list located here: https://zeustracker.abuse.ch/blocklist.php?download=domainblocklist

Copy the data from the above link into a file called domainblocklist.txt on your VM. Run the following command to parse the above file into a CSV file called domainblocklist.csv:

cat domainblocklist.txt | grep -v "^#" | grep -v "^$" | grep -v "^https" | awk '{print $1",abuse.ch"}' > domainblocklist.csv

Now that we have the "Threat Intel Feed Source", we need to configure an extractor config file that describes the source. Create a file called extractor_config_temp.json and put the following contents in it:

{
"config" : {
"columns" : {
"domain" : 0
,"source" : 1
}
,"indicator_column" : "domain"
,"type" : "zeusList"
,"separator" : ","
}
,"extractor" : "CSV"
}
To strip out any non-ASCII characters (which can sneak in when copying and pasting), run the following:

iconv -c -f utf-8 -t ascii extractor_config_temp.json -o extractor_config.json
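Before running the grep/awk pipeline against the full blocklist, it can be checked end to end on a small sample. The sample domains below are made up for illustration:

```shell
# Build a tiny sample in the same shape as the abuse.ch blocklist:
# comment lines, a blank line, and one domain per line.
printf '%s\n' \
  '# zeus domain blocklist' \
  '' \
  'badhost.example' \
  'evil.example' > sample_blocklist.txt

# Same pipeline as above: drop comments, blanks, and any https lines,
# then append the source tag as a second CSV column.
cat sample_blocklist.txt \
  | grep -v "^#" | grep -v "^$" | grep -v "^https" \
  | awk '{print $1",abuse.ch"}' > sample_blocklist.csv

cat sample_blocklist.csv
```

The output should be two CSV rows, `badhost.example,abuse.ch` and `evil.example,abuse.ch`, matching the indicator/source layout the extractor config expects.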
Step 2: Configure Element to Threat Intel Feed Mapping

We now have to configure which element of a tuple should be cross-referenced with which threat intel feed. This configuration will be stored in ZooKeeper. The config looks like the following:

{
"zkQuorum" : "node1:2181"
,"sensorToFieldList" : {
"bro" : {
"type" : "THREAT_INTEL"
,"fieldToEnrichmentTypes" : {
"url" : [ "zeusList" ]
}
}
}
}
Cut and paste this config into a file called "enrichment_config_temp.json" on the virtual machine. Because copying and pasting from this blog can include invisible non-ASCII characters, strip them out by running:

iconv -c -f utf-8 -t ascii enrichment_config_temp.json -o enrichment_config.json

Step 3: Run the Threat Intel Loader

Now that we have the threat intel source and threat intel config defined, we can run the loader to move the data from the threat intel source into the Metron threat intel store and to push the enrichment config into ZooKeeper:

/usr/metron/0.1BETA/bin/flatfile_loader.sh -n enrichment_config.json -i domainblocklist.csv -t threatintel -c t -e extractor_config.json
After this, the threat intel data will be loaded into HBase and a ZooKeeper mapping will be established. The data will be populated into an HBase table called threatintel. To verify that the indicators were properly ingested into HBase, run the following:

hbase shell
scan 'threatintel'
You should see the table bulk loaded with data from the CSV file. Now check that the ZooKeeper enrichment config was properly populated:

/usr/metron/0.1BETA/bin/zk_load_configs.sh -z localhost:2181

Generate some data by using the Squid client to execute HTTP requests (do this about 20 times):

squidclient http://www.alamman.com
squidclient http://www.atmape.ru
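A quick way to fire off the roughly 20 requests is a small loop. This assumes squidclient is installed on the VM as earlier in the series; the request-sending line is commented out here so the loop can be dry-run anywhere:

```shell
# Loop over the two test URLs from the article ten times each,
# for twenty requests in total. Uncomment the squidclient line
# on the VM to actually push traffic through the proxy.
urls="http://www.alamman.com http://www.atmape.ru"
count=0
for i in 1 2 3 4 5 6 7 8 9 10; do
  for u in $urls; do
    count=$((count + 1))
    # squidclient "$u"
  done
done
echo "sent $count requests"
```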
View the Threat Alerts in Metron UI
When the logs are ingested, we get messages that have a hit against threat intel. Notice a key characteristic of such a message: it has is_alert=true, which designates it as an alert message. Now that we have alerts coming through, we need to visualize them in Kibana. First, we set up a pinned query to look for messages where is_alert=true, and then we point the alerts table at this pinned query.
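Illustratively, the pinned query is nothing more than a Lucene query string over the indexed events, the same syntax Kibana's search bar accepts. The index pattern `squid*` below is an assumption about how the Squid events were indexed:

```shell
# The pinned query as a Lucene query string.
query='is_alert:true'
echo "$query"

# On the Metron node, the same filter can be tried directly against
# Elasticsearch (index pattern "squid*" is an assumption):
# curl -s "http://localhost:9200/squid*/_search?q=$query"
```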
05-04-2016
09:34 AM
1 Kudo
@Ryan Cicak good question. @drussell is correct. For Metron TP1, to avoid using another m4.xlarge EC2 instance and to give more resources to the other services, we chose to use only one ZooKeeper. But in production we would have a minimum of 3 for the quorum. Given that ZooKeeper will also manage Metron's own configs (enrichment config, threat intel config, etc.) and that future SOLR support will require ZooKeeper, possibly more than 3 will be required.
02-16-2018
12:29 PM
@George Vetticaden : I have tried the above steps in my HCP cluster with HDP 2.5.3.0, along with the Metron Management UI. I don't need to do step 2, right? It is the same as the enrichment configuration done via the Metron UI, right? My enrichment configuration JSON is as follows; this should suffice for step 2, right? I ran the file loader script without the -n option: /usr/metron/0.1BETA/bin/flatfile_loader.sh -i whois_ref.csv -t enrichment -c t -e extractor_config.json {
"enrichment": {
"fieldMap": {},
"fieldToTypeMap": {
"url": [
"whois"
]
},
"config": {}
},
"threatIntel": {
"fieldMap": {},
"fieldToTypeMap": {},
"config": {},
"triageConfig": {
"riskLevelRules": [],
"aggregator": "MAX",
"aggregationConfig": {}
}
},
"configuration": {}
}
Unfortunately my enrichment is not working. The message coming into the indexing Kafka topic is as follows: {"code":200,"method":"GET","enrichmentsplitterbolt.splitter.end.ts":"1518783891207","enrichmentsplitterbolt.splitter.begin.ts":"1518783891207","is_alert":"true","url":"https:\/\/www.woodlandworldwide.com\/","source.type":"newtest","elapsed":2033,"ip_dst_addr":"182.71.43.17","original_string":"1518783890.244 2033 127.0.0.1 TCP_MISS\/200 49602 GET https:\/\/www.woodlandworldwide.com\/ - HIER_DIRECT\/182.71.43.17 text\/html\n","threatintelsplitterbolt.splitter.end.ts":"1518783891211","threatinteljoinbolt.joiner.ts":"1518783891213","bytes":49602,"enrichmentjoinbolt.joiner.ts":"1518783891209","action":"TCP_MISS","guid":"40ff89bf-71a1-4eec-acfd-d89886c9ce7f","threatintelsplitterbolt.splitter.begin.ts":"1518783891211","ip_src_addr":"127.0.0.1","timestamp":1518783890244}
I have tried adding both https:www.woodlandworldwide.com and just woodlandworldwide.com, as in your example, but no luck. How does Metron query the HBase table? Will it query for a domain similar to the url field?
06-03-2016
08:03 PM
That certainly works. It has the same effect as adding an available port to Storm so that the new topology can be run. Not sure which is cleaner -- have a user kill another topology before deploying the squid one, update the canned storm config to have a port available, or have a user update the config and restart Storm and related services.
05-02-2016
05:22 PM
3 Kudos
One of the key design principles of Apache Metron is that it should be easily extensible. We envision many users using Metron as a platform and building custom capabilities on top of it, one of which will be adding new telemetry data sources. In this multi-part article series, we will walk you through how to add a new telemetry data source: Squid proxy logs. This series consists of the following:
This Article: Sets up the use case for this multi-part article series.
Use Case 1: Collecting and Parsing Telemetry Events - walks you through how to collect/ingest events into Metron and then parse them.
Use Case 2: Enriching Telemetry Data - describes how to enrich elements of telemetry events with Apache Metron.
Use Case 3: Adding/Enriching/Validating with Threat Intel Feeds - describes how to add new threat intel feeds to the system and how those feeds can be used to cross-reference every telemetry event that comes in. When a hit occurs, an alert is generated and displayed on the Metron UI.

Setting up the Use Case Scenario

Customer Foo has installed Metron TP1 and is using the out-of-the-box data sources (PCAP, YAF/Netflow, Snort, and Bro). They love Metron! But now they want to add a new data source to the platform: Squid proxy logs.

Customer Foo's Requirements

The following are the customer's requirements for Metron with respect to this new data source:
1. The proxy events from Squid logs need to be ingested in real time.
2. The proxy logs must be parsed into a standardized JSON structure that Metron can understand.
3. In real time, the Squid proxy events need to be enriched so that the domain names are enriched with IP information.
4. In real time, the IP within the proxy event must be checked against threat intel feeds. If there is a threat intel hit, an alert needs to be raised.
5. The end user must be able to see the new telemetry events and the alerts from the new data source.

All of these requirements must be implemented easily, without writing any new Java code.

What is Squid?

Squid is a caching proxy for the Web supporting HTTP, HTTPS, FTP, and more. It reduces bandwidth and improves response times by caching and reusing frequently requested web pages. For more information on Squid, see squid-cache.org.

How Metron Enriches a Squid Telemetry Event

When you make an outbound HTTP connection to https://www.cnn.com from a given host, an entry is added to a Squid file called access.log. Metron parses and enriches this telemetry event as it is streamed through the platform in real time.

Key Points

Some key points to highlight as you go through this multi-part article series:
We will be adding a net-new data source without writing any code. Metron strives for easy extensibility, and this is a good example of it. This is a repeatable pattern for a majority of telemetry data sources.

Read the next article on how to collect and push data into Metron and then parse it in the Metron platform: Collecting and Parsing Telemetry Data.
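For concreteness, here is how a raw access.log entry in Squid's native format (timestamp, elapsed ms, client IP, action/status, bytes, method, URL, user, hierarchy/peer, content type) can be pulled apart. The line and its values are illustrative; Metron's parser does the field-to-JSON mapping for real, but awk shows the positions:

```shell
# An access.log line in Squid's native format (illustrative values).
line='1461576382.642   161 127.0.0.1 TCP_MISS/200 103701 GET http://www.cnn.com/ - DIRECT/199.27.79.73 text/html'

# Pull out a few of the whitespace-separated fields that a parser
# would map into named JSON fields.
action_code=$(echo "$line" | awk '{print $4}')   # e.g. TCP_MISS/200
method=$(echo "$line" | awk '{print $6}')        # e.g. GET
url=$(echo "$line" | awk '{print $7}')           # e.g. http://www.cnn.com/

# Strip the action prefix to get the bare HTTP status code.
echo "$method $url ${action_code#*/}"
```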
04-14-2016
08:31 AM
Hello @George Vetticaden. Again, as far as I can see in the Metron source there is no Hive, but the new Metron architecture picture shown above shows an HDFS bolt writing to Hive in the enrichment Storm topology. Is HBase currently the only big data store used in Metron? Also, Metron currently has just one index bolt (actually, writer bolts) to HDFS and/or ES/Solr in the enrichment topology, and only pcap is written to HDFS and HBase without indexing, so there is no alert bolt or Kafka bolt? Are these planned as new features? Thanks in advance.
04-27-2016
09:22 AM
Good question @Matt McKnight. We will have support for Solr indexing services in Metron TP2, which is slated for the end of May. However, in TP2 we will still only support the Metron UI that is based on Kibana (i.e., on Elastic). This will change in subsequent releases. So net net: by the middle/end of May we will support Solr indexing, but you would have to write the UI that calls the Solr APIs for search queries. Farther down the line, we will provide a custom UI (away from Kibana) that uses Solr to do search. Make sense?
04-12-2016
05:24 PM
Thanks George, this is a very insightful and deep level of information.
10-19-2018
01:04 PM
In the Apache Metron logical architecture, there are modules which are processed by Apache Storm. I understand how Apache Storm normalizes and parses the log data accepted from Kafka, but I am not clear about how Apache Storm tags and validates.