Member since: 03-23-2016
Posts: 56
Kudos Received: 20
Solutions: 7
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1325 | 03-16-2018 01:47 PM |
| | 843 | 11-28-2017 06:41 PM |
| | 3663 | 10-04-2017 02:19 PM |
| | 958 | 09-16-2017 07:19 PM |
| | 2125 | 01-03-2017 05:52 PM |
07-09-2018
07:14 PM
You have only configured the plugin to push HTTP logs to Kafka, not the Conn logs. If you expect to push the Conn logs, then configure those to be sent like Example 3 in the README. Or just start with a simpler configuration like this, which will send only the Conn logs.

@load packages/metron-bro-plugin-kafka/Apache/Kafka
redef Kafka::logs_to_send = set(Conn::LOG);
redef Kafka::topic_name = "bro";
redef Kafka::kafka_conf = table(
    ["metadata.broker.list"] = "kafkaip:6667"
);
06-29-2018
03:43 PM
Since my original comment (from Sept 2017), we have split the Metron MPack out from the Elasticsearch MPack. They are now separate packs that each need to be installed into Ambari. Once you install the Elasticsearch MPack, you will be given the option to install and manage Elasticsearch and Kibana from Ambari.
04-17-2018
12:57 PM
That is probably a transient issue that you can work through by simply retrying. On the command line, run `vagrant provision`; I believe this is documented in the README as well. I would also suggest trying a more recent version of Metron.
03-23-2018
01:54 PM
Please open a separate question if you have issues with Threat Triage.
03-22-2018
07:33 PM
> I would like to figure out the reason for alerts not turning up in the metron-alerts UI?

I believe it is because the Alerts UI has not been configured to look at the `profiler_index_*` indices that have been created. Right now, the Alerts UI only looks at the indices that have been created for each sensor.

> I am seeing is_alert="true" for all the records under profiler_index_*.

The `is_alert` value is set to true when those messages are generated by the Profiler. The purpose of sending messages from the Profiler back into Kafka is to enable use of the Threat Triage mechanism. That is why they are always set to true.

> How can I configure to set is_alert="true" only when the count exceeds the threshold value?

You would do this by defining a rule in Threat Triage that increases the threat score when that count exceeds a threshold.
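As a rough sketch, such a rule would live in the sensor's triage configuration. The rule expression, score, and field name below are illustrative only; check the Threat Triage documentation for your Metron version for the exact format.

```json
{
  "threatIntel": {
    "triageConfig": {
      "riskLevelRules": [
        {
          "name": "high_logon_failures",
          "comment": "Raise the threat score when the failed-logon count is high",
          "rule": "logon_failed_count > 10",
          "score": 10
        }
      ],
      "aggregator": "MAX"
    }
  }
}
```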
03-21-2018
08:51 PM
Did my answer help? If so, please mark it so.
03-16-2018
01:47 PM
Hi Anil - One problem here is that a failed assignment expression in the REPL does not provide a helpful error message. I submitted a fix for this here: https://github.com/apache/metron/pull/966. To work around it in the REPL, you can do something like the following to test your Profiler definition; basically, don't use assignment.

[Stellar]>>> conf := SHELL_EDIT(conf)
{
"profiles":[
{
"profile":"demo_iplogon_failed",
"foreach":"ip_address",
"onlyif":"source.type == 'demo_windowsnxlog' and event_id == 4625",
"init":{
"count":"0"
},
"update":{
"count":"count + 1"
},
"result":{
"profile":"count",
"triage":{
"logon_failed_count":"count"
}
}
}
]
}
[Stellar]>>>
[Stellar]>>> PROFILER_INIT(conf)
The issue with the profile definition is that you don't have a 'result/profile' expression. The 'result/profile' expression, which persists the data in HBase, is required. Just add one like so below.

[Stellar]>>> conf
{
"profiles":[
{
"profile":"demo_iplogon_failed",
"foreach":"ip_address",
"onlyif":"source.type == 'demo_windowsnxlog' and event_id == 4625",
"init":{
"count":"0"
},
"update":{
"count":"count + 1"
},
"result":{
"profile":"count",
"triage":{
"logon_failed_count":"count"
}
}
}
]
}
[Stellar]>>> PROFILER_INIT(conf)
Profiler{1 profile(s), 0 messages(s), 0 route(s)}
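As an aside, a quick way to sanity-check a profile definition outside the REPL is a small standalone script. This is not part of Metron, just an illustrative helper; the required keys are based on the definition discussed above.

```python
import json

# Keys every profile definition needs, per the discussion above.
REQUIRED_KEYS = {"profile", "foreach", "result"}

def validate_profiles(conf_text):
    """Return a list of error strings for profiles missing required keys."""
    conf = json.loads(conf_text)
    errors = []
    for i, profile in enumerate(conf.get("profiles", [])):
        missing = REQUIRED_KEYS - profile.keys()
        if missing:
            errors.append("profile %d missing: %s" % (i, ", ".join(sorted(missing))))
        # 'result' must contain a 'profile' expression; that is what persists to HBase.
        elif isinstance(profile["result"], dict) and "profile" not in profile["result"]:
            errors.append("profile %d missing result/profile" % i)
    return errors

conf = '{"profiles":[{"profile":"demo","foreach":"ip","result":{"triage":{}}}]}'
print(validate_profiles(conf))  # -> ['profile 0 missing result/profile']
```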
12-14-2017
12:49 AM
Do you see anything in the Alerts UI? Or is it just this specific Squid data that is missing? What version of Metron are you running? How did you deploy Metron?
12-13-2017
11:09 PM
Only 'alerts' will appear in the Alerts UI. So what is an alert then, you ask? Well, not all telemetry in Metron is treated as an alert. Only telemetry that is specifically marked with a field { "is_alert": "true" } is treated as an alert. This gives the user the flexibility to define which telemetry will go through additional threat triage processing. In your case, the Squid telemetry does not have this field and so is not treated as an alert. For testing purposes, you can add this field to your Squid telemetry by creating a simple enrichment that adds the field "is_alert" and sets it to "true". Hope this makes sense.
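As a rough sketch (assuming the Stellar field transformation syntax of recent Metron versions; the literal `'true'` value matches the convention described above), the Squid enrichment configuration could add the field like this:

```json
{
  "enrichment": {
    "fieldMap": {
      "stellar": {
        "config": {
          "is_alert": "'true'"
        }
      }
    }
  }
}
```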
12-12-2017
06:36 PM
I would guess that you have not installed the Hostmanager Vagrant Plugin. Make sure that you have all of these prerequisites installed. In that same README there are simple instructions for getting the prerequisites installed on a Mac. If that does not work, then please provide more information. What platform are you running on? What version of Metron are you running? What directory are you running the 'vagrant up' command in? Providing the output of running 'metron-deployment/scripts/platform-info.sh' would also be very helpful.
11-29-2017
03:12 PM
Sorry about the pain. Feel free to share comments and suggestions as you use the tool.
11-28-2017
06:41 PM
Did you change the profile duration of the Profiler? By default it is 15 minutes, but the link that you sent tells you to change the duration to 1 minute. I would guess that your Profiler configuration looks like this.

profiler.period.duration=1
profiler.period.duration.units=MINUTES

If so, you also need to change the profiler duration on the client side to match. You can do this in a couple of different ways. For example, change the client-side equivalent settings in the global configuration first, then fetch the data.

%define profiler.client.period.duration := "1"
%define profiler.client.period.duration.units := "MINUTES"
PROFILE_GET("url-length", "127.0.0.1", PROFILE_FIXED(5, "HOURS"))

Or simply override those values when fetching the data.

PROFILE_GET("url-length", "127.0.0.1", PROFILE_FIXED(5, "HOURS", {'profiler.client.period.duration' : '1', 'profiler.client.period.duration.units' : 'MINUTES'}))
10-30-2017
08:13 PM
Following up on what @jsirota mentioned, the Metron Docker infrastructure is only useful for Metron developers. It will not be useful for you to explore Metron and its use cases. Use "Full Dev" by following these instructions: https://metron.apache.org/current-book/metron-deployment/vagrant/full-dev-platform/index.html
10-30-2017
08:08 PM
1 Kudo
> The pcap data stored in HDFS is sequence files. How do you view them in Wireshark? My guess would be somehow get the pcap_inspector service to spit out the result of the filter in PCAP format?

@Arian Trayen As @cstella mentioned, `pcap_query` does exactly that. It will output a libpcap-compliant file that you can open with Wireshark.
10-04-2017
10:10 PM
> 1. How do I recognize pcap metadata in Elasticsearch indexes (only see yaf, snort, bro, and squid)?

There is not a separate index specifically for pcap metadata. I am just saying that the metadata you are looking for is likely already provided by an existing sensor like Bro or YAF. For example, want to know who your top talkers are? Any flow-level telemetry, like YAF, will answer that question. What metadata are you looking for specifically?
10-04-2017
05:52 PM
(1) Would it be possible to extract metadata fields from pcap files and index them into ElasticSearch with Metron?

Yes, that is effectively what Metron does when it ingests Bro and YAF telemetry. We let those external tools, tools that are best-in-class at extracting metadata from raw pcap, do the extraction. Metron then consumes that metadata, enriches it, triages it, and indexes it in a search index like Elasticsearch. So your metadata ends up in Elasticsearch, which I think is your end goal here.

(2) What's the technical limitation?

The PCAP Panel was a custom extension of an old, forked version of Kibana, as I remember it. It was not something we were able to just carry forward without a major overhaul.
10-04-2017
02:19 PM
> 1. How to get the pcap data collected/stored in HDFS indexed into ElasticSearch?

In Apache Metron there is not a mechanism to ingest raw pcap data into Elasticsearch. I have found a search index like Elasticsearch more useful for higher-level meta information like flows. There is a tool called Pcap Query to search and retrieve slices of the raw pcap stored in HDFS. It queries the data stored in HDFS and returns a libpcap-compliant file containing the raw pcap data, which you can then load into 3rd-party tools like Wireshark.

> 2. How to get the pcap panel on Metron dashboard like the old version of Metron?

The Pcap Panel from the original OpenSOC project was not carried forward due to technical limitations.
09-16-2017
07:19 PM
Yes, we do that a lot for testing. First, use a tool like 'tcpreplay' to replay a pcap file to a network interface. There is even a simple tool in Metron (https://github.com/apache/metron/tree/master/metron-deployment/roles/pcap_replay) that effectively wraps 'tcpreplay' to make it easy to replay packet captures to a virtual network interface. Then use 'pycapa' in producer mode to sniff the packets from that network interface and land them in Kafka.
09-06-2017
06:23 PM
If you installed Metron with the Ambari MPack, then Elasticsearch should be just another service in Ambari. A service that can be started and stopped like any other using the Start/Stop GUI buttons in the admin interface.
08-31-2017
06:30 PM
1 Kudo
"Quick Dev" is not currently maintained. Please use the "Full Dev" environment in metron-deployment/vagrant/full-dev-platform.
08-23-2017
06:36 PM
It sounds like this is a different problem. Please submit it as a different question, instead of tagging onto this one. Thanks!
08-22-2017
06:45 PM
1 Kudo
This is a known bug that has been fixed in master. Sorry that you ran into this problem. Here are pointers to more information: METRON-1026, PR #643.
07-24-2017
07:48 PM
I would highly suggest that you use the "Full Dev" environment. Running `vagrant destroy` will delete any half-completed VM that you may have created when running into these issues, which lets you start fresh. Otto is right too; don't sudo.

cd metron-deployment/vagrant/full-dev-platform
vagrant destroy
vagrant up
07-12-2017
02:02 PM
Have you been able to follow the "Getting Started" instructions and get that working? http://metron.apache.org/current-book/metron-analytics/metron-profiler/index.html#Getting_Started
06-06-2017
06:36 PM
Docker is only used for development of components in Metron. It is not a supported deployment platform for Metron. If you just want to take Metron for a spin, I would suggest that you try either a VM or an AWS cluster.
05-30-2017
02:21 PM
1 Kudo
Install the Metron Bro plugin into your Bro install. This will push the Bro output into Kafka so that Metron can consume it. https://github.com/apache/metron/tree/master/metron-sensors/bro-plugin-kafka

You can use the Ansible deployment steps as instructions for one simple way to pipe YAF and Snort output into Kafka. This is only suitable for small-scale testing. https://github.com/apache/metron/tree/master/metron-deployment/roles/yaf https://github.com/apache/metron/tree/master/metron-deployment/roles/snort

You're going to want to use something like `yafzcbalance` to scale YAF to higher throughput. https://tools.netsa.cert.org/yaf/yafzcbalance.html

You can use Bro's load-balancing mechanism to scale it to higher throughput. https://www.bro.org/documentation/load-balancing.html
05-01-2017
12:35 PM
The Profiler does not get installed by default with the Ambari MPack. Fortunately, you did build all of the Metron RPMs in preparation for the install that you just completed. Among those RPMs you should find one for the Profiler. It should look something like metron-profiler-<version>*.rpm.

[root@node1 ~]# rpm -ivh metron-*.rpm
Preparing... ########################################### [100%]
1:metron-solr ########################################### [ 11%]
2:metron-profiler ########################################### [ 22%]
3:metron-pcap ########################################### [ 33%]
4:metron-parsers ########################################### [ 44%]
5:metron-indexing ########################################### [ 56%]
6:metron-enrichment ########################################### [ 67%]
7:metron-elasticsearch ########################################### [ 78%]
8:metron-data-management ########################################### [ 89%]
9:metron-common ########################################### [100%]

Simply install this RPM like any other. Then follow the setup instructions in the link that you provided above.
04-18-2017
05:59 PM
Both the "Quick Dev" and "Full Dev" environments deploy Bro and Snort as sources of telemetry. You should see this telemetry being captured by Metron when you launch either of those platforms. Obviously, running all of the tools for Metron on a single VM is extremely limiting and is intended only for development purposes. You need to run the VM on a host with between 8 and 16 GB of RAM, at least, and also have some patience. This is a very different experience from running Metron on a real cluster.
03-30-2017
07:59 PM
2 Kudos
You need to tell Elasticsearch to treat that field as a date. Once Elasticsearch knows that it is a date, Kibana will display it properly. The Elasticsearch template for Bro that is shipped with Metron can be used as a guide; it already handles this situation [1]. Either directly install that template or create your own template using Metron's as a guide. You can define the date mapping for a single field, or for multiple fields at once. Also note that the change will only take effect after the index rolls: if the indices roll every hour, then you need to wait until the next hour to see the change. Or, if your data is disposable, just delete the index and see your change take effect immediately.
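As a sketch only (the index pattern, type name, and field names here are illustrative, not taken from Metron's actual Bro template; the exact template syntax depends on your Elasticsearch version), a minimal template that maps a timestamp field as a date might look like:

```json
{
  "template": "my_sensor_index*",
  "mappings": {
    "my_sensor_doc": {
      "properties": {
        "timestamp": {
          "type": "date",
          "format": "epoch_millis"
        }
      }
    }
  }
}
```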