Member since: 03-23-2016
Posts: 56
Kudos Received: 20
Solutions: 7

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2359 | 03-16-2018 01:47 PM |
| | 1708 | 11-28-2017 06:41 PM |
| | 6408 | 10-04-2017 02:19 PM |
| | 1758 | 09-16-2017 07:19 PM |
| | 4856 | 01-03-2017 05:52 PM |
07-09-2018
07:14 PM
You have only configured the plugin to push HTTP logs to Kafka, not Conn logs. If you expect to push the Conn logs, configure those to be sent like Example 3 in the README. Or just start with a simpler configuration like this, which will send only the Conn logs.

@load packages/metron-bro-plugin-kafka/Apache/Kafka
redef Kafka::logs_to_send = set(Conn::LOG);
redef Kafka::topic_name = "bro";
redef Kafka::kafka_conf = table(
    ["metadata.broker.list"] = "kafkaip:6667"
);
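If you eventually want both the Conn and HTTP logs, a sketch along the lines of Example 3 in the README would look like this (untested here; the broker address is a placeholder for your environment):

```
@load packages/metron-bro-plugin-kafka/Apache/Kafka
# Send both the Conn and HTTP logs to the same Kafka topic.
redef Kafka::logs_to_send = set(Conn::LOG, HTTP::LOG);
redef Kafka::topic_name = "bro";
redef Kafka::kafka_conf = table(
    ["metadata.broker.list"] = "kafkaip:6667"
);
```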
03-21-2018
08:51 PM
Did my answer help? If so, please mark it so.
03-16-2018
01:47 PM
Hi Anil - One problem here is that a failed assignment expression in the REPL does not provide a helpful error message. I submitted a fix for this here: https://github.com/apache/metron/pull/966. To work around that in the REPL, you can just do something like the following to test your Profiler definition; basically, don't use assignment.

[Stellar]>>> conf := SHELL_EDIT(conf)
{
"profiles":[
{
"profile":"demo_iplogon_failed",
"foreach":"ip_address",
"onlyif":"source.type == 'demo_windowsnxlog' and event_id == 4625",
"init":{
"count":"0"
},
"update":{
"count":"count + 1"
},
"result":{
"profile":"count",
"triage":{
"logon_failed_count":"count"
}
}
}
]
}
[Stellar]>>>
[Stellar]>>> PROFILER_INIT(conf)
The issue with the profile definition is that you don't have a 'result/profile' expression. The 'result/profile' expression, which persists the data in HBase, is required. Just add one, like below.

[Stellar]>>> conf
{
"profiles":[
{
"profile":"demo_iplogon_failed",
"foreach":"ip_address",
"onlyif":"source.type == 'demo_windowsnxlog' and event_id == 4625",
"init":{
"count":"0"
},
"update":{
"count":"count + 1"
},
"result":{
"profile":"count",
"triage":{
"logon_failed_count":"count"
}
}
}
]
}
[Stellar]>>> PROFILER_INIT(conf)
Profiler{1 profile(s), 0 messages(s), 0 route(s)}
11-29-2017
03:12 PM
Sorry about the pain. Feel free to share comments and suggestions as you use the tool.
11-28-2017
06:41 PM
Did you change the period duration of the Profiler? By default it is 15 minutes, but the link that you sent tells you to change the duration to 1 minute. I would guess that your Profiler configuration probably looks like this.

profiler.period.duration=1
profiler.period.duration.units=MINUTES

If so, you also need to change the profiler duration on the client side to match. You can do this in a couple of different ways. For example, change the client-side equivalent settings in the global settings first, then fetch the data.

%define profiler.client.period.duration := "1"
%define profiler.client.period.duration.units := "MINUTES"
PROFILE_GET("url-length", "127.0.0.1", PROFILE_FIXED(5, "HOURS"))

Or simply override those values when fetching the data.

PROFILE_GET("url-length", "127.0.0.1", PROFILE_FIXED(5, "HOURS", {'profiler.client.period.duration' : '1', 'profiler.client.period.duration.units' : 'MINUTES'}))
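To see why the client and Profiler durations must agree, here is a minimal Python sketch (a simplified model for illustration, not Metron code) of how a timestamp maps to a profile period. Both sides bucket timestamps into fixed windows; if they use different window sizes, they compute different periods and thus look up different rows:

```python
def period_id(epoch_millis: int, duration_millis: int) -> int:
    """A timestamp belongs to period floor(timestamp / duration)."""
    return epoch_millis // duration_millis

MINUTE = 60 * 1000
ts = 1_500_000_000_000  # some epoch time in milliseconds

# Writer side: Profiler persisting with 1-minute periods.
writer_period = period_id(ts, 1 * MINUTE)

# Reader side: client still assuming the 15-minute default.
reader_period = period_id(ts, 15 * MINUTE)

# The periods (and hence the row keys derived from them) differ,
# so the fetch finds nothing unless the client duration matches.
print(writer_period == reader_period)              # prints False
print(period_id(ts, 1 * MINUTE) == writer_period)  # prints True
```

With matching durations on both sides, the same timestamp always resolves to the same period, which is why the client-side settings have to mirror the Profiler's.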
10-30-2017
08:08 PM
1 Kudo
> The pcap data stored in HDFS is sequence files. How do you view them in Wireshark? My guess would be somehow get the pcap_inspector service to spit out the result of the filter in PCAP format? @Arian Trayen As @cstella mentioned, "pcap_query" does exactly that. It will output a libpcap-compliant file that you can open with Wireshark.
10-04-2017
10:10 PM
> 1. How do I recognize pcap metadata in Elasticsearch indexes (only see yaf, snort, bro, and squid)? There is not a separate index specifically for pcap metadata. I am just saying that the metadata that you are looking for is likely already provided by an existing sensor like Bro or YAF. For example, want to know who your top talkers are? Any flow-level telemetry, like YAF, will answer that question. What metadata are you looking for specifically?
10-04-2017
05:52 PM
(1) Would it be possible to extract metadata fields from pcap files and index them into ElasticSearch with Metron? Yes, that is effectively what Metron does when it ingests Bro and YAF telemetry. We let those external tools, tools that are best-in-class at extracting metadata from raw pcap, do the extraction. Metron then consumes that metadata, enriches it, triages it, and indexes it in a search index like Elasticsearch. So your metadata ends up in Elasticsearch, which I think is your end goal here. (2) What's the technical limitation? The PCAP Panel was a custom extension of an old, forked version of Kibana, as I remember it. It was not something we were able to just carry forward without a major overhaul.
10-04-2017
02:19 PM
> 1. How to get the pcap data collected/stored in HDFS indexed into ElasticSearch? In Apache Metron there is no mechanism to ingest raw pcap data into Elasticsearch. I have found a search index like Elasticsearch more useful for higher-level meta information like flows. There is a tool called Pcap Query to search and retrieve slices of the raw pcap stored in HDFS. It queries against the data stored in HDFS and returns a libpcap-compliant file containing the raw pcap data that you can then load into third-party tools like Wireshark.
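For reference, a Pcap Query invocation in fixed-filter mode might look something like the sketch below. The script path, option names, and values here are assumptions based on my memory of the Metron docs; check the tool's own help output for your version before relying on them.

```
$METRON_HOME/bin/pcap_query.sh fixed \
    --start_time 1530000000000 \
    --ip_src_addr 192.168.66.1 \
    --protocol 6
```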
> 2. How to get the pcap panel on Metron dashboard like the old version of Metron? The Pcap Panel from the original OpenSOC project was not carried forward due to technical limitations.
09-16-2017
07:19 PM
Yes, we do that a lot for testing. First, use a tool like 'tcpreplay' to replay a pcap file to a network interface. There is even a simple tool in Metron (https://github.com/apache/metron/tree/master/metron-deployment/roles/pcap_replay) that effectively wraps 'tcpreplay' to make it easy to replay packet captures to a virtual network interface. Then use 'pycapa' in producer mode to sniff the packets from that network interface and land them in Kafka.
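As a rough sketch of that pipeline (pycapa's flag names here are from my memory of its README, so verify against its help output; the interface, broker, and file names are placeholders):

```
# Replay a capture file onto a network interface (requires root).
tcpreplay --intf1=eth0 example.pcap

# In another terminal, sniff that interface with pycapa in producer
# mode and land the raw packets in a Kafka topic named 'pcap'.
pycapa --producer \
       --kafka-broker kafka1:6667 \
       --kafka-topic pcap \
       --interface eth0
```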