Member since: 08-13-2019
Posts: 37
Kudos Received: 26
Solutions: 6
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 5581 | 12-31-2018 08:44 AM |
| | 1689 | 12-18-2018 08:39 PM |
| | 1350 | 08-27-2018 11:29 AM |
| | 3482 | 10-12-2017 08:35 PM |
| | 2351 | 08-06-2017 02:57 PM |
01-07-2019
12:27 PM
1 Kudo
Hi @haco fayik, that looks great. Sounds like you got around the initial problem of ingesting data into Metron. There could be multiple reasons for that, e.g. the parser, enrichment, or indexing topologies not running or being misconfigured. Would you create a new question for this and provide more details, such as the worker logs of those topologies? Would you also mark the answer that helped you most in solving the ingest problem as "Best Answer"? Thanks!
12-31-2018
12:59 PM
1 Kudo
@haco fayik There are many ways to do this. You should probably search this community in the NiFi section or get familiar with NiFi in general. However, as a short overview, the most common cases for Metron ingestion I'm encountering in the field are:

- Your sources push messages to a syslog server. You can configure your syslog server to forward data to your NiFi instance over TCP or UDP. In this case you'd need a "ListenSyslog" processor and a "PublishKafka" processor.
- You already have a log forwarder capable of pushing data to Kafka (winlogbeat): https://www.elastic.co/guide/en/beats/winlogbeat/current/configuring-output.html . In this case you won't need NiFi, if you are comfortable using winlogbeat.
- You install MiNiFi on all servers to act as a simple log forwarder over TCP. You'd send those packets to a NiFi instance/cluster (similar to the syslog approach), receive them via a "ListenTCP" processor, and push your messages into Kafka using the "PublishKafka" processor. You could also send data directly into Kafka from MiNiFi. A rough way to verify this end to end is sketched below.

Note: If your Kafka cluster is secured with Kerberos, this might influence your choice.
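As a rough way to verify the syslog/MiNiFi variants end to end, you can send a test line to the NiFi listener and watch the Kafka topic that PublishKafka writes to. This is only a sketch; the hosts, ports, paths and the topic name are assumptions:

```bash
# Hosts, ports, paths and topic name are assumptions -- adjust to your flow.
# Send a test line to the port your ListenSyslog / ListenTCP processor listens on:
echo '<13>Jan  1 00:00:00 testhost app: hello metron' | nc nifi-host 6514

# Watch the topic that the PublishKafka processor writes to:
/usr/hdp/current/kafka-broker/bin/kafka-console-consumer.sh \
  --bootstrap-server kafka-broker:6667 --topic windows-event-log
```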
12-31-2018
08:44 AM
1 Kudo
Hi @haco fayik, as a starting point you need to push data into a parser-specific Kafka topic (you can call the topic "windows-event-log"), then configure a parser in the Metron Management UI and start it. In the parser configuration you tell Metron which Kafka topic the messages are picked up from ("windows-event-log" in our case) and how to parse the incoming messages. NiFi is a great tool to collect data from various sources and push it into Kafka. Maybe my article helps you: https://datahovel.com/2018/07/18/how-to-onboard-a-new-data-source-in-apache-metron/ If you have more specific questions, don't hesitate to ask!
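As a minimal sketch of that first step, creating the topic and pushing a single test event could look like this on an HDP-style install (paths, hosts and ports are assumptions):

```bash
# Paths, hosts and ports are assumptions -- adjust to your cluster.
/usr/hdp/current/kafka-broker/bin/kafka-topics.sh --create \
  --zookeeper zookeeper-host:2181 \
  --topic windows-event-log --partitions 1 --replication-factor 1

# Push one test event so you can verify the parser topology picks it up:
echo '{"event_id": 4624, "hostname": "win-host01"}' | \
  /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh \
    --broker-list kafka-broker:6667 --topic windows-event-log
```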
12-18-2018
08:39 PM
Hi @Amirul Yes, you need to create a template. The best way to create one is to take an existing template from the example parsers that are delivered with HCP/Metron and modify it to fit your new parser. You'd need at least a section in that template with:

"metron_alert" : {
  "type" : "nested"
}

Here I've written a small blog post about what you need to take care of when you create a template: https://datahovel.com/2018/11/27/how-to-define-elastic-search-templates-for-apache-metron/ And here is the official documentation that describes that you need a template: https://metron.apache.org/current-book/metron-platform/metron-elasticsearch/index.html
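As an illustration, installing a minimal template could look like the sketch below. The Elasticsearch host, index pattern and doc type names are assumptions, and a real Metron template contains many more field mappings; copy those from one of the example templates shipped with HCP/Metron:

```bash
# Sketch only: host, index pattern and doc type are assumptions, the mapping is heavily trimmed.
curl -XPUT "http://es-host:9200/_template/windows_event_log_index" \
  -H 'Content-Type: application/json' -d '
{
  "template": "windows_event_log_index*",
  "mappings": {
    "windows_event_log_doc": {
      "properties": {
        "metron_alert": { "type": "nested" },
        "timestamp":    { "type": "date", "format": "epoch_millis" },
        "source:type":  { "type": "keyword" }
      }
    }
  }
}'
```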
08-27-2018
11:29 AM
Hi @Sarvesh Kumar Apache Metron gives you all the tools you need to:

- Extract and parse the information from your event. So if the event's message contains the information about whether the device has shut down, you'll be able to create a rule around it.
- Aggregate data and create profiles of devices in certain time windows. So you could create a small function that evaluates the status of a device in a certain time frame and checks whether the device is up (a rough sketch of such a profile follows below).

Disk memory full: if the event source contains the current disk usage (and ideally also sends the maximum amount of disk space available), it's just a simple rule to add to create an alert.

Regarding your unsupervised learning question: your examples don't require machine learning, because they are rule based. You'd want to use machine learning to train a model that generates alerts based on data rather than on rules (in most cases this is "supervised" learning based on "is alert" or "is not alert"). However, Metron provides a "Model as a Service" capability, which allows you to deploy models to evaluate events and enrich them. That being said, Metron does not provide models for you. Creating features and models is the data scientist's job, and how thoroughly this is done determines how many accurate alerts you get (ideally all of them) and how many false positives (ideally none). Hope that helped!
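As an illustration of the profile idea, a minimal profiler configuration that counts events per device in each time window might look like the sketch below. The field names, profile name, paths and ZooKeeper quorum are assumptions; such a profile can later be read with PROFILE_GET in a triage rule to check whether a device stopped reporting:

```bash
# Sketch only: field names, profile name, paths and the ZooKeeper quorum are assumptions.
cat > "$METRON_HOME/config/zookeeper/profiler.json" <<'EOF'
{
  "profiles": [
    {
      "profile": "device_event_count",
      "foreach": "ip_src_addr",
      "onlyif":  "source.type == 'windows-event-log'",
      "init":    { "count": "0" },
      "update":  { "count": "count + 1" },
      "result":  "count"
    }
  ]
}
EOF

# Push the updated configuration to ZooKeeper:
"$METRON_HOME/bin/zk_load_configs.sh" -m PUSH -z zookeeper-host:2181 -i "$METRON_HOME/config/zookeeper"
```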
07-13-2018
08:10 AM
Summary

Using Apache Solr as the indexing and search engine for Metron requires the Metron REST service to perform queries against multiple collections. If the Ranger plugin is active, there is currently a gotcha (= a Ranger Solr plugin bug). If you don't want to give the Metron user full access to all Solr collections, here is a workaround.

The Problem

The setup:

- 2+ Solr collections being queried: metaalert, cef, ... (and other parser collections)
- 1 user: metron
- 1 Ranger policy: user "metron", access types "Read" and "Write", collections "metaalert" and "cef"

A query of the metaalert collection returns the content of the metaalert collection as expected and logs the event successfully in the Ranger audit:

curl -k --negotiate -u : "http://solr_url:solr_port/solr/metaalert/search?q=*"

A query of the cef collection returns the content of the cef collection as expected and logs it successfully in the Ranger audit:

curl -k --negotiate -u : "http://solr_url:solr_port/solr/cef/search?q=*"

A query of metaalert and cef together returns a "403 Unauthorized request". This is what the Metron REST server does:

curl -k --negotiate -u : "http://solr_url:solr_port/solr/metaalert/select?q=*&collections=metaalert,cef"

In the Ranger audit we now see 3 lines:

- user: metron, resource: metaalert,cef, Result: Denied
- user: metron, resource: metaalert, Result: Allowed
- user: metron, resource: cef, Result: Allowed

The expectation would be that the query is successful!

Workaround(s)

One workaround would be to give metron access to all collections: "*". We usually don't want that on clusters that are being used by other use cases. Another workaround is to give metron access to the "*metaalert*" collection; a rough sketch of creating such a policy via the Ranger REST API follows below.
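This is only an illustration of the second workaround, not a verified call: the Ranger host, admin credentials, Solr service name and access-type names are assumptions and may differ in your Ranger version.

```bash
# Sketch only: host, credentials, service name and access-type names are assumptions.
curl -u admin:admin -H 'Content-Type: application/json' \
  -X POST "http://ranger-host:6080/service/public/v2/api/policy" -d '
{
  "service": "cl1_solr",
  "name": "metron_metaalert_collections",
  "resources": {
    "collection": { "values": ["*metaalert*"], "isExcludes": false, "isRecursive": false }
  },
  "policyItems": [
    {
      "users": ["metron"],
      "accesses": [
        { "type": "query",  "isAllowed": true },
        { "type": "update", "isAllowed": true }
      ]
    }
  ]
}'
```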
12-01-2017
09:44 PM
@nshelke Thanks, that worked fine. I tried to configure it in HA mode analogously to the HIVE service, but it didn't work out. Did you try it in HA mode as well?
11-12-2017
08:20 PM
@vishwa ".... end end user in hive CLI" That explains your issue 🙂 . You shouldn't try to use Hive CLI to connect to Hiveserver2. You should be using beeline. Hope that solves this issue for you 😉
11-09-2017
07:57 PM
@vishwa You shouldn't set "hive.server2.enable.doAs=true" in the hive-interactive section. This doesn't make sense from a resources point of view. However, you can set it on the main config page. These two settings are independent, even though they have the same name. Either way you access tables as the user you are authenticated as against HiveServer2 and can use fine-grained authorization on your tables. The only difference is the user of the system processes running the queries. With hive.server2.enable.doAs=true the query runs in a container owned by the authenticated user, while "...=false" runs it as the hive user.
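A quick way to see the difference in practice is sketched below; the connect string is a placeholder:

```bash
# Check the effective setting from a Beeline session (connect string is a placeholder):
beeline -u "jdbc:hive2://hiveserver2-host:10000/default" -e "set hive.server2.enable.doAs;"

# Then run a query and look at which user owns the YARN application:
# with doAs=true it is the authenticated end user, with doAs=false it is "hive".
yarn application -list -appStates RUNNING
```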
11-09-2017
08:05 AM
Hi @vishwa LLAP doesn't have any issue with it; the setting is simply ignored. So you can run your batch Hive instances in RunAs mode, while your Hive interactive (LLAP) server runs your jobs as the hive user. Your actual issue seems to be: "No running LLAP daemons!". In order to run a job, you should first bring up the LLAP daemons cleanly. If that fails, have a look at the LLAP daemon logs in YARN and check why they are not coming up or are crashing.
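To check whether the daemons are actually up and to pull their logs, something along these lines should work on HDP; the grep pattern and application id are placeholders:

```bash
# Check the LLAP daemon status (part of HDP's Hive interactive installation):
hive --service llapstatus

# Find the LLAP YARN application and pull its logs to see why the daemons crash:
yarn application -list -appStates RUNNING | grep -i llap
yarn logs -applicationId <application_id> | less
```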