I have read that Metron provides ML capability, but I can't find any in the code. I have also heard that ML must be implemented by user-supplied code called through the MaaS (Model as a Service) API via Stellar functions (profiling). I have a client who wishes to implement ML on security telemetry, and I need a definitive answer. Thanks.
In short, Metron does not play a role in providing, training, or deploying a model. Once a model is operationalized behind some API, that API can be hosted on YARN and registered with Metron. Calling the API is then facilitated through Stellar functions, which integrate the model into Metron's Storm topologies (enrichment and/or threatintel). So it is not all of the ML work, but it is a major part of operationalizing the model.
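To make that concrete, here is a sketch of what calling a registered model from Stellar might look like. The model name 'dga', the field name, and the assigned variable are illustrative assumptions; MAAS_GET_ENDPOINT and MAAS_MODEL_APPLY are the Stellar functions Metron's MaaS documentation describes for discovering and invoking a registered model:

```
# Illustrative Stellar expression (model name and fields are assumptions):
# MAAS_GET_ENDPOINT resolves the endpoint of a model registered with MaaS,
# and MAAS_MODEL_APPLY sends the argument map to it and returns the result.
is_malicious := MAAS_MODEL_APPLY(MAAS_GET_ENDPOINT('dga'), {'host' : domain_without_subdomains})
```

An expression like this would typically be placed in an enrichment configuration so the model is applied to each message flowing through the topology.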
I have some questions regarding the profiler. How can the output of the PROFILER_GET function be used by other distributed processing applications? We would like to visualize features computed by the profiler, as well as aggregated features computed by our models, so we need the profiler's output as input for PySpark, in which we implemented our models. Another option would be to implement our pipeline entirely in PySpark, but then we lose access to the windowing features the profiler already implements, and since the profiler is a built-in Metron functionality, we are unsure about the downsides of not using it at all. What would the possible downsides be if we implemented our pipeline completely in PySpark? And even without PySpark, how can we visualize the profiler data and baselines in Zeppelin?
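For context on what "reimplementing the windowing in PySpark" entails, here is a minimal pure-Python sketch of the fixed-period grouping the profiler performs before applying an aggregation. All names and the event data are illustrative, and a real PySpark version would express the same grouping over a distributed DataFrame:

```python
from statistics import mean

def tumbling_windows(events, window_seconds):
    """Group (timestamp, value) events into fixed, non-overlapping
    windows, mirroring the profiler's period-based bucketing."""
    windows = {}
    for ts, value in events:
        period = ts // window_seconds          # index of the window this event falls in
        windows.setdefault(period, []).append(value)
    return windows

# Example: per-window mean of one numeric feature (hypothetical data).
events = [(0, 1.0), (10, 3.0), (70, 5.0), (130, 7.0), (140, 9.0)]
profile = {p: mean(vs) for p, vs in tumbling_windows(events, 60).items()}
# → {0: 2.0, 1: 5.0, 2: 8.0}
```

The grouping itself is trivial; what you would give up by bypassing the profiler is the operational side, such as the scheduled flushing and the HBase-backed storage that PROFILER_GET reads from.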
We have more questions about the profiler, but these are the critical ones.