Member since 07-14-2016 · 215 Posts · 45 Kudos Received · 16 Solutions
05-24-2018
09:28 AM
1 Kudo
About this article
The Metron tutorial article for adding Squid telemetry walks through creating the parser from scratch with Elasticsearch as the indexing service.
This article details how to extend that tutorial to get Squid telemetry working with Solr as the backend indexing service.
In other words, these steps are the Solr equivalent of "Installing Squid parser template" for Elasticsearch.
Pre-requisites
HCP >= 1.5.0.0
HDP search >= 3.0.0
It is assumed that you have deployed an HCP stack with Solr by following the HCP documentation.
The Solr node is co-located with the Metron node.
In the event that these nodes are on different hosts, ensure that you copy the Metron schema files located at $METRON_HOME/config/schema to the Solr node.
It is also assumed that you have followed the Metron tutorial for Squid telemetry: installed the Squid sensor, created the Kafka topic and started the Storm topology.
Steps
1. SSH to the Metron host and run the following commands
cd $METRON_HOME/config/schema
mkdir squid
cd squid
Copy the attached files (schema.xml and solrconfig.xml) into the 'squid' folder created above.
2. Run the following commands on the Metron host to create a Solr collection for Squid
export SOLR_HOME=/opt/lucidworks-hdpsearch/solr/
export SOLR_USER=solr
su $SOLR_USER -c "$SOLR_HOME/bin/solr create -c squid -d $METRON_HOME/config/schema/squid/"
3. Go to the Solr UI at http://<solr-host>:8983/solr/#/~collections to confirm that the Squid collection is present
4. Ingest events into the 'squid' Kafka topic; you should see documents being written to the Squid collection in Solr.
5. Fire up the Alerts UI and verify that Squid events appear.
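As an additional check beyond the UI, you can query the collection directly through Solr's standard select API. A minimal sketch, assuming Solr is reachable on localhost at the default port 8983 (change SOLR_HOST for your environment):

```shell
# SOLR_HOST is an assumption; point it at your Solr node
SOLR_HOST=localhost
QUERY_URL="http://${SOLR_HOST}:8983/solr/squid/select?q=*:*&rows=5"
# Print up to five indexed Squid documents as JSON
curl -s "$QUERY_URL" || echo "Solr not reachable at $QUERY_URL"
```

A non-empty `docs` array in the response confirms documents are landing in the collection.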
06-27-2018
10:11 PM
I encountered a similar problem on EL7 with python-requests 2.6.0, and fixed it as follows:

[root@XXXX ~]# yum remove python-requests-2.6.0-1.el7_1.noarch
Loaded plugins: langpacks, product-id, search-disabled-repos
Repository HDP-UTILS-1.1.0.21 is listed more than once in the configuration
Resolving Dependencies
Installed size: 774 k
Is this ok [y/N]: y
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Erasing : insights-client-3.0.3-9.el7_5.noarch 1/2
warning: /etc/insights-client/insights-client.conf saved as /etc/insights-client/insights-client.conf.rpmsave
Erasing : python-requests-2.6.0-1.el7_1.noarch 2/2
Verifying : python-requests-2.6.0-1.el7_1.noarch 1/2
Verifying : insights-client-3.0.3-9.el7_5.noarch 2/2
Removed:
python-requests.noarch 0:2.6.0-1.el7_1
Dependency Removed:
insights-client.noarch 0:3.0.3-9.el7_5
Complete!
[root@XXXX ~]# pip install requests
Collecting requests
Downloading https://files.pythonhosted.org/packages/65/47/7e02164a2a3db50ed6d8a6ab1d6d60b69c4c3fdf57a284257925dfc12bda/requests-2.19.1-py2.py3-none-any.whl (91kB)
100% |████████████████████████████████| 92kB 5.2MB/s
Requirement already satisfied: idna<2.8,>=2.5 in /usr/lib/python2.7/site-packages (from requests) (2.6)
Requirement already satisfied: chardet<3.1.0,>=3.0.2 in /usr/lib/python2.7/site-packages (from requests) (3.0.4)
Requirement already satisfied: urllib3<1.24,>=1.21.1 in /usr/lib/python2.7/site-packages (from requests) (1.22)
Requirement already satisfied: certifi>=2017.4.17 in /usr/lib/python2.7/site-packages (from requests) (2018.4.16)
Installing collected packages: requests
Successfully installed requests-2.19.1
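After the pip install completes, a quick sanity check confirms the new requests package imports cleanly (the try/except keeps the check harmless on machines where it is absent):

```python
# Report the installed requests version, if any
try:
    import requests
    print("requests", requests.__version__)
except ImportError:
    print("requests is not installed")
```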
11-28-2017
08:55 AM
2 Kudos
This article serves as an addendum to the main Metron MaaS README doc in Apache Metron github. It is highly recommended that you go through the README in github to understand the concepts and working principles. This article only intends to capture the steps specific to the Metron full dev vagrant platform, so a user can copy, paste, run and get it working quickly. Further, this article only covers the successful startup, deployment and validation of the Metron MaaS service. Refer to the master github README for further steps.

Prerequisites
* You need a working Metron full dev platform before you proceed with the instructions.

Step 1: Install Required Packages
Run the following commands to install Flask, Jinja2, Squid client and the Elasticsearch HEAD plugin:
vagrant ssh #To SSH onto the full-dev platform
sudo yum install python-flask
sudo yum install python-jinja2
sudo yum install squid
sudo service squid start
sudo /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
Step 2: Create Mock DGA service files
Run the following commands:
sudo su - metron
mkdir mock_dga
cd mock_dga
Download the files from this
link and copy them to the folder. Alternatively, you can use the following commands to create the files:
* vi dga.py
(paste the code snippet below, then save and quit)
from flask import Flask
from flask import request, jsonify
import socket

app = Flask(__name__)

@app.route("/apply", methods=['GET'])
def predict():
    h = request.args.get('host')
    r = {}
    if h == 'yahoo.com' or h == 'amazon.com':
        r['is_malicious'] = 'legit'
    else:
        r['is_malicious'] = 'malicious'
    return jsonify(r)

if __name__ == "__main__":
    # Bind to port 0 so the OS picks a free port, then release it
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(('localhost', 0))
    port = sock.getsockname()[1]
    sock.close()
    # Write the chosen endpoint so MaaS can discover the model
    with open("endpoint.dat", "w") as text_file:
        text_file.write("{\"url\" : \"http://0.0.0.0:%d\"}" % port)
    app.run(threaded=True, host="0.0.0.0", port=port)
* vi rest.sh
(paste the below code snippet, save and quit)
#!/bin/bash
python dga.py
Run this command to make the files executable:
chmod +x /home/metron/mock_dga/*
Step 3: Create HDFS directories
Run the following commands as the
vagrant user, and _not_ as the metron user:
sudo su - hdfs -c "hadoop fs -mkdir /user/metron"
sudo su - hdfs -c "hadoop fs -chown metron:metron /user/metron"
Step 4: Start MaaS service
Run the following commands:
Note: Change the METRON_HOME variable per the version of Metron you are running
sudo su - metron
export METRON_HOME=/usr/metron/0.4.2
$METRON_HOME/bin/maas_service.sh -zq node1:2181
Verify the MaaS service is running and view the application log
Follow these steps to ensure that the MaaS service is running properly:
1. Launch Ambari UI at http://node1:8080. Authenticate with admin/admin
2. Go to Services -> YARN -> 'Quick Links' dropdown -> ResourceManager UI
3. You should see the MaaS application listed in the ResourceManager UI.
4. Click on the application -> Logs -> AppMaster.stderr log file to view the startup logs. Check for any errors; if there are none, you are ready to deploy the DGA model in the next step.

Step 5: Deploy Mock DGA model
Run the following command as metron user to deploy the DGA model
$METRON_HOME/bin/maas_deploy.sh -zq node1:2181 -lmp /home/metron/mock_dga -hmp /user/metron/models -mo ADD -m 512 -n dga -v 1.0 -ni 1
Once the command completes, you can monitor the ResourceManager UI application logs to check for any errors.

Verify the DGA model has been deployed successfully
a) Run the following command as metron user:
$METRON_HOME/bin/maas_deploy.sh -zq node1:2181 -mo LIST
At the end of the command execution, you should see output similar to the following, which indicates that the model has been deployed successfully:

Model dga @ 1.0
dga:1.0 @ http://node1:50451 serving:
apply=apply
Note: The port number '50451' in the above output may change across different runs.
b) Hit the model via curl by running the following commands, and verify that you see the respective outputs.

[metron@node1 ~]$ curl 'http://localhost:50451/apply?host=testing.com'
{
"is_malicious": "malicious"
}
[metron@node1 ~]$ curl 'http://localhost:50451/apply?host=yahoo.com'
{
"is_malicious": "legit"
}
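The mock model's decision rule can also be exercised without the service running. A standalone restatement of the logic in dga.py (plain Python, no Flask needed):

```python
# Same rule as the mock DGA service: whitelisted hosts are 'legit',
# everything else is flagged 'malicious'.
def classify(host):
    legit_hosts = {'yahoo.com', 'amazon.com'}
    return {'is_malicious': 'legit' if host in legit_hosts else 'malicious'}

print(classify('yahoo.com'))    # {'is_malicious': 'legit'}
print(classify('testing.com'))  # {'is_malicious': 'malicious'}
```

This mirrors the GET handler's behavior, which is why the two curl calls above return 'malicious' and 'legit' respectively.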
With this, you will have successfully started, deployed and validated Metron MaaS on your full dev Metron platform.

Step 6: Squid Example

The next steps of sending data through the squid sensor and having it processed through MaaS are not covered in this article. Please refer to the steps listed in the github README doc.
05-25-2017
01:14 PM
Sir, while following the document I got stuck at the installation of ES and Metron. After configuring Metron in the Ambari repository, I moved on to installing Metron and ES on the nodes as recommended, but it failed with the exception below.

Ambari console error:
2017-05-25 16:33:01,138 - Installing package elasticsearch-2.3.3 ('/usr/bin/yum -d 0 -e 0 -y install elasticsearch-2.3.3')
2017-05-25 16:33:02,132 - Execution of '/usr/bin/yum -d 0 -e 0 -y install elasticsearch-2.3.3' returned 1. Error: Cannot retrieve repository metadata (repomd.xml) for repository: METRON-0.4.0. Please verify its path and try again
2017-05-25 16:33:02,132 - Failed to install package elasticsearch-2.3.3. Executing '/usr/bin/yum clean metadata'
2017-05-25 16:33:02,497 - Retrying to install package elasticsearch-2.3.3 after 30 seconds
Command failed after 1 tries

Terminal error:
file:///localrepo/repodata/repomd.xml: [Errno 14] Could not open/read file:///localrepo/repodata/repomd.xml
Error: Cannot retrieve repository metadata (repomd.xml) for repository: METRON-0.4.0. Please verify its path and try again
01-12-2017
05:24 AM
However, the indexing is failing in monit. Any suggestions for that? Thanks!
11-14-2017
07:26 PM
Thank you for the excellent tutorial. I got the setup working with my TAXII server along with threatintel_taxii_load.sh:

./threatintel_taxii_load.sh -b "2017-11-11 00:00:00" -c ~/connection.json -e ~/extractor.json -p 10000

However, after the blocks have been processed, they do not seem to be stored in HBase. I also tried creating a "threat_intel" table with column family "t" prior to running threatintel_taxii_load.sh.

hbase(main):006:0> scan 'threat_intel'
ROW  COLUMN+CELL
0 row(s) in 0.0220 seconds

My connection.json:
{
  "endpoint": "http://localhost:9000/services/discovery",
  "username": "guest",
  "password": "guest",
  "type": "DISCOVER",
  "collection": "pool",
  "table": "threat_intel",
  "columnFamily": "t",
  "allowedIndicatorTypes": [ ]
}

My extractor.json:
{
  "config": {
    "zk_quorum": "node2:2181",
    "stix_address_categories": "IPV_4_ADDR"
  },
  "extractor": "STIX"
}

I have a feeling that it may be StixExtractor.java not being able to extract the indicators (IPs), or perhaps HBase having issues. I'll try loading threat intel from CSV files using flatfile_loader.sh to check whether HBase gets populated.