Member since 07-14-2016 · 215 Posts · 45 Kudos Received · 16 Solutions
05-24-2018
09:28 AM
1 Kudo
About this article
The Metron tutorial article for adding Squid telemetry walks through creating the parser from scratch with Elasticsearch as the indexing service.
This article details how to extend that tutorial to get Squid telemetry working with Solr as the backend indexing service.
In other words, these steps are the Solr equivalent of the "Installing Squid parser template" step in the Elasticsearch-based tutorial.
Pre-requisites
HCP >= 1.5.0.0
HDP Search >= 3.0.0
It is assumed that you have deployed an HCP stack with Solr by following the HCP documentation.
The Solr node is co-located with the Metron node.
In the event that these nodes are on different hosts, ensure that you copy the Metron schema files located at $METRON_HOME/config/schema to the Solr node.
It is also assumed that you have followed the Metron tutorial for Squid telemetry by installing the Squid sensor, creating the Kafka topic, and starting the Storm topology.
Steps
1. SSH to the Metron host and run the following commands
cd $METRON_HOME/config/schema
mkdir squid
cd squid
Copy the attached files (schema.xml and solrconfig.xml) into the 'squid' folder created above.
2. Run the following commands on the Metron host to create a Solr collection for Squid
export SOLR_HOME=/opt/lucidworks-hdpsearch/solr/
export SOLR_USER=solr
su $SOLR_USER -c "$SOLR_HOME/bin/solr create -c squid -d $METRON_HOME/config/schema/squid/"
3. Go to the Solr UI at http://<solr-host>:8983/solr/#/~collections to confirm that the Squid collection is present
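If you prefer the command line over the UI, an equivalent check via the Solr Collections API (a hedged alternative; the default Solr port 8983 is assumed) is:
curl "http://<solr-host>:8983/solr/admin/collections?action=LIST"
The 'squid' collection should appear in the returned list.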
4. Ingest events into the 'squid' Kafka topic, and you should see documents being written into the Squid collection in Solr.
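As a minimal sketch of this step (assuming Squid is running locally, its access log is at /var/log/squid/access.log, and a Kafka broker is reachable at <kafka-broker>:6667; adjust these for your cluster):
squidclient http://www.example.com
tail -n 1 /var/log/squid/access.log | /usr/hdp/current/kafka-broker/bin/kafka-console-producer.sh --broker-list <kafka-broker>:6667 --topic squid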
5. Fire up Alerts UI and verify that Squid events are seen.
05-24-2018
09:14 AM
Did you restart the Indexing topology after the change? Note that you might have to restart it from the command line, since Ambari will not allow you to stop an (already) stopped service. Can you also post the version of python-requests you are running?
05-10-2018
10:44 AM
2 Kudos
Problem
On some HCP deployments, the Indexing topology might show up as "Stopped" in Ambari, while the actual topology is running (when you check the Storm UI).
Additionally, you might also see the following kind of error messages in ambari-agent.log:
INFO 2018-02-05 22:21:39,990 PythonReflectiveExecutor.py:67 - Reflective command failed with exception:
Traceback (most recent call last):
File "/usr/lib/python2.6/site-packages/ambari_agent/PythonReflectiveExecutor.py", line 59, in run_file
imp.load_source('__main__', script)
File "/var/lib/ambari-agent/cache/common-services/METRON/0.4.3/package/scripts/indexing_master.py", line 18, in <module>
import requests
File "/usr/lib/python2.6/site-packages/requests/__init__.py", line 53, in <module>
from .packages.urllib3.contrib import pyopenssl
File "/usr/lib/python2.6/site-packages/requests/packages/__init__.py", line 61, in load_module
if name in sys.modules:
AttributeError: 'NoneType' object has no attribute 'modules'
Reasoning
If you are seeing the above symptoms, you are most likely hitting METRON-1451.
Solution
Install version 2.6.1 of python-requests on the server:
pip install requests==2.6.1
Then restart the Indexing topology to resolve the issue.
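As a quick, hedged sanity check before restarting the topology (pip and Python availability on the host are assumed), confirm the installed version:
python -c "import requests; print(requests.__version__)"   # expect 2.6.1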
11-28-2017
08:55 AM
2 Kudos
This article serves as an addendum to the main Metron MaaS README doc in the Apache Metron github repo. It is highly recommended that you go through the README in github to understand the concepts and working principles. This article only intends to capture the steps specific to the Metron full dev vagrant platform, so that a user can copy, paste, run and get it working quickly. Further, this article only covers the successful startup, deployment and validation of the Metron MaaS service. Refer to the master github README for further steps.
Prerequisites
* You need to have a working Metron full dev platform before you proceed with the instructions.
Step 1: Install Required Packages
Run the following commands to install Flask, Jinja2, Squid, and the Elasticsearch HEAD plugin:
vagrant ssh #To SSH onto the full-dev platform
sudo yum install python-flask
sudo yum install python-jinja2
sudo yum install squid
sudo service squid start
sudo /usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
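Optionally, here is a quick check that Squid came up (not part of the original steps; port 3128 is the Squid default assumed here):
sudo service squid status
curl -s -o /dev/null -w '%{http_code}\n' -x localhost:3128 http://www.example.com   # a numeric status means the proxy is answering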
Step 2: Create Mock DGA service files
Run the following commands:
sudo su - metron
mkdir mock_dga
cd mock_dga
Download the files from this link and copy them to the folder. Alternatively, you can use the following commands to create the files:
* vi dga.py
(paste the below code snippet, save and quit)
from flask import Flask
from flask import request, jsonify
import socket

app = Flask(__name__)

# Mock DGA scoring endpoint: GET /apply?host=<domain> returns a JSON verdict
@app.route("/apply", methods=['GET'])
def predict():
    h = request.args.get('host')
    r = {}
    if h == 'yahoo.com' or h == 'amazon.com':
        r['is_malicious'] = 'legit'
    else:
        r['is_malicious'] = 'malicious'
    return jsonify(r)

if __name__ == "__main__":
    # Pick a free ephemeral port, record it in endpoint.dat for MaaS, then serve on it
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.bind(('localhost', 0))
    port = sock.getsockname()[1]
    sock.close()
    with open("endpoint.dat", "w") as text_file:
        text_file.write("{\"url\" : \"http://0.0.0.0:%d\"}" % port)
    app.run(threaded=True, host="0.0.0.0", port=port)
* vi rest.sh
(paste the below code snippet, save and quit)
#!/bin/bash
python dga.py
Run this command to make the files executable:
chmod +x /home/metron/mock_dga/*
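Optionally (this check is not part of the original steps), you can run the mock service locally and query it once before handing it over to MaaS; the port is whatever gets written into endpoint.dat:
cd /home/metron/mock_dga
python dga.py &                  # writes endpoint.dat with the chosen port
sleep 2
cat endpoint.dat                 # e.g. {"url" : "http://0.0.0.0:38475"}
curl 'http://localhost:<port-from-endpoint.dat>/apply?host=yahoo.com'
kill %1                          # stop the background copy when done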
Step 3: Create HDFS directories
Run the following commands as the vagrant user, and _not_ as the metron user:
sudo su - hdfs -c "hadoop fs -mkdir /user/metron"
sudo su - hdfs -c "hadoop fs -chown metron:metron /user/metron"
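To confirm the directory and its ownership (an optional check, not in the original steps):
sudo su - hdfs -c "hadoop fs -ls /user"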
Step 4: Start MaaS service
Run the following commands:
Note: Change the METRON_HOME variable per the version of Metron you are running
sudo su - metron
export METRON_HOME=/usr/metron/0.4.2
$METRON_HOME/bin/maas_service.sh -zq node1:2181
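Before moving to the Ambari UI, a hedged command-line check (assuming the YARN client is available to the metron user) is to list the running YARN applications; the newly launched MaaS application should appear in the output, with a name that depends on your Metron version:
yarn application -list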
Verify the MaaS service is running and view the application log
Follow these steps to ensure that the MaaS service is running properly:
1. Launch Ambari UI at http://node1:8080. Authenticate with admin/admin
2. Go to Services -> YARN -> 'Quick Links' dropdown -> ResourceManager UI
3. You should be able to see the application listed in the UI, similar to the below:
4. Click on the application -> Logs -> AppMaster.stderr log file to view the startup logs. Check for the presence of any errors. If there are none, you are good to deploy the DGA model in the next step.
Step 5: Deploy Mock DGA model
Run the following command as metron user to deploy the DGA model
$METRON_HOME/bin/maas_deploy.sh -zq node1:2181 -lmp /home/metron/mock_dga -hmp /user/metron/models -mo ADD -m 512 -n dga -v 1.0 -ni 1
Once the command completes, you can monitor the ResourceManager UI application logs to check for any errors.
Verify the DGA model has been successfully deployed
a) Run the following command as metron user:
$METRON_HOME/bin/maas_deploy.sh -zq node1:2181 -mo LIST
At the end of the command execution, you should see something similar to the following output, which indicates that the model has been successfully deployed:
Model dga @ 1.0
dga:1.0 @ http://node1:50451 serving:
apply=apply
Note: The port number '50451' in the above output may change across different runs.
b) Try to hit the model via curl by running the following commands, and verify that you see the respective outputs:
[metron@node1 ~]$ curl 'http://localhost:50451/apply?host=testing.com'
{
"is_malicious": "malicious"
}
[metron@node1 ~]$ curl 'http://localhost:50451/apply?host=yahoo.com'
{
"is_malicious": "legit"
}
With this, you have successfully started, deployed and validated Metron MaaS on your full dev Metron platform.
Step 6: Squid Example
The next steps of sending data through the Squid sensor and having it processed through MaaS are not covered as part of this article. Please refer to the steps listed in the github README doc.
08-22-2017
08:48 AM
@ankur V and @leo lee - it looks like you are hitting https://issues.apache.org/jira/browse/METRON-1026. This has been fixed in the latest bits of Metron. Can you give it a try?
04-26-2017
12:26 PM
Thank you @Simon Elliston Ball, I have made some more edits to the screenshots as well.
04-26-2017
12:26 PM
4 Kudos
Note: This article is an extension of the instructions at this link. It uses the example of a 12-node CentOS 7 VM cluster in an Openstack environment, with HDP 2.5 as the base stack for Metron > 0.3.x (or HDP 2.4 for Metron < 0.2.x). Please be aware that Metron deployment using Ambari management packs is actively being enhanced and worked on; some of the steps in this article and/or the behavior might be altered or become obsolete. Refer to this link for the current set of limitations of the Ambari Mpack installation and setup. The selection of nodes, services, slaves and clients in the Ambari cluster wizard is an indicative example; optimization for performance and scale requirements is out of scope for this article.
Prerequisites
For generating RPMs and the Management Pack:
* A local system (Mac or Linux) with build tools installed, viz. Maven and Docker. Refer here for more details on the tool pre-requisites.
* The Docker service has been installed and is running.
* A cloned copy of the Metron git repo.
For Metron cluster deployment:
* 12 VMs running CentOS 7.x.
* JDK > 1.8.x installed on all nodes.
* Ambari Server > 2.4.2.x.
Step 1 - Build Metron RPMs and Management Pack
In this step, you will create the RPMs and the Ambari management pack tarball on your local system. The RPMs and Ambari mpack then need to be SCP'ed onto the cluster hosts.
a) Build Mpack
On your local system where you have cloned the git repo, run the following commands:
cd incubator-metron
mvn clean package -DskipTests
Once the above command has run successfully, the Metron management pack will be generated at: incubator-metron/metron-deployment/packaging/ambari/metron-mpack/target/metron_mpack-xxxx.tar.gz
b) Build RPMs
Run the following commands to generate the RPMs. Note: The Docker service should be running in order for the below command to work properly.
cd incubator-metron/metron-deployment/
mvn clean package -Pbuild-rpms -DskipTests
The above command will build the RPMs and create them under incubator-metron/metron-deployment/packaging/docker/rpm-docker/RPMS/noarch
c) Copy RPMs and Mpack to cluster nodes
Use either SCP or your favorite file transfer application to copy the mpack and RPMs over to the cluster:
* Copy the metron_mpack-xxxx.tar.gz file to the node which is going to run Ambari Server (e.g. node #1).
* On the node where you would like to install Metron (e.g. node #12), create a directory called /localrepo and scp all of the generated Metron RPMs into the /localrepo folder.
Step 2 - Install Ambari Server and Metron management pack
SSH to the node where Ambari Server needs to be installed (node #1) and follow these steps.
Ensure the CentOS repo for Ambari Server is updated:
wget -nv http://public-repo-1.hortonworks.com/ambari/centos7/2.x/updates/2.4.2.0/ambari.repo -O /etc/yum.repos.d/ambari.repo
Install Ambari Server, install the mpack, and start the service:
yum install ambari-server -y
ambari-server setup -s
ambari-server install-mpack --mpack=/path/to/metron_mpack-1.0.0.0-SNAPSHOT.tar.gz --verbose
ambari-server start
Step 3 - Cluster Installation
Point your browser to http://<node#1>:8080 to start the cluster installation wizard. Follow the screenshots below to complete the Metron installation.
Start the wizard by choosing the "Create a Cluster" option and specify a name for the cluster. Select the HDP 2.5 stack for Metron 0.3.x from the version selection page. Specify the list of hosts in your cluster along with the connection information (e.g. private key).
In the "Choose Services" page, select the following services, which are required as a minimum for a working Metron deployment. You may select additional services as per your needs.
* HDFS
* YARN + MapReduce2
* HBase
* Hive
* Pig
* Zookeeper
* Storm
* Spark
* Kafka
* Zeppelin
* Elasticsearch
* Kibana
* Metron
* Slider
Here are sample screenshots:
In the "Assign Masters" page, ensure the following criteria are met. Observe that the Ambari wizard will display warning/error popups when the below criteria are not met.
a) Ensure that the Kibana Server, Metron Enrichment, Metron Indexing and Metron Parsers components are all assigned to the _same node_. It is important to note that these _should not_ be on the same node as the Ambari server. All of these could, for example, reside on node #12.
b) It is preferred to run the Elasticsearch Master on the same node as the Metron components (node #12, in this example).
c) Add up to 4 Kafka Broker components. Ensure that one of the Kafka Broker components is installed on the Metron node #12.
d) [Optional] You may retain the Zookeeper server on only one host and remove the rest.
The other components may be left at their defaults. Note: For some of the components, there is a warning message to remind the user about client selection. Choose "Continue Anyway" if the Validation Issues warning dialog pops up.
In the "Assign Slaves and Clients" window, you need to ensure that the Metron node (#12 in this example) is selected for the following:
* DataNode
* NodeManager
* RegionServer
* Supervisor
* Flume
* ElasticsearchDataNode
* Client
Optionally, you can choose to install "Client" on all the nodes.
In the "Customize Services" window, refer to the screenshots below for filling in the respective tabs:
1. Change the NameNode heap size from the default 1024 MB to about 4096 MB.
2. Under Elasticsearch -> Advanced elastic-site -> zen_discovery_ping_unicast_hosts, specify the location where the Elasticsearch master is installed (e.g. node #12 in this case).
3. Under Kibana -> Advanced kibana-env, set kibana_es_url to the Elasticsearch master node URL with port 9200.
4. Under Metron, change the parameters in the respective tabs below.
Tab #1 - Default settings for Metron services. Provide details of the Elasticsearch hosts.
Tab #2 - Repository settings - Remote vs. Local repository. If you choose to install using a local repository, ensure that you have copied the Metron RPMs into the /localrepo folder on the Metron node (see the hedged sketch at the end of this article). If you choose to install using a remote repository, specify the URL where the repo file is available.
Hit the "Next" button and then the "Deploy" button to proceed with Metron deployment. This will start the cluster deployment, and in time all the services should be up and running.
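As a supplement to the local repository option above, here is a hedged sketch of preparing /localrepo on the Metron node; the hostname (node12), paths and the createrepo step are illustrative assumptions, not steps from the original article:
# run from the build machine; node12 stands in for the Metron node
ssh node12 "mkdir -p /localrepo"
scp incubator-metron/metron-deployment/packaging/docker/rpm-docker/RPMS/noarch/*.rpm node12:/localrepo/
ssh node12 "yum install -y createrepo && createrepo /localrepo"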
03-09-2017
01:34 PM
1 Kudo
Hi @Michael Miklavcic, thanks for the article.
On my Ubuntu 14.04 openstack cluster, I was unable to start the elasticsearch service after following the steps. It was failing with an error saying NoSuchFileException: /usr/share/elasticsearch/config. I had to follow the workaround in this article in order to have the services start successfully. I did not have any issues with the Kibana install; it worked fine.
10-10-2016
08:09 AM
1 Kudo
This is a well written article, very useful indeed. Thank you, @Michael Young!
10-04-2016
09:32 AM
4 Kudos
Pre-requisites
Working Metron cluster - deployed via ansible-playbook or via Ambari + Mpack.
The node on which the opentaxii service is being deployed should have access to HBase.
Step 1 - Deploy Opentaxii Role (Optional - skip if already deployed)
a) Create a playbook to deploy the opentaxii role
[root@metron-test ~]# cat metron/metron-deployment/playbooks/install-opentaxii.yml
- hosts: metron
  become: true
  roles:
    - role: opentaxii
b) Deploy using ansible-playbook
[root@metron-test ~]# ansible-playbook -i ~/metron-deployment/inventory/metron_example playbooks/install-opentaxii.yml -e ansible_python_interpreter=python -e ansible_user=root -e ansible_ssh_private_key_file=/path/to/private-keypair.pem -vvv
c) Verify the service has been deployed successfully using the command:
service opentaxii status
This should show the list of subscribed services along with threat feed counts. Here is a sample output:
[root@metron-test]# service opentaxii status
guest.phishtank_com 888
guest.Abuse_ch 0
guest.CyberCrime_Tracker 0
guest.EmergingThreats_rules 0
guest.Lehigh_edu 0
guest.MalwareDomainList_Hostlist 0
guest.blutmagie_de_torExits 648
guest.dataForLast_7daysOnly 1124
guest.dshield_BlockList 0
Note:
In case you notice the following output:
[root@node1 ~]# service opentaxii status
Checking opentaxii... Running
Services not defined
Refer to METRON-484 for more details and a workaround.
Step 2 - Fetch Latest Opentaxii Feeds
Use the following command to fetch the latest hailataxii feeds into the opentaxii server
service opentaxii sync <service-name> [YYYY-MM-DD]
For example:
service opentaxii sync guest.phishtank_com
service opentaxii sync guest.Abuse_ch 2016-08-01
Note: The date (YYYY-MM-DD) indicates the time from which the threat intel feeds are to be pulled. If no date is given, the sync command picks up the feeds available for the current day.
The above process can be repeated for all the subscribed services.
Step 3 - Load Opentaxii Feeds into HBase
Create sample extractor.json and connection_config.json files as follows:
[root@metron-test]# cat ~/extractor.json
{
  "config": {
    "columns": {
      "ip": 0
    },
    "indicator_column": "ip",
    "type": "malicious_ip",
    "separator": ","
  },
  "extractor": "STIX"
}
[root@metron-test]# cat ~/connection_config.json
{
  "endpoint": "http://localhost:9000/services/discovery",
  "username": "guest",
  "password": "guest",
  "type": "DISCOVER",
  "collection": "guest.MalwareDomainList_Hostlist",
  "table": "threatintel",
  "columnFamily": "t",
  "allowedIndicatorTypes": [ "domainname:FQDN", "address:IPV_4_ADDR" ]
}
Now, push the hailataxii feeds from the opentaxii server into HBase using the following script:
/usr/metron/<METRON_VERSION>/bin/threatintel_taxii_load.sh -b <START_TIME> -c /path/to/connection_config.json -e /path/to/extractor.json -p <TIME_INTERVAL_MSECS>
For example:
/usr/metron/0.2.0BETA/bin/threatintel_taxii_load.sh -b "2016-08-01 00:00:00" -c ~/connection_config.json -e ~/extractor.json -p 10000
Step 4 - Verify in HBase
Query the HBase table to check for the threat intel feeds:
echo "scan 'threatintel'" | hbase shell