Member since: 07-14-2016
Posts: 215
Kudos Received: 45
Solutions: 16
My Accepted Solutions
Views | Posted
---|---
3769 | 12-13-2018 05:01 PM
10506 | 09-07-2018 06:12 AM
2753 | 08-02-2018 07:04 AM
3715 | 03-26-2018 07:38 AM
2818 | 12-06-2017 07:53 AM
12-26-2016
01:41 PM
Hey @Farrukh Naveed Anjum, can you run vagrant box update and see if it resolves your issue?
12-16-2016
05:56 AM
Hey @Earl Hinkle, please post the output of incubator-metron/metron-deployment/scripts/platform-info.sh
10-27-2016
02:49 PM
Ahh... I just figured out that I can use Hosts -> Actions -> Selected Host -> Supervisors -> Add. I did not look thoroughly enough earlier, duh :/ (a REST-based alternative is sketched below for completeness).
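For anyone who prefers scripting it, the same should be achievable through the Ambari REST API. This is a rough sketch, not verified against this exact stack version; AMBARI_HOST, CLUSTER_NAME, TARGET_HOST, and the admin/admin credentials are all placeholders:
# register the SUPERVISOR component on the new host
curl -u admin:admin -H 'X-Requested-By: ambari' -X POST \
  http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/TARGET_HOST/host_components/SUPERVISOR
# install it
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"HostRoles": {"state": "INSTALLED"}}' \
  http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/TARGET_HOST/host_components/SUPERVISOR
# and start it once the install completes
curl -u admin:admin -H 'X-Requested-By: ambari' -X PUT \
  -d '{"HostRoles": {"state": "STARTED"}}' \
  http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/hosts/TARGET_HOST/host_components/SUPERVISOR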
10-27-2016
02:27 PM
In an existing 12-node HDP stack, I have Storm Supervisors running on nodes 2 through 6. Now, I would like to add Supervisors on two more nodes (11 and 12). Can you please tell me how this can be done? I have tried the Add Service wizard, but it would not let me do it. I also tried to "Move" the Supervisor on one node (say 4) to the node 12, but apparently that is not allowed as well. Any help is much appreciated. Thanks, Anand
Labels: Apache Storm
10-10-2016
08:09 AM
1 Kudo
This is a well written article, very useful indeed. Thank you, @Michael Young!
10-04-2016
09:32 AM
4 Kudos
Prerequisites
A working Metron cluster - deployed via ansible-playbook or via Ambari + Mpack.
The node on which the opentaxii service is being deployed should have access to HBase; a quick connectivity check is sketched below.
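A minimal sanity check, assuming the hbase shell client is available on that node:
[root@metron-test ~]# echo "status" | hbase shell
If this prints cluster status rather than a connection error, HBase access is in place.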
Step 1 - Deploy Opentaxii Role (Optional - if not deployed)
a) Create a playbook to deploy the opentaxii role:
[root@metron-test ~]# cat metron/metron-deployment/playbooks/install-opentaxii.yml
- hosts: metron
become: true
roles:
- role: opentaxii
b) Deploy using ansible-playbook
[root@metron-test ~]# ansible-playbook -i ~/metron-deployment/inventory/metron_example playbooks/install-opentaxii.yml -e ansible_python_interpreter=python -e ansible_user=root -e ansible_ssh_private_key_file=/path/to/private-keypair.pem -vvv
c) Verify the service has been deployed successfully using the command:
service opentaxii status
This should show the list of subscribed services along with threat feed counts. Here is a sample output:
[root@metron-test]# service opentaxii status
guest.phishtank_com 888
guest.Abuse_ch 0
guest.CyberCrime_Tracker 0
guest.EmergingThreats_rules 0
guest.Lehigh_edu 0
guest.MalwareDomainList_Hostlist 0
guest.blutmagie_de_torExits 648
guest.dataForLast_7daysOnly 1124
guest.dshield_BlockList 0
Note: If the status command instead shows the following:
[root@node1 ~]# service opentaxii status
Checking opentaxii... Running
Services not defined
refer to METRON-484 for more details and a workaround.
Step 2 - Fetch Latest Opentaxii Feeds
Use the following command to fetch the latest hailataxii feeds into the opentaxii server:
service opentaxii sync <service-name> [YYYY-MM-DD]
For example:
service opentaxii sync guest.phishtank_com
service opentaxii sync guest.Abuse_ch 2016-08-01
Note: The date (YYYY-MM-DD) indicates the date from which the threat intel feeds are to be pulled. If no date is suffixed, the sync command picks up the feeds available for the current day.
The above process can be repeated for each subscribed service, e.g. with a small shell loop like the one sketched below.
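Purely a convenience sketch, using the service names from the status output above:
# sync every subscribed feed for the current day
for svc in guest.phishtank_com guest.Abuse_ch guest.CyberCrime_Tracker \
           guest.EmergingThreats_rules guest.Lehigh_edu \
           guest.MalwareDomainList_Hostlist guest.blutmagie_de_torExits \
           guest.dataForLast_7daysOnly guest.dshield_BlockList; do
  service opentaxii sync "$svc"
done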
Step 3 - Load Opentaxii Feeds into HBase
Create sample extractor.json and connection_config.json files as follows:
[root@metron-test]# cat ~/extractor.json
{
"config": {
"columns": {
"ip": 0
},
"indicator_column": "ip",
"type" : "malicious_ip",
"separator" : ","
},
"extractor" : "STIX"
}
[root@metron-test]# cat ~/connection_config.json
{
"endpoint" : "http://localhost:9000/services/discovery"
,"username" : "guest"
,"password" : "guest"
,"type" : "DISCOVER"
,"collection" : "guest.MalwareDomainList_Hostlist"
,"table" : "threatintel"
,"columnFamily" : "t"
,"allowedIndicatorTypes" : [ "domainname:FQDN", "address:IPV_4_ADDR" ]
}
Now, push the hailataxii feeds from the opentaxii server into HBase using the following script:
/usr/metron/<METRON_VERSION>/bin/threatintel_taxii_load.sh -b <START_TIME> -c /path/to/connection_config.json -e /path/to/extractor.json -p <TIME_INTERVAL_MSECS>
For example:
/usr/metron/0.2.0BETA/bin/threatintel_taxii_load.sh -b "2016-08-01 00:00:00" -c ~/connection_config.json -e ~/extractor.json -p 10000
Step 4 - Verify in HBase
Query the HBase table to check for the threat intel feeds:
echo "scan 'threatintel'" | hbase shell
10-03-2016
09:49 AM
1 Kudo
Hi @Naveen Maheswaran, the following worked fine for me - I tried it with a quick-dev deployment and it went through cleanly. Please check if you get similar results in your case.
1) To deploy without the pycapa role, edit run.sh and add 'pycapa' to the skip tags:
(testmetron) ➜ quick-dev-platform git:(master) ✗ pwd
~/Metron/incubator-metron-fork/incubator-metron/metron-deployment/vagrant/quick-dev-platform
(testmetron) ➜ quick-dev-platform git:(master) ✗ tail -5 run.sh
vagrant \
--ansible-tags="hdp-deploy,metron" \
--ansible-skip-tags="solr,yaf,pycapa" \
up
2) Verify /opt/pycapa is absent on the vagrant node:
(testmetron) ➜ quick-dev-platform git:(master) ✗ vagrant ssh
Last login: Mon Oct 3 09:14:30 2016 from 192.168.66.1
[vagrant@node1 ~]$ ls /opt/pycapa
ls: cannot access /opt/pycapa: No such file or directory
3) Next, deploy the 'pycapa' role alone using the vagrant command:
(testmetron) ➜ quick-dev-platform git:(master) ✗ vagrant --ansible-tags="pycapa" provision
Running with ansible-tags: ["pycapa"]
==> node1: Running provisioner: ansible...
node1: Running ansible-playbook...
<snip> 4) Now see that the pycapa installables are created under /opt (testmetron) ➜ quick-dev-platform git:(master) ✗ vagrant ssh
[vagrant@node1 ~]$ ls /opt/pycapa/pycapa*
/opt/pycapa/pycapa:
build dist LICENSE pycapa pycapa.egg-info README.md requirements.txt setup.py VERSION
/opt/pycapa/pycapa-venv:
bin include lib lib64 pip-selfcheck.json share tests
Let me know if this works. Cheers, Anand
09-27-2016
03:05 PM
FWIW, I was able to run the quick-dev installation without any issues.
09-27-2016
09:49 AM
1 Kudo
Hey HS, I think this is most likely an issue with master. I am seeing the same error on my box as well:
<node1> ESTABLISH SSH CONNECTION FOR USER: vagrant
<node1> SSH: EXEC ssh -C -q -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes -i /Users/asubramanian/Desktop/Metron/incubator-metron-fork/incubator-metron/metron-deployment/vagrant/full-dev-platform/.vagrant/machines/node1/virtualbox/private_key -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=30 -o ControlPath=/Users/asubramanian/.ansible/cp/%h-%p-%r -tt node1 '/bin/sh -c '"'"'sudo -H -S -n -u root /bin/sh -c '"'"'"'"'"'"'"'"'echo BECOME-SUCCESS-gavsahqeydndosbmwnmihimsyouqkvhh; LANG=en_US.UTF-8 LC_ALL=en_US.UTF-8 LC_MESSAGES=en_US.UTF-8 /usr/bin/python /home/vagrant/.ansible/tmp/ansible-tmp-1474969027.04-192863002921867/ambari_cluster_state; rm -rf "/home/vagrant/.ansible/tmp/ansible-tmp-1474969027.04-192863002921867/" > /dev/null 2>&1'"'"'"'"'"'"'"'"''"'"''
fatal: [node1]: FAILED! => {"changed": false, "failed": true, "invocation": {"module_args": {"blueprint_name": null, "blueprint_var": null, "cluster_name": "metron_cluster", "cluster_state": "started", "configurations": null, "host": "node1", "password": "admin", "port": 8080, "username": "admin", "wait_for_complete": true}, "module_name": "ambari_cluster_state"}, "msg": "Ambari client exception occurred: No JSON object could be decoded"}
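The "No JSON object could be decoded" part suggests Ambari is not returning JSON at all. One quick way to confirm (assuming the default admin/admin credentials and port 8080 seen in the module args above) is to hit the Ambari REST endpoint directly:
curl -u admin:admin -i http://node1:8080/api/v1/clusters/metron_cluster
If that comes back empty or as HTML rather than JSON, the Ambari server on node1 is probably not up yet.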
Here's my platform info output:
(testmetron) asubramanian:incubator-metron asubramanian$ metron-deployment/scripts/platform-info.sh
Metron 0.2.0BETA
--
* master
--
commit 3d5f279caea2b9349a05b3a7316ef19e2ca8cb11
Author: cstella <cestella@gmail.com>
Date: Mon Sep 26 18:00:22 2016 -0400
METRON-453: Add a stellar shell function to open an external editor and return the editor's contents closes apache/incubator-metron#272
--
metron-deployment/inventory/metron_example/hosts | 6 +++---
metron-deployment/vagrant/full-dev-platform/Vagrantfile | 1 +
metron-deployment/vagrant/quick-dev-platform/Vagrantfile | 1 +
metron-deployment/vagrant/quick-dev-platform/run.sh | 2 +-
4 files changed, 6 insertions(+), 4 deletions(-)
--
ansible 2.0.0.2
config file =
configured module search path = Default w/o overrides
--
Vagrant 1.8.1
--
Python 2.7.10
--
Apache Maven 3.3.9 (bb52d8502b132ec0a5a3f4c09453c07478323dc5; 2015-11-10T22:11:47+05:30)
Maven home: /usr/local/Cellar/maven/3.3.9/libexec
Java version: 1.8.0_91, vendor: Oracle Corporation
Java home: /Library/Java/JavaVirtualMachines/jdk1.8.0_91.jdk/Contents/Home/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "mac os x", version: "10.11.6", arch: "x86_64", family: "mac"
--
Darwin asubramanian.local 15.6.0 Darwin Kernel Version 15.6.0: Mon Aug 29 20:21:34 PDT 2016; root:xnu-3248.60.11~1/RELEASE_X86_64 x86_64
IMHO this calls for logging a defect. Note that you can make the following changes to get verbose output when you run provision:
- Edit the Vagrantfile
- Add ansible.verbose = "vvv" at the bottom of the file where the other ansible parameters are defined. Something like this:
<snip>
ansible.inventory_path = "../../inventory/full-dev-platform"
ansible.verbose = "vvv"
end
end
Regards, Anand
09-23-2016
09:46 AM
Thank you for the solution. I found this helpful while upgrading from 2.2.2 to 2.4. Cheers!