Member since: 09-15-2015
Posts: 75
Kudos Received: 33
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1451 | 02-22-2016 09:32 PM |
| | 2332 | 12-11-2015 03:27 AM |
| | 8497 | 10-26-2015 10:16 PM |
| | 7685 | 10-15-2015 06:09 PM |
05-03-2016
02:32 PM
'pip uninstall ansible' did the trick to remove Ansible 2.0.2. Successfully installed Ansible 2.0.0.2; will re-run 'vagrant up'.
05-03-2016
02:23 PM
Breaking my previous comment up since it's over the character limit. I checked my Ansible version and it's using 2.0.2. I uninstalled it and re-installed 2.0.0.2, but I'm getting this message: $ pip install ansible==2.0.0.2
Requirement already satisfied (use --upgrade to upgrade): ansible==2.0.0.2 in /usr/local/lib/python2.7/site-packages
Requirement already satisfied (use --upgrade to upgrade): paramiko in /usr/local/lib/python2.7/site-packages (from ansible==2.0.0.2)
Requirement already satisfied (use --upgrade to upgrade): setuptools in /usr/local/lib/python2.7/site-packages (from ansible==2.0.0.2)
Requirement already satisfied (use --upgrade to upgrade): PyYAML in /usr/local/lib/python2.7/site-packages (from ansible==2.0.0.2)
Requirement already satisfied (use --upgrade to upgrade): pycrypto>=2.6 in /usr/local/lib/python2.7/site-packages (from ansible==2.0.0.2)
Requirement already satisfied (use --upgrade to upgrade): jinja2 in /usr/local/lib/python2.7/site-packages (from ansible==2.0.0.2)
Requirement already satisfied (use --upgrade to upgrade): pyasn1>=0.1.7 in /usr/local/lib/python2.7/site-packages (from paramiko->ansible==2.0.0.2)
Requirement already satisfied (use --upgrade to upgrade): cryptography>=1.1 in /usr/local/lib/python2.7/site-packages (from paramiko->ansible==2.0.0.2)
Requirement already satisfied (use --upgrade to upgrade): MarkupSafe in /usr/local/lib/python2.7/site-packages (from jinja2->ansible==2.0.0.2)
Requirement already satisfied (use --upgrade to upgrade): enum34 in /usr/local/lib/python2.7/site-packages (from cryptography>=1.1->paramiko->ansible==2.0.0.2)
Requirement already satisfied (use --upgrade to upgrade): ipaddress in /usr/local/lib/python2.7/site-packages (from cryptography>=1.1->paramiko->ansible==2.0.0.2)
Requirement already satisfied (use --upgrade to upgrade): six>=1.4.1 in /usr/local/lib/python2.7/site-packages (from cryptography>=1.1->paramiko->ansible==2.0.0.2)
Requirement already satisfied (use --upgrade to upgrade): idna>=2.0 in /usr/local/lib/python2.7/site-packages (from cryptography>=1.1->paramiko->ansible==2.0.0.2)
Requirement already satisfied (use --upgrade to upgrade): cffi>=1.4.1 in /usr/local/lib/python2.7/site-packages (from cryptography>=1.1->paramiko->ansible==2.0.0.2)
Requirement already satisfied (use --upgrade to upgrade): pycparser in /usr/local/lib/python2.7/site-packages (from cffi>=1.4.1->cryptography>=1.1->paramiko->ansible==2.0.0.2)
05-03-2016
02:22 PM
I'm getting this error trying the Vagrant install. It comes after running the 'vagrant up' command. TASK [ambari_config : Start All Hadoop Services node1] *************************
failed: [node1] (item=HDFS) => {"connection": "close", "content": "", "content_type": "text/plain;charset=ISO-8859-1", "failed": true, "item": "HDFS", "msg": "Status code was not [200, 202]: HTTP Error 500: Server Error", "redirected": false, "server": "Jetty(8.1.17.v20150415)", "set_cookie": "AMBARISESSIONID=1sgn1chrad41b1qw1kj03dd9kg;Path=/;HttpOnly", "status": 500, "url": "http://node1:8080/api/v1/clusters/metron_cluster/services/HDFS", "user": "admin"}
failed: [node1] (item=YARN) => {"connection": "close", "content": "", "content_type": "text/plain;charset=ISO-8859-1", "failed": true, "item": "YARN", "msg": "Status code was not [200, 202]: HTTP Error 500: Server Error", "redirected": false, "server": "Jetty(8.1.17.v20150415)", "set_cookie": "AMBARISESSIONID=1is6xyjxvpccqyba2f46le9sl;Path=/;HttpOnly", "status": 500, "url": "http://node1:8080/api/v1/clusters/metron_cluster/services/YARN", "user": "admin"}
failed: [node1] (item=MAPREDUCE2) => {"connection": "close", "content": "", "content_type": "text/plain;charset=ISO-8859-1", "failed": true, "item": "MAPREDUCE2", "msg": "Status code was not [200, 202]: HTTP Error 500: Server Error", "redirected": false, "server": "Jetty(8.1.17.v20150415)", "set_cookie": "AMBARISESSIONID=1nvb5mwkf1a0r19a051px2z1da;Path=/;HttpOnly", "status": 500, "url": "http://node1:8080/api/v1/clusters/metron_cluster/services/MAPREDUCE2", "user": "admin"}
failed: [node1] (item=ZOOKEEPER) => {"connection": "close", "content": "", "content_type": "text/plain;charset=ISO-8859-1", "failed": true, "item": "ZOOKEEPER", "msg": "Status code was not [200, 202]: HTTP Error 500: Server Error", "redirected": false, "server": "Jetty(8.1.17.v20150415)", "set_cookie": "AMBARISESSIONID=1t0e24juqq5be1oneug42xd9fy;Path=/;HttpOnly", "status": 500, "url": "http://node1:8080/api/v1/clusters/metron_cluster/services/ZOOKEEPER", "user": "admin"}
failed: [node1] (item=HBASE) => {"connection": "close", "content": "", "content_type": "text/plain;charset=ISO-8859-1", "failed": true, "item": "HBASE", "msg": "Status code was not [200, 202]: HTTP Error 500: Server Error", "redirected": false, "server": "Jetty(8.1.17.v20150415)", "set_cookie": "AMBARISESSIONID=1v10mm1sa85tj1jd0cxg53kmfr;Path=/;HttpOnly", "status": 500, "url": "http://node1:8080/api/v1/clusters/metron_cluster/services/HBASE", "user": "admin"}
failed: [node1] (item=STORM) => {"connection": "close", "content": "", "content_type": "text/plain;charset=ISO-8859-1", "failed": true, "item": "STORM", "msg": "Status code was not [200, 202]: HTTP Error 500: Server Error", "redirected": false, "server": "Jetty(8.1.17.v20150415)", "set_cookie": "AMBARISESSIONID=1xso3y6yntj86150i2ne8ixqdl;Path=/;HttpOnly", "status": 500, "url": "http://node1:8080/api/v1/clusters/metron_cluster/services/STORM", "user": "admin"}
failed: [node1] (item=KAFKA) => {"connection": "close", "content": "", "content_type": "text/plain;charset=ISO-8859-1", "failed": true, "item": "KAFKA", "msg": "Status code was not [200, 202]: HTTP Error 500: Server Error", "redirected": false, "server": "Jetty(8.1.17.v20150415)", "set_cookie": "AMBARISESSIONID=1e2nei5aurn69eq1lcnvfcnna;Path=/;HttpOnly", "status": 500, "url": "http://node1:8080/api/v1/clusters/metron_cluster/services/KAFKA", "user": "admin"}
to retry, use: --limit @../../playbooks/metron_full_install.retry
PLAY RECAP *********************************************************************
node1 : ok=38 changed=28 unreachable=0 failed=1
Ansible failed to complete successfully. Any error output should be
visible above. Please fix these errors and try again.
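One way to see why those starts return HTTP 500 is to query each failing Ambari service endpoint directly. A minimal sketch using the URLs and the admin user from the log above (the admin:admin password is an assumption based on the Vagrant default; the live curl call is commented out because it needs the running cluster):

```shell
# the seven services that failed in the playbook run above
services="HDFS YARN MAPREDUCE2 ZOOKEEPER HBASE STORM KAFKA"
base="http://node1:8080/api/v1/clusters/metron_cluster/services"
for s in $services; do
  echo "GET $base/$s"
  # against the live cluster, uncomment to see the service state and any error body:
  # curl -s -u admin:admin "$base/$s?fields=ServiceInfo/state"
done
```

The Ambari server log (/var/log/ambari-server/ambari-server.log on node1) usually carries the stack trace behind a Jetty 500.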
05-03-2016
02:04 PM
When I re-run the ansible-playbook command with the '--limit @playbook.retry' option, it says the ec2.py file is not found, and subsequent messages relate to EC2 hosts not found in the file. So I went ahead and created the ec2.py file with the hostnames separated by newlines. It's probably a different issue; I just don't know what it is.
05-02-2016
09:16 PM
I tried deleting the id_rsa.pub file and re-creating it, but I'm still getting the error. I created the ec2.py file containing the hostnames of all EC2 instances. I re-ran the command with --limit @playbook.retry but am not getting any results. ansible-playbook -i ec2.py playbook.yml --limit @playbook.retry
[WARNING]: Host file not found: ec2.py
[WARNING]: provided hosts list is empty, only localhost is available
PLAY [localhost] ***************************************************************
skipping: no hosts matched
PLAY [ec2] *********************************************************************
skipping: no hosts matched
PLAY [ec2] *********************************************************************
skipping: no hosts matched
PLAY [ec2] *********************************************************************
skipping: no hosts matched
PLAY [ambari_*] ****************************************************************
skipping: no hosts matched
PLAY [ambari_master] ***********************************************************
skipping: no hosts matched
PLAY [ambari_slave] ************************************************************
skipping: no hosts matched
PLAY [ambari_master] ***********************************************************
skipping: no hosts matched
PLAY [ec2] *********************************************************************
skipping: no hosts matched
PLAY [metron] ******************************************************************
skipping: no hosts matched
PLAY [hadoop_client] ***********************************************************
skipping: no hosts matched
PLAY [search] ******************************************************************
skipping: no hosts matched
ec2-52-25-211-42.us-west-2.compute.amazonaws.com
PLAY [mysql] *******************************************************************
skipping: no hosts matched
PLAY [ambari_slave] ************************************************************
skipping: no hosts matched
PLAY [sensors] *****************************************************************
skipping: no hosts matched
PLAY [enrichment] **************************************************************
skipping: no hosts matched
PLAY [web] *********************************************************************
skipping: no hosts matched
PLAY [localhost] ***************************************************************
skipping: no hosts matched
PLAY RECAP *********************************************************************
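For reference, the ec2.py used by these playbooks is normally an executable dynamic-inventory script that queries AWS, not a plain list of hostnames, which is why Ansible warns "Host file not found" and then finds no hosts. A minimal static-inventory workaround sketch (the filename static_hosts is hypothetical, the hostname is one from the log above, and the group name has to match the plays, e.g. ec2):

```shell
# write a throwaway static inventory; 'static_hosts' is a hypothetical filename
cat > static_hosts <<'EOF'
[ec2]
ec2-52-25-211-42.us-west-2.compute.amazonaws.com
EOF
# then point the playbook at it (needs ansible and SSH access, so shown commented):
# ansible-playbook -i static_hosts playbook.yml --limit @playbook.retry
```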
05-02-2016
08:55 PM
I'm getting the following error when running ansible-playbook -i ec2.py playbook.yml.
TASK [Sanity check Metron web] *************************************************
fatal: [localhost -> localhost]: FAILED! => {"changed": false, "elapsed": 20, "failed": true, "msg": "Timeout when waiting for ec2-52-39-145-200.us-west-2.compute.amazonaws.com:5000"}
to retry, use: --limit @playbook.retry
PLAY RECAP *********************************************************************
ec2-52-26-145-189.us-west-2.compute.amazonaws.com : ok=2 changed=0 unreachable=1 failed=0
ec2-52-33-71-234.us-west-2.compute.amazonaws.com : ok=2 changed=0 unreachable=1 failed=0
ec2-52-34-108-164.us-west-2.compute.amazonaws.com : ok=2 changed=0 unreachable=1 failed=0
ec2-52-36-240-171.us-west-2.compute.amazonaws.com : ok=2 changed=0 unreachable=1 failed=0
ec2-52-37-137-111.us-west-2.compute.amazonaws.com : ok=2 changed=0 unreachable=1 failed=0
ec2-52-38-193-214.us-west-2.compute.amazonaws.com : ok=2 changed=0 unreachable=1 failed=0
ec2-52-39-145-200.us-west-2.compute.amazonaws.com : ok=2 changed=0 unreachable=1 failed=0
ec2-52-39-201-67.us-west-2.compute.amazonaws.com : ok=2 changed=0 unreachable=1 failed=0
ec2-52-39-205-252.us-west-2.compute.amazonaws.com : ok=2 changed=0 unreachable=1 failed=0
ec2-52-39-88-236.us-west-2.compute.amazonaws.com : ok=2 changed=0 unreachable=1 failed=0
localhost : ok=33 changed=16 unreachable=0 failed=1
I updated the ansible.cfg as outlined here https://community.hortonworks.com/questions/24344/aws-unreachable-error-when-executing-metron-instal.html to fix a similar issue, but I'm still getting the same error.
Labels:
- Apache Metron
04-07-2016
09:40 PM
It works, but is it secure? No. Unauthorized impersonation is the biggest problem in the cluster; with Kerberos you won't have this problem. When you run the sync command from the Linux box, you need a user principal that can obtain Kerberos tickets for authentication (authN). AD group mapping must also be in sync with the OS/HDFS to ensure consistent authorization (authZ) across components.
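That last point can be wired into Hadoop itself; a minimal core-site.xml sketch, assuming LDAP-based group mapping against AD (the property names are Hadoop's standard LdapGroupsMapping settings, but every hostname and DN below is a placeholder for your environment):

```xml
<!-- core-site.xml: resolve HDFS users' groups directly from AD over LDAP -->
<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.LdapGroupsMapping</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.url</name>
  <value>ldap://ad.example.com:389</value> <!-- placeholder AD host -->
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.bind.user</name>
  <value>cn=hdfs-bind,ou=ServiceAccounts,dc=example,dc=com</value> <!-- placeholder DN -->
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.base</name>
  <value>dc=example,dc=com</value> <!-- placeholder search base -->
</property>
```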
02-22-2016
09:32 PM
2 Kudos
Scott, there are two layers of memory settings you need to be aware of: the NodeManager and the containers. The NodeManager advertises all the memory it can provide to containers, and you generally want more containers with decent memory rather than a few huge ones. A good rule of thumb is 2048 MB of memory per container, so with 53 GB of available memory per node you get about 26 containers per node; 8 GB per container is, in my opinion, too big. We don't know how many disks the SAN storage exposes to Hadoop, but you can disregard disks in the equation, since that part of the formula is really meant for on-premise clusters. You can run the standard manual calculation of the memory settings, substituting your own containers-per-node and memory-per-container values (26 and 2048 MB respectively here). One caveat: 53 GB of available RAM per VM is too high when the VM only has 54 GB total. Typically you set aside about 8 GB for other processes (OS, HBase, etc.), which leaves roughly 46 GB of available memory per node. Hope this helps.
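The calculation above can be sketched with the standard HDP sizing arithmetic. The property names are the usual YARN/MapReduce settings; the inputs (26 containers, 2048 MB each) are the example numbers from this thread, so swap in your own:

```shell
# inputs from the discussion above; replace with your node's values
containers=26
ram_per_container_mb=2048

# NodeManager total and scheduler allocation bounds
echo "yarn.nodemanager.resource.memory-mb  = $((containers * ram_per_container_mb))"
echo "yarn.scheduler.minimum-allocation-mb = $ram_per_container_mb"
echo "yarn.scheduler.maximum-allocation-mb = $((containers * ram_per_container_mb))"

# MapReduce containers: reducers get twice a mapper's memory, heap is ~80% of each
echo "mapreduce.map.memory.mb    = $ram_per_container_mb"
echo "mapreduce.reduce.memory.mb = $((2 * ram_per_container_mb))"
echo "mapreduce.map.java.opts    = -Xmx$((ram_per_container_mb * 8 / 10))m"
echo "mapreduce.reduce.java.opts = -Xmx$((2 * ram_per_container_mb * 8 / 10))m"
```

With 26 containers at 2048 MB, that yields 53248 MB for yarn.nodemanager.resource.memory-mb.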
02-01-2016
06:50 PM
Are you still having issues, Babu?
12-11-2015
03:27 AM
Are you using Hive? The only supported engine for running Hive with HDFS TDE today is MR; Hive on Tez doesn't work yet, but should in the near future. If you are using Hive on MR, can you post the exception here? And what is the Ranger policy defined for that encrypted folder?
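As a session-level illustration of pinning Hive to the supported engine (hive.execution.engine is the standard Hive property; the table name below is hypothetical):

```sql
-- force this session onto MapReduce instead of Tez for queries over the encrypted zone
SET hive.execution.engine=mr;
SELECT COUNT(*) FROM secure_db.events;  -- hypothetical encrypted-zone table
```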