Member since: 03-23-2016
Posts: 56
Kudos Received: 20
Solutions: 7
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2377 | 03-16-2018 01:47 PM
 | 1740 | 11-28-2017 06:41 PM
 | 6501 | 10-04-2017 02:19 PM
 | 1782 | 09-16-2017 07:19 PM
 | 4907 | 01-03-2017 05:52 PM
05-04-2016
01:22 PM
And you probably already have this covered, but do not use the "--limit @playbook.retry" option, as I think you mentioned in your second posting.
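Purely to illustrate the difference (the inventory script and playbook names here are placeholders, not the exact Metron commands):
ansible-playbook -i ec2.py playbook.yml                          # full run against all hosts
ansible-playbook -i ec2.py playbook.yml --limit @playbook.retry  # avoid this; it skips hosts that previously succeeded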
05-04-2016
01:17 PM
I recently added a shell script to make the process easier and to help catch common errors. If you're bold enough to try master, I'd love to see if that works for you. If you are able to launch EC2 hosts with your Amazon credentials, then you can start here: https://github.com/apache/incubator-metron/tree/master/metron-deployment/amazon-ec2#deploy-metron Thanks for giving this a go!
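If it helps, getting to that starting point looks roughly like this (the repository URL and path come from the link above; follow the README in that directory for the actual deployment steps):
git clone https://github.com/apache/incubator-metron.git
cd incubator-metron/metron-deployment/amazon-ec2
# then follow the README's deploy instructions from here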
05-04-2016
01:15 PM
Are you sure that you exported this environment variable?
export EC2_INI_PATH=conf/ec2.ini
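A quick sanity check in the same shell (nothing Metron-specific, just confirming the variable is set):
echo $EC2_INI_PATH
# should print conf/ec2.ini; empty output means it was not exported in this shell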
05-03-2016
09:54 PM
You do not want to create an ec2.py file. That will certainly break things. Please outline exactly what you do, step by step, when you 'start from scratch'. That should allow us to help more. Thanks!
04-27-2016
06:42 PM
1 Kudo
Sagar is running behind a corporate proxy. He had configured the proxy in both his Mac system settings and his global Vagrant settings. When he issued a web request to "node1:8080" or "192.168.x.x:8080", it went to the proxy server, which then redirected him to the "node1.com" web address. We updated the Mac and Vagrant settings to bypass the proxy for that specific IP address, which resolved the immediate issue. As a follow-on, we need to update those same settings to bypass the proxy for any address in the private subnets 10.0.0.0/8 and 192.168.0.0/16. That should help avoid problems with future deployments of Metron, should it grab a different IP address. He is attempting a fresh deployment with these proxy settings. We are not out of the woods yet, but that's progress at least. @sagar gaikwad, please update us on how the deployment goes.
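As a rough sketch of that bypass for shell-based tools only (one common mechanism, not the exact Mac or Vagrant settings; the proxy hostname is a placeholder, and not every tool honors CIDR ranges in no_proxy):
export http_proxy=http://corporate-proxy.example.com:8080   # hypothetical corporate proxy
export https_proxy=http://corporate-proxy.example.com:8080
export no_proxy=localhost,127.0.0.1,node1,10.0.0.0/8,192.168.0.0/16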
04-26-2016
09:21 PM
A log should exist at 'deployment/vagrant/singlenode-vagrant/ansible.log'. Can you attach that here in HCC?
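If the full file is too large to attach, even the tail of it would help (run from the directory where you kicked off the deployment):
tail -n 200 deployment/vagrant/singlenode-vagrant/ansible.log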
04-26-2016
09:09 PM
Have you logged into Ambari to check it out? Try logging into http://node1:8080 as admin/admin. What does that look like? Do you see much red?
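If the browser can't reach it, a quick command-line check against Ambari's REST API (same default admin/admin credentials) should at least tell us whether the server is responding:
curl -u admin:admin http://node1:8080/api/v1/clusters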
04-13-2016
05:37 PM
Looks like Ambari died. It would be useful to extract /var/log/ambari-server/ambari-server.log from that host and share it with us. The simplest option is to terminate your hosts in EC2 and start the deployment again, as @rmerriman suggested. If you run into the same issue, please share the deployment/amazon-ec2/ansible.log file that is created.
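One way to pull that log down to your workstation; the key path and hostname below are placeholders for your own key pair and EC2 host:
scp -i /path/to/your-key.pem centos@<your-ec2-host>:/var/log/ambari-server/ambari-server.log .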
04-13-2016
12:05 PM
The fact that you can see the host in your EC2 Dashboard tells me that your AWS/IAM setup is probably not a problem.
The status checks provided by AWS are extremely high-level, though. We need to look more closely at Ambari. Log in to the box and see if Ambari is even running:
ssh centos@ec2-52-38-224-98.us-west-2.compute.amazonaws.com
service ambari-server status
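If that shows the server is stopped, restarting it and then checking its log are reasonable next steps (standard Ambari service commands, not specific to this deployment):
service ambari-server start
tail -n 100 /var/log/ambari-server/ambari-server.log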