Member since: 09-30-2015
Posts: 26
Kudos Received: 11
Solutions: 4
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2000 | 09-01-2016 07:50 AM |
| | 11747 | 04-21-2016 12:29 PM |
| | 2591 | 04-08-2016 06:59 AM |
| | 1158 | 12-02-2015 03:49 PM |
05-04-2017
12:06 PM
Okay, we'll need to see the Cloudbreak logs. You can get them by SSH-ing to the controller machine and running the command "cbd logs cloudbreak". You can also check the Auto Scaling tab in the EC2 console (Scaling History) to see whether the VMs are available and healthy. @cduby @DL
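A minimal sketch of collecting those logs, where the SSH user, key path and controller address are placeholders to substitute with your own values:

# SSH to the controller machine (user, key and IP are placeholders)
ssh -i ~/.ssh/my-key.pem <user>@<controller-ip>
# On the controller, dump the Cloudbreak logs so they can be shared
cbd logs cloudbreak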
05-03-2017
03:37 PM
@cduby @DL If you're using spot instances, can you try increasing the bid price, or try without spot instances first? I can see on the screenshots that infrastructure creation is taking more than an hour (it should be around 5 minutes with on-demand instances, and a few minutes more with spot-priced instances). You can check the spot requests page and see if there are any messages there, like "request is pending". So the main problem is that infra creation is taking too much time, and that's why the temporary credentials used by HDC have expired. It is still a bug, because spot instance requests can sometimes take more than an hour to fulfill and HDC should be able to handle that. But for now, as a workaround, you should find out what is taking so long (my guess is the spot requests) and change the cluster configuration accordingly: different instance types, bid price, etc.
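If you prefer a terminal over the console, the state of the spot requests can also be checked with the AWS CLI; this is only an illustration (the region is a placeholder), not something HDC requires:

# List spot instance requests with their state and status message (region is a placeholder)
aws ec2 describe-spot-instance-requests \
    --region us-east-1 \
    --query 'SpotInstanceRequests[].{Id:SpotInstanceRequestId,State:State,Status:Status.Message}' \
    --output table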
01-25-2017
10:13 AM
1 Kudo
@Joginder Sethi can you SSH to the control plane VM and send me the logs from:
- /var/log/cbd-quick-start.log
- the output of the "docker logs cbreak_cloudbreak_1" command
I've seen this error message once when the SSH public key that was selected on the CFN create stack page was shorter than 2048 bits. Please check that your public key is at least 2048 bits long, because only those are supported by HDC.
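To check the key length locally, ssh-keygen can print it; the key path below is just a placeholder:

# Print the bit length and fingerprint of the public key (path is a placeholder)
ssh-keygen -l -f ~/.ssh/my-hdc-key.pub
# The first number in the output is the key length; it should be 2048 or more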
09-01-2016
07:58 AM
Cloudbreak 1.4.0 can deploy HDP 2.5, see my other answer here: https://community.hortonworks.com/questions/54346/which-version-of-cloudbreak-is-the-latest-123-or-1.html You can also look into "Hortonworks Data Cloud", which is an AWS-only solution built on top of Cloudbreak, with a simplified UI and the capability to deploy HDP 2.5: http://hortonworks.github.io/hdp-aws/
09-01-2016
07:50 AM
3 Kudos
Cloudbreak 1.3 is a technical preview because there was a big change in the underlying infrastructure: we got rid of Docker containers in the cluster completely, so from 1.3 on, HDP services run directly on the VMs. Cloudbreak 1.3 is missing some features that could not follow this big change immediately, but we wanted to release a "dockerless" version as soon as possible. Since then, Cloudbreak 1.4 has been released; it is still considered a technical preview, but I would suggest using it instead of 1.2.3 until 2.0 is out. You can grab 1.4.0 with this curl command:
curl -LsO s3.amazonaws.com/public-repo-1.hortonworks.com/HDP/cloudbreak/cloudbreak-deployer_1.4.0_$(uname)_x86_64.tgz
Or you can update an existing deployment with these commands:
cbd kill
cbd update rc-1.4
cbd regenerate
cbd start
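If you download the tarball, a minimal sketch of unpacking it, assuming the archive contains the cbd binary and that /usr/local/bin is where you want it:

# Unpack the downloaded deployer archive (assumption: it contains the cbd binary)
tar -xzf cloudbreak-deployer_1.4.0_$(uname)_x86_64.tgz
# Put cbd on the PATH (the destination directory is an assumption)
sudo mv cbd /usr/local/bin/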
04-25-2016
02:10 PM
@Arthur GREVIN Please check the logs of your UI container with docker logs cbreak_uluwatu_1
And please also send me the Cloudbreak logs by email to msereg at hortonworks com. I'm not sure about this issue, but maybe we can set up a WebEx tomorrow; it would be simpler, I think.
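A small sketch of capturing both logs into files you can attach; the file names are just placeholders:

# Save the UI container logs, including stderr, to a file
docker logs cbreak_uluwatu_1 > uluwatu.log 2>&1
# The Cloudbreak application container logs can be captured the same way
docker logs cbreak_cloudbreak_1 > cloudbreak.log 2>&1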
04-25-2016
11:30 AM
@Arthur GREVIN Can you check the developer console in your browser and see if there is a JS error there? There was a bug in that version of Cloudbreak that sometimes caused this on the first page load, but it usually loads successfully after a few page refreshes. This bug was fixed later, but there is no official release that contains the fix yet. If the error doesn't disappear, you can try to update the cloudbreak-web container with a patch like this: add this line to your cbd Profile:
export DOCKER_TAG_ULUWATU=1.2.5-rc.5
and run these commands to regenerate the yml files and restart cbd:
cbd kill
cbd regenerate
cbd start
This will pull the latest rc container of the UI where this bug is fixed and will restart all the components. (You may need to stop your dev Cloudbreak and run the local-dev command again.)
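Put together, the patch procedure looks roughly like this when run from your cbd deployment directory (appending to the Profile with echo is just one way to add the line):

# Pin the UI container to the patched rc tag
echo 'export DOCKER_TAG_ULUWATU=1.2.5-rc.5' >> Profile
# Stop the containers, regenerate the yml files, then start everything again
cbd kill
cbd regenerate
cbd start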
04-22-2016
09:29 AM
It should be a container from the image hortonworks/cloudbreak-web:1.2.3, named cbreak_uluwatu_1. Can you see it as terminated with docker ps -a? If yes, can you send me its logs, or try to restart it with docker restart cbreak_uluwatu_1?
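Spelled out as commands (the --filter flag is only a convenience to narrow the listing):

# Look for the UI container, whether running or exited
docker ps -a --filter name=cbreak_uluwatu_1
# Grab its logs if it has terminated
docker logs cbreak_uluwatu_1
# Or try restarting it
docker restart cbreak_uluwatu_1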
04-22-2016
09:05 AM
@Arthur GREVIN It is the API that's running on 9091, and there is nothing mapped to /. If you check a URL that is mapped to an API endpoint, like 10.0.0.30:9091/api/v1/connectors, you should see JSON returned. You can try other paths as well, like /api/v1/stacks, which needs authentication; it should return an "unauthorized" message. To see the API docs, check out /apidocs. If you're looking for the UI, it is running in a different container that is started by cbd. It is a Node.js app and it should listen on port 3000. Did you use cbd to start Cloudbreak and then the cbd util local-dev command to set up a dev env?
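A quick way to verify those endpoints from the shell, using the IP from your example:

# Unauthenticated endpoint: should return JSON
curl http://10.0.0.30:9091/api/v1/connectors
# Authenticated endpoint: should return an unauthorized response without a token
curl -i http://10.0.0.30:9091/api/v1/stacks
# API documentation
curl http://10.0.0.30:9091/apidocs
# The UI is a separate Node.js app listening on port 3000
curl -I http://10.0.0.30:3000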