Member since
09-29-2015
40
Posts
10
Kudos Received
6
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2801 | 11-01-2017 03:25 PM |
| | 1579 | 03-25-2017 11:15 AM |
| | 1351 | 08-19-2016 01:38 PM |
| | 2740 | 08-18-2016 02:08 PM |
| | 1297 | 05-12-2016 08:09 AM |
06-05-2018
07:12 AM
Hi @Bimal Mehta Since this thread is about not being able to open the Ambari UI: you need to run the script on the node where the Ambari server is running. SSH to that instance and run the script; it will generate a new certificate for the machine's nginx with the correct IP address, which avoids this kind of certificate issue.
04-06-2018
02:01 PM
Hi @Dominik Kappel, only specific OpenStack versions are supported by Cloudbreak; unfortunately, the Queens version is not among them. I tested the image by following the docs on one of our supported environments, and it worked fine. I would really like to help you, but we don't have an infra for that.
If you have a support subscription, feel free to open a ticket and we will be happy to assist you with the further investigation. Br, Tamas
04-05-2018
12:26 PM
@Dominik Kappel I double-checked, and our OpenStack could successfully create an instance from the imported image. Could you please tell us more about your OpenStack infra and its version?
11-01-2017
07:36 PM
@Vadim Vaks You are welcome and thanks for the minimized query parameters.
11-01-2017
03:25 PM
1 Kudo
Hi @Vadim Vaks I tested with the latest version of Cloudbreak and the "cbd util token" command still works for me. However, if you use the address of the proxy server that provides SSL, you should use the "/cb" sub-path to send requests to the API; the endpoints can be found under the "/api/v1" path, like:
curl -k -X GET -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" https://192.168.99.100/cb/api/v1/stacks/user
On the other hand, your curl command is not parameterized properly. The URL should look like:
http://192.168.99.100:8089/oauth/authorize?response_type=token&client_id=cloudbreak_shell&scope.0=openid&source=login&redirect_uri=http://cloudbreak.shell
Br, Tamas
04-19-2017
03:05 PM
Hi @Peter Teunissen I have checked and the image exists in the storage account; link to it: https://sequenceiqwesteurope2.blob.core.windows.net/images/hdc-hdp--1703101455.vhd Could it be that your deployment is not able to reach the VHD because of some custom security rule? Have you tried installing the cluster again? Maybe some Azure service was just unavailable at the time. Br, Tamas
03-25-2017
11:15 AM
1 Kudo
Hi @dbalasundaran, Default resources like blueprints are created on the first GET request to the corresponding resource endpoint. When you used the UI, the default resources were created at that time; after that, hdc create-cluster probably worked. You always need to trigger the creation of the default blueprints if you use them: one option is to use the UI, another is to run "hdc list-cluster-types", which will create the default blueprints for you.
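The "defaults created on the first GET" behavior can be sketched as a small Python model; the class, method, and blueprint names below are illustrative, not Cloudbreak's actual code:

```python
# Illustrative model of lazy default-resource creation (not Cloudbreak source).
DEFAULT_BLUEPRINTS = ["hdp-small", "hdp-etl"]  # example names, hypothetical

class BlueprintStore:
    def __init__(self):
        self._blueprints = []  # empty until the first read

    def list_blueprints(self):
        # The first GET to the resource endpoint seeds the defaults;
        # later reads just return what is already there.
        if not self._blueprints:
            self._blueprints = list(DEFAULT_BLUEPRINTS)
        return self._blueprints

store = BlueprintStore()
print(store.list_blueprints())  # first read creates the defaults
```

This is why running "hdc list-cluster-types" (a read) is enough to make the subsequent create-cluster call succeed: the read itself populates the defaults.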
01-10-2017
04:14 PM
Hi @jhals99, I do not know which level you need exactly, but creating a support ticket or getting in touch with the Solution Engineer (SE) on the project should resolve this question. LUKS encryption should work, but we haven't tried it, so I am not 100% sure about that. As far as I know, we haven't tested S3A from this approach; I think you should ask in the Hadoop and/or HDFS topics. Maybe the JIRAs in the S3A docs could be a good starting point for finding the right contact for these questions. Br, Tamas
10-18-2016
03:42 PM
Hi @cduby, What kind of users would you like to create automatically with Cloudbreak? Cloudbreak has a "recipes" feature that can run scripts after the cluster installation has finished. Maybe it could create the desired users for you: http://sequenceiq.com/cloudbreak-docs/latest/recipes/ Br, Tamas
08-19-2016
01:38 PM
1 Kudo
Hi @Kenneth Graves, Please check that the defective node is in a slave/worker hostgroup, because the following solution is based on Cloudbreak's upscale functionality, which only works on slave/worker hostgroups. First, sync the cluster state; a sync can be triggered through the Cloudbreak UI when the "cluster details" panel is open. After the sync has been applied, the defective node should be marked in the node list of the "cluster details" panel, and a terminate button should appear in the last column. To replace the terminated node, simply upscale the terminated node's hostgroup with new nodes. If the defective node is in a master hostgroup (one that contains master components), then I am afraid there is no way to replace it through Cloudbreak. Br,
Tamas
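The sync → terminate → upscale flow above can be modeled in a few lines of Python; the Cluster class, hostgroup names, and node states are illustrative, not Cloudbreak's API:

```python
# Illustrative model of replacing a defective worker node (not Cloudbreak code).
class Cluster:
    def __init__(self, hostgroups):
        # hostgroups: {name: [node states]}, e.g. {"worker": ["ok", "defective"]}
        self.hostgroups = hostgroups

    def sync(self, hostgroup):
        # Sync marks defective nodes so they become terminable in the UI.
        return [i for i, s in enumerate(self.hostgroups[hostgroup]) if s == "defective"]

    def terminate(self, hostgroup, index):
        self.hostgroups[hostgroup].pop(index)

    def upscale(self, hostgroup, count):
        # Upscale only works on slave/worker hostgroups, as the post notes.
        if hostgroup == "master":
            raise ValueError("upscale only works on slave/worker hostgroups")
        self.hostgroups[hostgroup] += ["ok"] * count

cluster = Cluster({"worker": ["ok", "defective"], "master": ["ok"]})
bad = cluster.sync("worker")           # 1. sync: find the marked node(s)
for i in reversed(bad):
    cluster.terminate("worker", i)     # 2. terminate the defective node
cluster.upscale("worker", len(bad))    # 3. upscale with replacement node(s)
print(cluster.hostgroups["worker"])    # → ['ok', 'ok']
```

The master-hostgroup restriction is the reason step 3 raises an error for "master": replacing a master node has no upscale-based workaround in Cloudbreak.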