Member since
07-25-2019
184
Posts
42
Kudos Received
39
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
| | 1014 | 03-25-2019 02:25 PM |
| | 550 | 01-22-2019 02:37 PM |
| | 493 | 01-16-2019 04:21 PM |
| | 1034 | 10-17-2018 12:22 PM |
| | 661 | 08-28-2018 08:31 AM |
09-22-2017
01:39 PM
@Matt Andruff This might not be the root cause, as the HDP-UTILS.repo file is generated by Cloudbreak before posting the blueprint. The default value is specified in application.yml (under hdp.entries.2.6.repo.util; its value is HDP-UTILS-1.1.0.21). Are you sure you have not accidentally overwritten this parameter with a custom value under [cloudbreak-deployment]/etc? You can check it in the Cloudbreak database:
docker exec -it cbreak_commondb_1 psql -U postgres
select * from clustercomponent where componenttype = 'HDP_REPO_DETAILS' and cluster_id = [your-cluster-id];
You should see something like this in the attributes text: "util":{"repoid":"HDP-UTILS-1.1.0.21"... In the meantime, can you tell me the exact version of Cloudbreak? Hope this helps!
... View more
09-19-2017
12:28 PM
@Matt Andruff What were the exact steps in the CB shell for creating that cluster? Did you perhaps miss setting "utilsRepoId" for "cluster create"?
--utilsRepoId string Stack utils repoId (e.g. HDP-UTILS-1.1.0.21)
Here is the documentation: https://github.com/hortonworks/cloudbreak/tree/master/shell Hope this helps!
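As an illustration, a minimal sketch of how the flag fits into the command (all other required parameters are omitted; the repo id shown is just the default mentioned above):
cluster create --utilsRepoId HDP-UTILS-1.1.0.21 ...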
... View more
09-05-2017
04:26 PM
@Henrique Silva Thanks, you are correct: this is an issue and it has already been fixed. You might want to try the latest version, 1.16.4; the fix is included there. Hope this helps!
... View more
08-24-2017
12:31 PM
@Marcel Fest From the logs, it looks like your deployer VM does not have Internet access, which is mandatory for communicating with the cloud providers. A deployer without Internet access is not supported. Hope this helps!
... View more
08-21-2017
01:37 PM
@Marcel Fest Could you please send me the output of the following?
cbd doctor
TRACE=1 cbd init
... View more
08-10-2017
12:05 PM
@soumya swain Have you checked this tutorial? It has some steps not listed in the official docs.
... View more
08-09-2017
09:30 AM
@Dominika Bialek (Q1): It is assumed to be the name of an already existing bucket. (Q2): It is an optional (and deprecated) configuration parameter (mapped to "fs.gs.system.bucket" in core-site.xml) that sets the GCS bucket to use as the default bucket for URIs without the "gs:" prefix. Please see more here, here, and here. Hope this helps!
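For illustration, the corresponding core-site.xml entry would look like this (the bucket name below is just a placeholder):
<property>
  <name>fs.gs.system.bucket</name>
  <value>my-existing-bucket</value>
</property>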
... View more
07-28-2017
03:17 PM
@Alex McLintock The good news is that Cloudbreak HDF support is going to be part of the next release! In the meantime, if you think your original question was answered, could you please consider accepting it?
... View more
07-28-2017
01:29 PM
@Vincent Wang It has a slightly different name than what is written in the tutorial. It is natural that it redirects to the Azure Portal, as that is the launching point. The Sandbox (either version 2.4 or 2.5) can be searched for from the Azure Portal, too. Hope this helps!
... View more
07-24-2017
12:43 PM
@Prasad T You need to configure your Knox topology and add the YARNUI service snippet, e.g.:
<service>
<role>YARNUI</role>
<url>http://sandbox.hortonworks.com:8088</url>
</service>
Here is the official documentation. The changes should be applied automatically. With all other settings unchanged, the service should be accessible from a URL of the form https://{gateway-host}:{gateway-port}/{gateway-path}/{cluster-name}/yarn, e.g. https://10.0.0.1:8443/gateway/hdc/yarn/ Hope this helps!
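As an illustration, you can test the gateway URL directly from the command line (the credentials and host below are placeholders; use whatever your Knox topology authenticates against):
curl -ku guest:guest-password "https://10.0.0.1:8443/gateway/hdc/yarn/"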
... View more
07-19-2017
09:27 PM
@Juhi Garg If you find my answer satisfactory, would you please consider accepting it? Thanks!
... View more
07-17-2017
03:42 PM
2 Kudos
@Alex McLintock Unfortunately, HDF integration in Cloudbreak is a work in progress right now, with both a Blueprint configuration aspect and a Cloudbreak code modification aspect. If you are looking to demo just NiFi on an HDP cluster, you can use this: https://github.com/abajwa-hw/ambari-nifi-service (it is unsupported and not intended for production use, though). If you would like to install HDF 3.0 into an existing cluster, you can follow this tutorial. There is a separate tutorial for installing to a new cluster. Hope this helps!
... View more
06-26-2017
12:51 PM
Repo Description
Each version of the Cloudbreak Cloud Controller can easily be deployed to an Azure VM: select the tag for the version you want and click on.. This takes around 5 minutes and is the easiest way to get Cloudbreak up and running on Azure.
Repo Info
Github Repo URL: https://github.com/sequenceiq/azure-cbd-quickstart/
Github account name: sequenceiq
Repo name: azure-cbd-quickstart
... View more
06-23-2017
12:49 PM
@Farrukh Munir If you think your original question was answered, would you consider accepting it? Thank you!
... View more
06-22-2017
04:36 PM
@suresh krish Could you please elaborate on what exactly you want to achieve?
... View more
06-22-2017
03:00 PM
@Lance Lierheimer I am afraid this use case is not yet supported by Cloudbreak. If you have a Hortonworks Solution Engineer contact, they can file a feature request for this requirement. Hope this helps!
... View more
06-21-2017
03:08 PM
Then please share all Cloudbreak logs, because it is difficult to tell what went wrong otherwise.
... View more
06-21-2017
09:19 AM
@Chokri Ben Necib It looks like the cluster install polling timed out; this might be due to other long-running recipes. You can try increasing the default of 90 retry attempts (with a 5-second interval) in your Profile (then please do a cbd restart). The defaults are:
export CB_MAX_SALT_NEW_SERVICE_RETRY=90
export CB_MAX_SALT_RECIPE_EXECUTION_RETRY=90
I have tried to reproduce the error with the same recipe, but my recipe ran successfully. This is the command which runs the recipe; you can check the logs on the EC2 instance under "/var/log/recipes":
"sh -x /opt/scripts/pre/support-timeout-recipe 2>&1 | tee -a /var/log/recipes/pre-support-timeout-recipe.log && exit ${PIPESTATUS[0]}"
Are you sure that you applied the recipe to all the host groups, e.g. the one with Ambari? Hope this helps!
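For example, to double the limits (the values below are purely illustrative), set these in the Profile and restart:
export CB_MAX_SALT_NEW_SERVICE_RETRY=180
export CB_MAX_SALT_RECIPE_EXECUTION_RETRY=180
cbd restart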
... View more
06-19-2017
03:41 PM
@Chokri Ben Necib What kind of recipes (pre/post) would you like to run? Could you please send us the exception itself? It should be in the Cloudbreak log, which you can view by typing:
cbd logs cloudbreak
... View more
06-16-2017
02:44 PM
@Farrukh Munir All the resources created in Cloudbreak are saved to a Postgres database called cbdb, running in a Docker container called "cbreak_commondb_1". You can check the details of the container by running the following under /var/lib/cloudbreak-deployment:
cbd ps
You can connect to the db via port 5432. The sensitive data is encrypted. Hope this helps!
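For example, you can open a psql session inside the container directly (a sketch, using the container and database names mentioned above):
docker exec -it cbreak_commondb_1 psql -U postgres -d cbdb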
... View more
06-15-2017
08:27 PM
@Farrukh Munir All the requirements that you specified (custom image, existing VPC, private subnet) can be fulfilled with Cloudbreak, so AFAIK that is the best practice that Hortonworks supports; I am not aware of a documented, fully manual setup.
... View more
06-15-2017
02:28 PM
@Farrukh Munir What is your use case? Both Cloudbreak and HDC exist precisely to ease the deployment complexity, so I recommend going with one of them unless you have some very specific reason not to do so. Cloudbreak offers deeper customization, while HDC is easier to set up. Both are deployed in an IaaS model, you can reuse your private subnet as well, and you can check the available setups here. Both use CloudFormation templates to bring up the stacks. HDP versions are fully customizable (either 2.4, 2.5 or 2.6) in Cloudbreak. If you prefer a more prescriptive approach, you can check this. Hope this helps!
... View more
06-15-2017
11:44 AM
1 Kudo
@Farrukh Munir Hortonworks offers Hortonworks Data Cloud and Cloudbreak for such scenarios on AWS. If you would like to use HDC, you can find a reference architecture for AWS here, and for the Data Lake concept here. Hope this helps!
... View more
06-12-2017
12:14 PM
@Smart Solutions AFAIK, Knox uses EHCache, which can be configured further by placing an ehcache.xml file in an appropriate location on the classpath, as written here:
<param>
<name>main.cacheManager.cacheManagerConfigFile</name>
<value>classpath:ehcache.xml</value>
</param>
The content of the config file is described here, with a concrete example here. You should check "timeToIdleSeconds" (defaults to 120 seconds). Hope this helps!
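For reference, a minimal ehcache.xml sketch might look like this (the cache settings below are illustrative values, not Knox defaults):
<ehcache>
  <defaultCache maxElementsInMemory="1000"
                eternal="false"
                timeToIdleSeconds="1800"
                timeToLiveSeconds="0"
                overflowToDisk="false"/>
</ehcache>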
... View more
06-12-2017
08:03 AM
1 Kudo
@Juhi Garg It is not clear to me from your description whether you have successfully set up a storage account and created a container before trying to use WASB (which is a prerequisite). Have you followed the steps detailed here? There is a very good walkthrough here, and finally, there is a good technical overview here. Hope this helps!
... View more
06-06-2017
10:01 PM
@George Meltser If my answer helped you solve your problem, would you mind considering accepting it? Thanks in advance!
... View more
06-01-2017
01:07 PM
@Aneesh Ramadoss Please note that supporting and certifying custom images is the responsibility of Hortonworks Professional Services, so please contact them if you would like to use a certified, supported image.
There is an unsupported solution, though: you can enable a custom image selector in the UI by adding the following line to your Cloudbreak Profile:
export CB_JAVA_OPTS=-Dcb.enable.custom.image=true
After restarting Cloudbreak (cbd restart), you should see the corresponding input field under the "Configure Cluster" tab after clicking "Show Advanced Options" (see custom-image.png). You can enter your custom AMI there. Hope this helps!
... View more
05-29-2017
04:25 PM
@George Meltser What is the exact error message, and have you managed to solve it? You might find some useful information about the yum setup in the official Amazon FAQ here. Hope this helps!
... View more
05-26-2017
09:54 AM
@George Meltser As a first step, you should boot an Amazon Linux instance and run all the commands on that machine after SSH-ing to it. You can do that by logging in to your AWS account, selecting the EC2 service, clicking "Launch instance", and selecting Amazon Linux. You should see something like this: After that, you should choose an instance type with at least 16 GB of RAM and SSH to it once it is ready. You can find some additional documentation here. Hope this helps (if yes, would you consider accepting my response? :))
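As an illustration, once the instance is running you can connect like this (the key file name and hostname below are placeholders for your own values):
ssh -i my-key.pem ec2-user@ec2-xx-xx-xx-xx.compute-1.amazonaws.com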
... View more