Member since: 09-25-2015
Posts: 142
Kudos Received: 58
Solutions: 25
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 824 | 01-29-2019 12:44 PM
 | 678 | 03-31-2018 08:10 AM
 | 4108 | 03-30-2018 07:55 PM
 | 845 | 09-12-2017 01:52 PM
 | 1112 | 09-05-2017 05:48 PM
09-05-2017
05:48 PM
@Henrique Silva If you would like to specify the utils repo as well, then also add:
-Dhdp.entries.2.5.repo.util.repoid=HDP-UTILS-1.1.0.21
-Dhdp.entries.2.5.repo.util.redhat6=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6
-Dhdp.entries.2.5.repo.util.redhat7=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7
This information will be applied based on the stack version in your blueprint file. Br, R
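As a sketch, the flags above could be appended to the Cloudbreak Profile file like this. The `CB_JAVA_OPT` variable name is taken from a sibling answer in this thread; treat it as an assumption and verify it against your own Profile:

```shell
# Sketch: append the utils-repo flags to the existing Cloudbreak Profile options.
# CB_JAVA_OPT is assumed; confirm the variable name in your Profile file.
export CB_JAVA_OPT="$CB_JAVA_OPT \
 -Dhdp.entries.2.5.repo.util.repoid=HDP-UTILS-1.1.0.21 \
 -Dhdp.entries.2.5.repo.util.redhat6=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos6 \
 -Dhdp.entries.2.5.repo.util.redhat7=http://public-repo-1.hortonworks.com/HDP-UTILS-1.1.0.21/repos/cent7"
```

After editing the Profile, a `cbd restart` would be needed for the change to take effect.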
09-05-2017
01:58 PM
Hi @Henrique Silva This is possible if you put this into your Profile file:
CB_JAVA_OPT="-Dambari.repo.version=2.5.1.0 -Dambari.repo.baseurl=http://public-repo-1.hortonworks.com/ambari/centos6/2.x/updates/2.5.1.0 -Dambari.repo.gpgkey=http://public-repo-1.hortonworks.com/ambari/centos6/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins -Dcb.hdp.entries.2.6.version=2.6.1.0-111 -Dcb.hdp.entries.2.6.repo.redhat6=http://public-repo-1.hortonworks.com/HDP/centos6/2.x/updates/2.6.1.0-111 -Dcb.hdp.entries.2.6.repo.redhat7=http://public-repo-1.hortonworks.com/HDP/centos7/2.x/updates/2.6.1.0-111"
This example uses Ambari 2.5.1.0 and HDP 2.6.1.0. The defaults are here: https://raw.githubusercontent.com/hortonworks/cloudbreak/master/cloud-common/src/main/resources/application.yml
After the modification, please restart the application with: cbd restart
After this you will get the specified defaults, but unfortunately you will not see them on the UI. Br, R
08-24-2017
12:45 PM
http://sequenceiq.com/cloudbreak-docs/latest/operations/#proxy-settings
08-21-2017
07:46 PM
1 Kudo
@Shyam Shaw Unfortunately this is currently not possible from the Cloudbreak UI; you can only configure it in your blueprint and then use that blueprint when you create a cluster. Try Hortonworks Data Cloud, which already supports this feature: https://hortonworks.github.io/hdp-aws/create/index.html#hive-metastore Br, R
08-21-2017
02:16 PM
It seems GitHub is having an issue, which is why your install commands are not working: https://status.github.com/ Br, R
08-10-2017
07:00 PM
1 Kudo
@Venkata Sudheer Kumar M Currently there is no way in Cloudbreak to reuse the same AWS instances in a future deployment. When you terminate a cluster, you terminate the whole infrastructure (VPC, IGW, instances, disks). A custom AMI is a different thing: some enterprise companies have their own certified AMIs and are only allowed to use those images. For example, our base images are based on CentOS 7, but if you want to use Ubuntu or any other distribution, you have to prepare your own images for Cloudbreak. What is your use case? Why do you want to reuse the same instances? Do you want to decrease the cost? Br, R
08-03-2017
02:06 PM
@Sharon Kirkham Worker nodes and compute nodes contain the same services. The basic advantage of compute nodes is that if you want to use spot-priced instances, you don't have to be afraid of losing any data, because those nodes are only used for compute purposes. You can also shrink your compute group down to 0 instances after the cluster has been created. Br, R
07-28-2017
04:33 PM
Hi @Akbarali Momin Q1: When you create a credential, you have to provide the public key for the instances. If you have a public key that was generated from a private key file, how is it possible that you do not have that file? Q2: Currently this is not possible. What is the scenario where you need a specific install path? Q3: You cannot select specific instances for a Cloudbreak deployment. You can use dedicated instances, but I think that is not enough for you. Br, R
06-30-2017
08:05 AM
1 Kudo
@Sandeep Nemuri I think this is a bug in your Cloudbreak version, so you should delete it by hand. Jump into the Cloudbreak db container and run:
select id from blueprint where name='<blueprint-name>';
select name, id, status from cluster where blueprint_id=<id>;
update cluster set blueprint_id=null where name='<cluster_name>';
delete from blueprint where name='<blueprint-name>';
Let me know how it went. Br, R
06-30-2017
07:51 AM
On the Cloudbreak machine, go to the folder where the Profile file is located and type 'cbd version'.
06-29-2017
09:38 PM
@Jacob DeJong Which Cloudbreak version are you using?
06-26-2017
01:07 PM
Hi @Chokri Ben Necib The problem with the proxy configuration is that we do not configure the proxy on the deployed clusters.
If you want to configure it on your cluster, you should do it before the salt bootstrap.
Define the -Dcb.customData variable and write your script into it to configure the proxy settings on the cluster.
You can define it in your Profile with: export CB_JAVA_OPTS="-Dcb.customData='touch /tmp/testdata'"
The script you define here runs before the bootstrap process, so it can configure the proxy on the cluster. Br,
R
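As a sketch, a custom-data payload that actually configures a proxy on the cluster nodes might look like the following. The proxy host and port are placeholders, and writing the variable to /etc/profile.d is an assumption about how you want the proxy applied:

```shell
# Hypothetical example: embed a proxy-setup script in cb.customData.
# proxy.example.com:3128 is a placeholder; adjust to your environment.
export CB_JAVA_OPTS="-Dcb.customData='echo export http_proxy=http://proxy.example.com:3128 > /etc/profile.d/proxy.sh'"
```

Since the custom-data script runs before the salt bootstrap, the proxy variable would already be in place when the cluster services start.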
06-06-2017
11:56 AM
1 Kudo
@prachi sharma We are not using Docker images on the provisioned cluster anymore, but in the past we used this one, which contains Ambari: https://github.com/sequenceiq/docker-ambari Ambari is a deployment tool that can deploy any service you want.
05-24-2017
06:30 AM
If you have data in your database, then put export UAA_DEFAULT_SECRET=cbsecret2015 in your Profile, because older versions of Cloudbreak used that value by default. Alternatively, if you don't have any data in your db, drop the database with:
cbd kill
cbd delete
cbd generate
cbd start
Br, R
05-23-2017
09:30 AM
Hi @kkanchu Could you please provide some info on how you defined the default username and password in your Profile file? You can also add a default user with 'cbd util add-default-user'. Br, R
03-30-2017
08:12 AM
Please accept the answer then, if you find it helpful.
03-30-2017
05:54 AM
2 Kudos
@Xu Zhe You can configure the NameNode heap size in the blueprint file, or you can try to start beefier machines. If you want to use the blueprint configuration, put this into the blueprint file:
"configurations" : [
  { "global" : { "namenode_heapsize" : "1536m", ... } },
  { ... }
]
Br, R
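A minimal, self-contained sketch of building and serializing that blueprint fragment, with only the namenode_heapsize value taken from the answer (any further configuration entries, elided above as "...", are omitted rather than guessed):

```python
import json

# Sketch: build the "configurations" fragment of a Cloudbreak/Ambari blueprint.
# Only namenode_heapsize comes from the answer; additional sections would be
# appended to the list in the same shape.
blueprint_fragment = {
    "configurations": [
        {"global": {"namenode_heapsize": "1536m"}}
    ]
}

# Serialize it as it would appear inside the blueprint JSON file.
print(json.dumps(blueprint_fragment, indent=2))
```

This fragment would be merged into the full blueprint JSON before uploading it to Cloudbreak.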
03-27-2017
08:12 AM
3 Kudos
@Xu Zhe In the top right corner of the cluster creation wizard there is a 'Show Advanced Options' button; click it and you will see the 'Configure HDP Repository' tab. Br, R
02-14-2017
07:56 AM
@rahul gulati Unfortunately, Cloudbreak currently does not support deploying a cluster on existing machines. The only supported way to provision clusters is to deploy the whole cluster end to end. Please delete those VMs and start a new cluster. Br, R
02-14-2017
07:48 AM
@rahul gulati Could you please accept the answer if it answered your question?
02-13-2017
07:21 PM
@rahul gulati Unfortunately, Cloudbreak currently does not support deploying a cluster on existing machines. The only supported way to provision clusters is to deploy the whole cluster end to end.
02-13-2017
07:16 PM
@rahul gulati This configuration totally depends on your use case. For example, on datanodes I prefer more data disks than on masternodes, and for masters you should use beefier machines.
02-13-2017
12:45 PM
@rahul gulati Which version of Cloudbreak are you using?
01-24-2017
08:30 AM
https://github.com/sequenceiq/cloudbreak/pull/2183/files added to master
12-27-2016
10:17 AM
1 Kudo
@Santhosh B Gowda
echo "credential list" | java -jar cloudbreak-shell.jar --sequenceiq.user=${cloudbreak-user} --sequenceiq.password=${cloudbreak-password} --identity.address=https://${cloudbreak-ip}/identity --cloudbreak.address=https://${cloudbreak-ip} --cert.validation=false >> /tmp/output.log
Br, R
12-20-2016
06:36 PM
Hi @Kenneth Graves Sure, we can add those instance types. I just created a Jira, and the next release will include them. Thanks for reporting this 🙂 Br, Richard
11-14-2016
12:03 PM
Hi @hello hadoop You're right, the documentation is missing, but here is the related source code, which can help you deploy kerberized clusters: https://github.com/sequenceiq/cloudbreak/blob/rc-1.6/shell/src/main/java/com/sequenceiq/cloudbreak/shell/commands/common/ClusterCommands.java#L79 These are the related config parameters: --enableSecurity --kerberosMasterKey --kerberosAdmin --kerberosPassword Br, Richard
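As a sketch, the Kerberos-related arguments could be assembled like this before being passed to the cluster creation command inside cloudbreak-shell. The flag names come from the linked ClusterCommands source; the values, and the exact command they attach to, are assumptions to verify against your shell version:

```shell
# Hypothetical cloudbreak-shell arguments for a kerberized cluster.
# Flag names are from the linked ClusterCommands.java; values are placeholders.
KERBEROS_ARGS="--enableSecurity true --kerberosMasterKey masterkey --kerberosAdmin admin --kerberosPassword secret"
# These would be appended to your usual cluster-create command in cloudbreak-shell.
echo "$KERBEROS_ARGS"
```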
10-02-2016
06:21 PM
@Chris Gambino Currently you cannot create autoscaling policies with the CLI, but here is the link to the REST API on the hosted Cloudbreak: https://cloudbreak.sequenceiq.com/as/api/index.html Br, Richard