Member since: 09-25-2015
Posts: 142
Kudos Received: 58
Solutions: 25
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 825 | 01-29-2019 12:44 PM |
| | 681 | 03-31-2018 08:10 AM |
| | 4127 | 03-30-2018 07:55 PM |
| | 851 | 09-12-2017 01:52 PM |
| | 1117 | 09-05-2017 05:48 PM |
05-07-2019
10:54 AM
Cloudbreak supports resizing HDF clusters, but not host groups that contain the NiFi service.
01-29-2019
12:44 PM
Hi @Anton Zadorozhniy Currently this is not possible in Cloudbreak, but it is a great feature request and I will create a Jira for it. Thanks
07-19-2018
02:39 PM
@challa vinitha Cluster details: screen-shot-2018-07-19-at-43331-pm.png, screen-shot-2018-07-19-at-43337-pm.png
07-18-2018
12:12 PM
1 Kudo
@challa vinitha You can do that manually, but you can also configure autoscaling policies on your cluster, based either on time or on metrics exposed by Ambari. Read more here: https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.7.1/content/autoscaling/index.html
03-31-2018
05:19 PM
1 Kudo
@Shant Hovsepian You cannot use cbd update, but you can update to the patch version as we mentioned: curl -Ls public-repo-1.hortonworks.com/HDP/cloudbreak/cloudbreak-deployer_2.4.1-rc.27_Linux_x86_64.tgz | sudo tar -xz -C /bin cbd and then cbd restart. Br, R
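The same steps, formatted as a block for readability (the version string is the one from this post; substitute the release you need):

# Download the Cloudbreak Deployer patch release and unpack the cbd binary into /bin
curl -Ls public-repo-1.hortonworks.com/HDP/cloudbreak/cloudbreak-deployer_2.4.1-rc.27_Linux_x86_64.tgz | sudo tar -xz -C /bin cbd
# Restart the deployer so it picks up the new binary
cbd restart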
03-31-2018
08:10 AM
@Leszek Leszczynski Unfortunately this is not currently possible, so you have to recreate the whole cluster. As far as I know, this is also a limitation on the Ambari side. Br R
03-30-2018
07:55 PM
4 Kudos
@Leszek Leszczynski My first question is: can you upgrade to 2.4? That is the latest GA release, and if you have only just started testing Cloudbreak, I suggest using 2.4. Br R
03-28-2018
11:18 AM
@Muralidhar Adapala You are right, we only support DS-series, DSv2-series, GS-series, Ls-series, and Fs-series with premium storage. Can you please contact our support and open a support case? We are happy to help with this problem. Br, R
03-27-2018
08:07 PM
1 Kudo
@Muralidhar Adapala https://docs.microsoft.com/en-us/azure/virtual-machines/windows/premium-storage Premium Storage supports VM disks that can be attached to specific size-series VMs: DS-series, DSv2-series, GS-series, Ls-series, Fs-series, and Esv3-series. For example, Standard_D32s_v3 supports Premium Storage. Br, R
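If it helps, a quick sketch of how you might check this with the Azure CLI; treat the PremiumIO capability name below as an assumption from memory rather than a confirmed field, and the region is a placeholder:

# List the capabilities Azure reports for a given VM size in a region
az vm list-skus --location eastus --size Standard_D32s_v3 --query "[].{name:name, capabilities:capabilities}"
# Premium storage support should show up as a capability such as PremiumIO set to True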
03-26-2018
11:05 AM
Hi @Craig Connell We finally got a response, so we are working on the fix. We will let you know as soon as possible.
03-23-2018
01:27 PM
I think you are using some strange character in your template. Can you paste here what you are using (without any sensitive data, of course)? By the way, the stack name can contain only lowercase letters.
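As a quick sanity check for the lowercase-letters rule only (the real Cloudbreak validation may be stricter), a minimal sketch:

# Prints "ok" only if the candidate stack name consists of lowercase letters alone
echo "mystackname" | grep -Eq '^[a-z]+$' && echo ok || echo "invalid stack name"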
03-21-2018
04:24 PM
@Brandon Gold What is your stack name?
02-02-2018
05:31 PM
OK, then you are not using HDC. "Are you saying I have to pay for another HDP service to get this to work?" No. First of all, please upgrade to the latest GA release with the 'cbd update' command.
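A minimal sketch of that upgrade step, assuming a standard cbd installation:

# Pull the latest GA Cloudbreak Deployer release, then restart the deployer
cbd update
cbd restart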
02-02-2018
04:01 PM
@Bob Thorman I think you are using Hortonworks Data Cloud. If yes, then please follow the instructions here: https://docs.hortonworks.com/HDPDocuments/HDCloudAWS/HDCloudAWS-1.16.2/bk_hdcloud-aws/content/subscribe/index.html
09-29-2017
01:44 PM
In this documentation: https://hortonworks.github.io/hdp-aws/s3-configure/index.html
<property>
<name>fs.s3a.aws.credentials.provider</name>
<value>org.apache.hadoop.fs.s3a.SharedInstanceProfileCredentialsProvider</value>
</property>
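Once that property is in place in core-site.xml, a quick way to confirm the value the cluster actually picked up (standard Hadoop tooling, not specific to that documentation):

# Print the effective value of the S3A credentials provider setting
hdfs getconf -confKey fs.s3a.aws.credentials.provider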
09-29-2017
01:38 PM
Can you please contact our support team and raise an issue? We need some application logs, and I don't want them shared here. Br, R
09-29-2017
01:13 PM
@Shyam Shaw Can you please give us more context? Which version are you using? Br, R
09-29-2017
12:33 PM
@Pradeep Bhadani You have to attach an AWS instance profile to every machine that you would like to use with S3. After that, you will be able to reach S3 with s3a://. Br, R
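A minimal way to verify the access once the instance profile is attached; the bucket name below is a placeholder:

# List a bucket through the S3A connector; this only succeeds if the instance profile grants access
hadoop fs -ls s3a://your-bucket-name/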
09-29-2017
11:37 AM
1 Kudo
@Pradeep Bhadani If you would like to use Cloudbreak to provision your HDP stack, then here are the answers:
1. Instance Profile option: http://hortonworks.github.io/cloudbreak-docs/latest/aws/#advanced-options
2. S3Guard (this is in TP in HDP 2.6): https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/bk_cloud-data-access/content/s3-guard.html
3. Documentation regarding the S3 connector: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.1/bk_cloud-data-access/content/s3-get-started.html
Br, R
09-27-2017
02:46 PM
@Matt Andruff Please add --verify=true as a parameter, and thanks for reporting the issue.
09-25-2017
08:55 AM
I just checked your blueprint file and the blueprint_name is missing from the blueprint config; it should look like this: https://github.com/hortonworks/cloudbreak/blob/master/integration-test/src/main/resources/blueprint/multi-node-hdfs-yarn.bp#L53
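For illustration, a trimmed sketch of where blueprint_name belongs (in the Blueprints section); the stack version and host group below are placeholders, so see the linked file for a complete example:

# Write a minimal blueprint skeleton; the Blueprints section is the part relevant to this error
cat > my-blueprint.json <<'EOF'
{
  "Blueprints": {
    "blueprint_name": "multi-node-hdfs-yarn",
    "stack_name": "HDP",
    "stack_version": "2.6"
  },
  "host_groups": [
    {
      "name": "master",
      "components": [ { "name": "NAMENODE" } ],
      "cardinality": "1"
    }
  ]
}
EOF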
09-19-2017
01:41 PM
Read this: https://hortonworks.github.io/hdp-aws/s3-hive/index.html
09-19-2017
12:42 PM
@Anandha L Ranganathan Can those machines reach the internet? What is the VPC config where you want to deploy your cluster?
09-12-2017
02:04 PM
1 Kudo
If you have 3 host groups, then you have to configure 3 instance groups as well.
09-12-2017
01:52 PM
1 Kudo
@Matt Andruff How many host groups do you have in your blueprint? Which version of the cbd shell are you using? Br R
09-07-2017
06:55 PM
@Eder alves As I see, in the examples the installation user is root, but on your machine it is cloudbreak, which could be a problem. Also, please check the Python version. That article uses a sandbox environment, which may be configured differently from a Cloudbreak-installed cluster. Br, R
09-07-2017
12:53 PM
Please accept the answer if it solved your problem, @Henrique Silva.
09-06-2017
04:26 PM
@Julien Champ I think you are on the right track. This is how we do things in Hortonworks Data Cloud: we have 3 types of nodes there (master, workers, computes) and we scale the computes up and down. One thing you should investigate is where you want to store your data, because I would suggest a cloud object store supported by Hortonworks, so S3 (AWS) or ADLS (Azure). Br, R
09-06-2017
12:32 PM
1 Kudo
@Julien Champ I think this is the correct approach for your problem. I suggest configuring enough cooldown time between scaling events, because HDFS data movement can be slow if you have a lot of data. Br, R