Member since: 06-13-2016
Posts: 5
Kudos Received: 2
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 927 | 01-05-2017 06:58 AM |
| | 865 | 12-22-2016 06:30 AM |
01-05-2017 06:58 AM
You can access the Cloudbreak API documentation here: https://cloudbreak.sequenceiq.com/cb/api/index.html
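If you would rather explore the API from the command line than through the browser UI, one option is to pull the machine-readable spec behind that page and list its paths. This is only a sketch; the swagger.json location and the use of jq are my assumptions, not something documented above.

```sh
# Sketch only, not from the original post: list the operations exposed by the
# Cloudbreak REST API. The swagger.json path behind the Swagger UI linked
# above is an assumption, as is the availability of jq; -k skips TLS
# certificate verification.
curl -sk https://cloudbreak.sequenceiq.com/cb/api/swagger.json | jq -r '.paths | keys[]'
```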
12-25-2016 02:45 PM
3 Kudos
You could also do the same via the Cloudbreak shell, using the command below:

cluster create --version 2.X --stackRepoId HDP-2.X --stackBaseURL http://s3.amazonaws.com/dev.hortonworks.com/HDP/centos7/2.x/BUILDS/2.X.X.0-154 --utilsRepoId HDP-UTILS-1.1.0.21 --utilsBaseURL http://s3.amazonaws.com/dev.hortonworks.com/HDP-UTILS-1.1.0.21/repos/centos7 --stack HDP --verify true --os redhat7 --ambariRepoGpgKey http://s3.amazonaws.com/dev.hortonworks.com/ambari/centos6/2.x/BUILDS/2.X.X.0-524/RPM-GPG-KEY/RPM-GPG-KEY-Jenkins --ambariRepoBaseURL http://s3.amazonaws.com/dev.hortonworks.com/ambari/centos6/2.x/BUILDS/2.X.X.0-524 --ambariVersion 2.X.X.0-524 --enableSecurity true --kerberosMasterKey master --kerberosAdmin admin --kerberosPassword admin --wait true
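If you want to script this rather than paste it into an interactive session, a rough approach is sketched below. It assumes the Cloudbreak shell's Spring Shell --cmdfile option and a locally available cloudbreak-shell.jar, neither of which is covered in this thread, so treat it as a starting point rather than a documented procedure.

```sh
# Hypothetical sketch (not from the original post): run the same command
# non-interactively. The jar name/path and the --cmdfile behavior are
# assumptions, so adjust them to your deployment.

# Save the full single-line 'cluster create ...' command from above as-is
# (elided here for brevity):
cat > create-cluster.cmd <<'EOF'
cluster create --version 2.X --stackRepoId HDP-2.X ... --wait true
EOF

# Execute it without entering the interactive shell:
java -jar cloudbreak-shell.jar --cmdfile=create-cluster.cmd
```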
12-22-2016 01:42 PM
3 Kudos
@gvenkataramanan Please try the attached blueprint, small-default-hdp-corrected.txt. The issue was the NameNode default heap size of 1024; overriding it to 2048 resolves it. Regarding the HDFS DataNode directories being configured under /hadoopfs by default: I believe that is how Cloudbreak is designed, so that DataNodes have separate dedicated disks.
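For anyone editing their own blueprint, the NameNode heap normally comes from the hadoop-env section of the blueprint's "configurations" array, so that is where the override belongs. The sketch below is only an illustration of that idea, not an excerpt from the attached file.

```sh
# Hypothetical sketch, not a copy of the attached blueprint: append a NameNode
# heap override to a blueprint's "configurations" array with jq. The filenames,
# the hadoop-env property name, and the "2048m" value format are assumptions to
# verify against your Ambari/HDP version; if hadoop-env already exists in the
# blueprint, merge into that entry instead of appending a second one.
jq '.configurations += [{"hadoop-env": {"properties": {"namenode_heapsize": "2048m"}}}]' \
  small-default-hdp.json > small-default-hdp-corrected.json
```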