Member since: 07-25-2019
Posts: 184
Kudos Received: 42
Solutions: 39
My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
| | 2245 | 03-25-2019 02:25 PM |
| | 1128 | 01-22-2019 02:37 PM |
| | 1265 | 01-16-2019 04:21 PM |
| | 2654 | 10-17-2018 12:22 PM |
| | 1334 | 08-28-2018 08:31 AM |
03-05-2018
10:37 AM
2 Kudos
@MARTIN GATTO Cloudbreak is the main offering; HDCloud (HDC) was aimed at an easier, more prescriptive user experience in AWS for typical use cases. The new 2.x line of Cloudbreak has a revamped UI/UX which makes this difference negligible. The key things are the following:

- the back-end of HDCloud is Cloudbreak
- Cloudbreak has long-term support, and 2.4 GA has just come out
- from a feature perspective, Cloudbreak offers a superset of the features of HDC, except for Shared Services, which is already on the roadmap
- both have CLI options

So the bottom line is: if you are creating a POC now, you should go with Cloudbreak. Hope this helps!
02-28-2018
03:04 PM
@kskp You might try it out on a newer Hadoop version. As of HDP 2.6.1, the stack ships Hadoop 2.7.3, which contains a known bug very similar to yours. Hope this helps!
02-21-2018
02:25 PM
@Paramesh malla Yes, you can utilize the recipe functionality of Cloudbreak, as documented here. A minimal example is sketched below.
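To illustrate, a Cloudbreak recipe is just a shell script that runs on the cluster nodes before or after the cluster install; the script below is only a sketch, and the jq package is a placeholder for whatever you actually need:

```bash
#!/bin/bash
# Hypothetical post-install recipe: install an extra OS package
# on every node once the cluster install has finished.
set -e

# 'jq' is only an example package; substitute your own.
yum install -y jq
```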
02-21-2018
02:18 PM
@Paramesh malla New images can only be applied to new clusters, so it is not possible to replace the image of a running cluster. Hope this helps!
02-14-2018
12:53 PM
@Abhishek Sakhuja This seems to be an issue with HDFS not calculating free space correctly when it is not used as the defaultFS. Unfortunately, this is not a setup officially supported by Hortonworks: "When working with the cloud using cloud URIs do not change the value of fs.defaultFS to use a cloud storage connector as the filesystem for HDFS. This is not recommended or supported. Instead, when working with data stored in S3, ADLS, or WASB, use a fully qualified URL for that connector." Therefore you might fix this by:

- forking the Cloudbreak code and modifying this part to meet your needs
- building your own version of cloudbreak.jar
- copying it into the cbreak_cloudbreak_1 container
- restarting the application

You might also consider opening an issue in the Cloudbreak GitHub repo; the team will then investigate it more deeply.
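For reference, the supported pattern the quoted docs describe keeps fs.defaultFS pointing at HDFS and addresses cloud data through fully qualified connector URIs; the bucket and paths below are placeholders:

```bash
# fs.defaultFS stays on HDFS (e.g. hdfs://mycluster); cloud storage is
# addressed with a fully qualified URI per operation instead:
hadoop fs -ls s3a://my-example-bucket/data/
hadoop distcp hdfs:///apps/input s3a://my-example-bucket/backup/
```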
02-09-2018
01:03 PM
@Nuno Nunes OK, glad to hear that you got it working. In the meantime, as your question has been answered, would you please consider accepting the answer?
02-09-2018
12:48 PM
@Bob Thorman As we have already discussed, the root cause was a bug in Cloudbreak when no default VPC was present in the AWS account. The solution is to include the following line in your Profile file and do a "cbd restart" afterwards:

export DOCKER_TAG_CLOUDBREAK=1165-patch
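For clarity, the full sequence on the deployer host might look like this (assuming Profile sits in the Cloudbreak deployment directory, which is the default cbd layout):

```bash
# Append the patched image tag to the deployer Profile
echo 'export DOCKER_TAG_CLOUDBREAK=1165-patch' >> Profile

# Restart Cloudbreak so the patched container image is picked up
cbd restart
```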
02-05-2018
09:59 PM
@Nuno Nunes Have you been able to try that out?
02-01-2018
09:14 AM
@Nuno Nunes Actually, you can try out our 2.3 RC version; hopefully it contains the solution for your problem. You can install the deployer with:

curl -Ls https://4383-32450069-gh.circle-artifacts.com/0/tmp/circle-artifacts.YypAJ5E/cbd-linux.tgz | sudo tar -xz -C /bin cbd
cbd version

After that, if you use an existing network and select the "Don't Create Public IP" option during cluster install, Cloudbreak won't create any NSGs. Hope this helps!
01-31-2018
03:33 PM
@Nuno Nunes Unfortunately, this feature is not supported in Cloudbreak yet; it is on the roadmap, though. One workaround is to write and apply a post-install recipe that deletes the security group after cluster install, as sketched below. Another is to fork Cloudbreak and remove the NSG-related part (I know it is quite cumbersome...). Hope this helps!
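Purely as an illustration of the recipe workaround, a post-install script could remove the NSG with the Azure CLI; the resource group and NSG names are placeholders, and the node would need the az CLI installed and authenticated (e.g. via a service principal):

```bash
#!/bin/bash
# Hypothetical post-install recipe: delete the NSG that Cloudbreak created.
# Assumes the Azure CLI is present and logged in on this node.
RESOURCE_GROUP="my-cluster-rg"   # placeholder
NSG_NAME="my-cluster-nsg"        # placeholder

az network nsg delete --resource-group "$RESOURCE_GROUP" --name "$NSG_NAME"
```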