Member since: 07-25-2019
Posts: 184
Kudos Received: 42
Solutions: 39
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1195 | 03-25-2019 02:25 PM
 | 639 | 01-22-2019 02:37 PM
 | 608 | 01-16-2019 04:21 PM
 | 1260 | 10-17-2018 12:22 PM
 | 763 | 08-28-2018 08:31 AM
05-23-2019
12:55 PM
@Steven Senior Cloudbreak does support configuring ADLS Gen2 (abfs) for HDP; please have a look at the docs: https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.9.0/cloud-data-access/content/cb_configuring-access-to-adls2.html Hope this helps!
05-07-2019
09:04 AM
@Jorge Martínez Reséndiz Sorry for the late response. The easiest solution seems to be to use a recipe to upgrade NiFi during cluster install via Cloudbreak. Here is the doc: https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.9.0/advanced-cluster-options/content/cb_creating-custom-scripts-recipes.html A rough sketch of such a recipe is below. Hope this helps!
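For illustration only, a minimal post-install recipe sketch. This assumes a yum-based image and a repository that already serves the newer NiFi build; the package name, log path, and upgrade method are my assumptions, not the documented procedure.

```
#!/bin/bash
# Hypothetical recipe sketch for upgrading NiFi after cluster install;
# adapt the package name and repository to your HDF/NiFi version.
set -euo pipefail

LOG=/var/log/recipes/nifi-upgrade.log
mkdir -p "$(dirname "$LOG")"

echo "$(date) starting NiFi upgrade" >> "$LOG"
yum upgrade -y nifi >> "$LOG" 2>&1
echo "$(date) NiFi upgrade finished" >> "$LOG"
```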
03-25-2019
04:49 PM
@heta desai Typically, Ranger is used for authorization across the different components of the HDP stack, so it might be more a question of Ranger's integration capability. Starting from version 2.8.0, Cloudbreak uses Workspaces to provide high-level authorization capabilities, but that capability is built in rather than provided through Ranger. Hope this helps!
03-25-2019
02:25 PM
@Nathaniel Vala This might be of help.
02-01-2019
03:26 PM
@Pushpak Nandi No, according to the documentation, you should create the Auto Scaling group with an admin user that has sufficient rights, and ensure that the "AWSServiceRoleForAutoScaling" role has been created automatically. Once that has succeeded, your cluster creation should proceed further. Hope this helps!
01-24-2019
09:11 AM
@Pushpak Nandi According to the AWS documentation, this might be your issue: "Amazon EC2 Auto Scaling creates the AWSServiceRoleForAutoScaling service-linked role for you the first time that you create an Auto Scaling group but do not specify a different service-linked role. Make sure that you have enabled the IAM permissions that allow an IAM entity (such as a user, group, or role) to create the service-linked role. Otherwise, the automatic creation fails. For more information, see Service-Linked Role Permissions in the IAM User Guide or the information about required user permissions in this guide." Hope this helps!
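If the automatic creation fails, one workaround (my suggestion, not part of the AWS quote above) is to create the service-linked role up front with the AWS CLI:

```
# Create the Auto Scaling service-linked role manually; requires the
# iam:CreateServiceLinkedRole permission on the calling identity.
aws iam create-service-linked-role --aws-service-name autoscaling.amazonaws.com
```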
01-22-2019
02:37 PM
Hi @Anton Zadorozhniy Currently there is no such fine-grained authorization implemented in Cloudbreak; regions, VPCs, and subnets are fetched on the fly from the cloud provider. What you can do is create a role-based credential with a specific AWS role constrained to specific resources (instead of "Resource": ["*"]; see the sketch below). You can specify resource-type-level authorization permissions in Cloudbreak with the help of workspaces; resources are shared inside a workspace. Hope this helps!
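For illustration, a minimal sketch of a constrained policy statement. The account ID, region, and action list are placeholders, not the full permission set Cloudbreak needs:

```
# Hypothetical constrained IAM policy fragment; adapt to your account.
cat > constrained-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:*"],
      "Resource": ["arn:aws:ec2:eu-west-1:123456789012:instance/*"]
    }
  ]
}
EOF
```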
01-17-2019
04:56 PM
2 Kudos
@Pushpak Nandi HDP 3 support is coming with Cloudbreak 2.9, to be released in the very near future (no exact date available yet). Hope this helps!
01-16-2019
04:21 PM
1 Kudo
@Manu A I assume you are using Cloudbreak. If so, the AWS tag value is validated against this pattern, and validation fails because the value starts with aws. The documentation shows that the aws: prefix is reserved. Hope this helps!
12-13-2018
02:06 PM
@Yi Zhang Sorry for the late answer. Cloudbreak currently does not support burning images from a managed image out of the box, but you can do it quite easily by slightly modifying the official Hortonworks packer.json, following the docs:
1. Set the parameters "custom_managed_image_name" and "custom_managed_image_resource_group_name".
2. Remove the parameters image_publisher, image_offer, image_sku, and image_version.
3. Start the build as usual.
See the sketch below. Hope this helps!
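As a rough illustration using the standard Packer azure-arm builder options, the edited builder fragment could look like this. The image and resource group names are placeholders, not values from the actual Hortonworks packer.json:

```
# Hypothetical azure-arm builder fragment after the edit; note that the
# image_publisher, image_offer, image_sku, and image_version keys are gone.
cat > builder-fragment.json <<'EOF'
{
  "type": "azure-arm",
  "custom_managed_image_name": "my-base-managed-image",
  "custom_managed_image_resource_group_name": "my-image-rg"
}
EOF
```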
11-28-2018
08:47 AM
@Ben Ybarra AFAIK, right now it is not on the roadmap. Hope this helps!
11-15-2018
09:51 AM
@Renaud Manus What kind of documentation are you referring to? As per the Cloudbreak docs, in the latest CB 2.8.0 version, Ambari and HDP upgrade on long-running clusters is not yet supported; that feature will be part of the upcoming release. Hope this helps!
11-08-2018
08:45 AM
@Phil Scott @Joe Diolosa @andrew chen Unfortunately there was an unrelated issue in our 2.7.3-rc.4 build causing an error, which we have corrected in 2.7.3-rc.21, so that is expected to work. Sorry about the inconvenience!
11-06-2018
12:33 PM
@Sachin Shinde This cross-project resource management is currently not supported by Cloudbreak. I would suggest contacting your company's Hortonworks representative, who can then create a feature request for Cloudbreak to support this in the near future. Hope this helps!
10-31-2018
02:31 PM
@Stefan Garrard You are right, the Docker image build was not completed for that RC build. Could you please try it out with 2.7.3-rc.16? Sorry for the inconvenience!
10-29-2018
01:21 PM
@Stefan Garrard You can upgrade to the newest version containing the fix by following these steps:
1. Navigate to your deployment directory, typically /var/lib/cloudbreak-deployment.
2. Edit and then run the following commands:
export CBD_VERSION=2.7.3-rc.6
curl -Ls public-repo-1.hortonworks.com/HDP/cloudbreak/cloudbreak-deployer_${CBD_VERSION}_$(uname)_x86_64.tgz | tar -xz -C /bin cbd
3. Verify the version: cbd version
4. Next, restart Cloudbreak with: cbd restart
Hope this helps resolve your issue!
10-26-2018
01:05 PM
@Phil Scott @Joe Diolosa @andrew chen Sorry for the late response. Your observation was right: there was a remote update by Google in the launched instances which resulted in Cloudbreak stopping after around one hour. The fix is already merged. Could you please try out the 2.7.3-rc.4 version, which already contains the fix? Hope this helps, and sorry for the inconvenience.
10-25-2018
08:38 PM
@navdeep agarwal Although I don't have a Terraform layout, I would propose the following approach instead:
1. Set up an external DB for Cloudbreak somewhere and create the 3 required databases (2nd point); see the sketch below.
2. Launch the Cloudbreak quick-start template based on AWS CloudFormation.
3. Edit the CloudFormation template in the AWS editor: modify the "cbdprofile" part and add the required variables of the 3rd point.
4. Create the stack with the modified template.
This should do it; the 1st point can also be automated. Hope this helps!
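For the first point, a minimal sketch of creating the databases on an external PostgreSQL instance. The host, user, and database names are placeholders; use whatever names your Cloudbreak version requires:

```
# Hypothetical external-DB setup for Cloudbreak; replace the host and
# database names with the ones required by your deployment.
psql -h mydb.example.com -U postgres <<'EOF'
CREATE DATABASE cbdb;
CREATE DATABASE periscopedb;
CREATE DATABASE uaadb;
EOF
```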
10-25-2018
09:48 AM
@Stefan Garrard Your issue is valid; I've opened a PR with the fix: https://github.com/hortonworks/cloudbreak/pull/4086 May I ask which version of Cloudbreak you are using?
10-24-2018
03:59 PM
@navdeep agarwal You are right about your presumption: HA services cannot be proxied. You can find the relevant piece of code here: https://github.com/hortonworks/cloudbreak/blob/master/core/src/main/java/com/sequenceiq/cloudbreak/service/ServiceEndpointCollector.java#L137-L141 Hope this helps!
10-17-2018
12:22 PM
@Jakub Igla It is on the roadmap, but I cannot share an exact schedule. Do you happen to have any issues with the storage-account-based image burning apart from it being deprecated? Thanks!
09-27-2018
08:02 AM
@Phil Scott Could you please attach the output of the following to the case:
cd /var/lib/cloudbreak-deployment
cbd ps
cbd create-bundle
That will contain all the logs necessary to find out what happened, without any sensitive info in them. Hope this helps!
09-05-2018
11:43 AM
We can provide some suggestions if you document a specific issue or error here in the question, along with logs. Otherwise it is pretty difficult to provide any guidance beyond the docs.
09-05-2018
11:20 AM
@Asvin Kumar You should try issuing the command with tracing enabled:
TRACE=1 cbd --help
This way you can troubleshoot what exactly is failing. In the meantime, I recommend using the latest CBD deployer version, 2.7.1. Hope this helps!
08-28-2018
08:31 AM
@Pankaj Singh This is currently a two-step process, meaning that after the cluster has successfully stopped, you will have the "Start" option to restart it. https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.7.1/content/clusters-manage/index.html#restart-a-cluster Hope this helps!
08-28-2018
08:20 AM
@Pankaj Singh According to the documentation, it is a new feature of HDF 3.2 to be co-located with HDP and managed by a single Ambari: "HDF and HDP can now be run in the same ecosystem. This means that HDF and HDP can be managed using the same version of Ambari – all security policies can be shared from a single Ranger instance, components can share a common security gateway via a single Knox instance and a single Apache Atlas instance can be used for all the metadata and governance services for both HDP and HDF components. Managing both components in the same instance ensures that errors are limited and there is no longer an operational burden to keep two instances of Ambari in sync." There is no such blueprint in Cloudbreak yet, but until it arrives you can start with an HDP cluster and complete the other necessary steps via recipes. Hope this helps!
08-22-2018
11:38 AM
@James Theoretically, recipe execution has no relation to the HDP stack; it is a Cloudbreak feature, so it should work if it was working for the previous HDP version. What is your error? Can you provide the UI logs and the cluster instance's logs from /var/log/recipes?
08-22-2018
11:26 AM
@Jakub Igla Your use case is perfectly valid, but unfortunately Cloudbreak does not support it yet (it is on the roadmap though). As a workaround, you can do the following:
1. Add this fragment to the Profile: export CB_JAVA_OPTS="-Dcb.arm.template.path=arm-v2.ftl"
2. Modify the arm-v2.ftl file relevant to the version of Cloudbreak you are using with the missing "Plan" parameters.
3. Save the file on your Cloudbreak machine in the /var/lib/cloudbreak-deployment/etc directory.
4. Restart Cloudbreak.
After the restart has completed, all subsequent cluster launches will use your new, customized ARM template! Hope this helps!
08-22-2018
07:39 AM
@Wei Law If you consider your question answered, could you also consider accepting the answer? Thank you!
08-21-2018
03:56 PM
@Santosh mirajkar It is a feature of the HDP stack to be able to configure "Cloud Storage Connectors", which allow you to access and work with data stored in Amazon S3, Azure ADLS and WASB, and Google Cloud Storage. Cloudbreak can automate the configuration of these connectors: it provides a wizard step for the connector relevant to the particular cloud provider (e.g. S3 for AWS, GCS for Google Cloud). The other connector can be configured in a recipe. You can then use distcp to transfer data from one to the other; see the sketch below. Hope this helps!
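For example, a minimal distcp invocation copying from S3 to ADLS Gen2, assuming both connectors are already configured. The bucket, container, and storage account names are placeholders:

```
# Copy data between the two stores; all names below are placeholders.
hadoop distcp \
  s3a://my-bucket/data \
  abfs://my-container@mystorageaccount.dfs.core.windows.net/data
```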