Member since: 01-07-2019
Posts: 217
Kudos Received: 135
Solutions: 18

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1931 | 12-09-2021 09:57 PM |
| | 1864 | 10-15-2018 06:19 PM |
| | 9240 | 10-10-2018 07:03 PM |
| | 4026 | 07-24-2018 06:14 PM |
| | 1478 | 07-06-2018 06:19 PM |
05-02-2017
08:29 PM
The error happens when you use the default settings and just specify the cluster name and Ambari password.
05-01-2017
06:25 PM
2 Kudos
In case this helps: back in March I got this error when using HDP 2.5 EDW-ETL in us-east. In my case, for some reason, the infrastructure creation took too long (an hour instead of about five minutes) and the security token expired during that time. In addition to what Jeff said, it may help to open the CloudFormation UI in the AWS console, find your stack, and post the data from the "Events" and "Resources" tabs.
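For reference, the same "Events" data can also be pulled programmatically. Here is a minimal sketch using the AWS SDK for Java v1; "my-stack" is a placeholder for your CloudFormation stack name:

```java
import com.amazonaws.services.cloudformation.AmazonCloudFormation;
import com.amazonaws.services.cloudformation.AmazonCloudFormationClientBuilder;
import com.amazonaws.services.cloudformation.model.DescribeStackEventsRequest;
import com.amazonaws.services.cloudformation.model.StackEvent;

public class StackEvents {
    public static void main(String[] args) {
        AmazonCloudFormation cfn = AmazonCloudFormationClientBuilder.defaultClient();
        DescribeStackEventsRequest req =
                new DescribeStackEventsRequest().withStackName("my-stack"); // placeholder
        for (StackEvent ev : cfn.describeStackEvents(req).getStackEvents()) {
            // Failed resources usually carry the root cause in the status reason.
            System.out.printf("%s %s %s %s%n",
                    ev.getTimestamp(), ev.getLogicalResourceId(),
                    ev.getResourceStatus(), ev.getResourceStatusReason());
        }
    }
}
```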
04-20-2017
06:42 PM
Hi @Namit Maheshwari, setting fs.defaultFS permanently to s3a is not recommended.
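Instead, you can keep fs.defaultFS pointing at HDFS and address S3 explicitly with an s3a:// URI per job. A minimal sketch using the standard Hadoop FileSystem API; "my-bucket" is a placeholder bucket name:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class S3aListing {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // fs.defaultFS stays hdfs://...
        // Resolve an s3a filesystem explicitly by URI instead of changing the default.
        FileSystem s3 = FileSystem.get(URI.create("s3a://my-bucket/"), conf);
        for (FileStatus status : s3.listStatus(new Path("s3a://my-bucket/data/"))) {
            System.out.println(status.getPath());
        }
    }
}
```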
04-17-2017
06:15 PM
@Paul ski It looks like you are using the HWX Sandbox. Have you tried "maria_dev" as the Ambari user/password? https://hortonworks.com/hadoop-tutorial/learning-the-ropes-of-the-hortonworks-sandbox/
04-05-2017
09:28 PM
Never mind, I think I got it: this defines the provisioning strategy for the case where the cloud provider cannot allocate all of the requested nodes.
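For intuition, here is a minimal illustrative sketch of how the two failure actions differ when the provider under-allocates. This is my own pseudologic under that reading of the option, not Cloudbreak's actual implementation:

```java
// Illustrative only: contrasts the two failure actions when a cloud provider
// returns fewer nodes than requested.
public class FailureAction {
    enum Strategy { BEST_EFFORT, EXACT }

    static boolean acceptCluster(Strategy s, int requested, int allocated, int minClusterSize) {
        switch (s) {
            case EXACT:
                // exact: anything less than the full request fails the provisioning.
                return allocated == requested;
            case BEST_EFFORT:
                // best effort: proceed as long as the minimum cluster size is met.
                return allocated >= minClusterSize;
            default:
                return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(acceptCluster(Strategy.EXACT, 10, 8, 5));       // false: provisioning fails
        System.out.println(acceptCluster(Strategy.BEST_EFFORT, 10, 8, 5)); // true: cluster keeps 8 nodes
    }
}
```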
04-05-2017
09:19 PM
@rkovacs How about this one: under "Choose Failure Action" > "Minimum Cluster Size" you can select **best effort** or **exact**. What does this option mean?
04-05-2017
06:24 PM
I am trying to set up Cloudbreak 1.14.1 on Azure and am wondering what the following advanced parameters mean. I cannot find them in the Cloudbreak product documentation.
1) Under Configure cluster:
a) Provision Cluster (SALT appears as the only option)?
b) Enable Lifecycle Management?
2) Configure Ambari and HDP repos?
Thanks!
Labels:
- Hortonworks Cloudbreak
04-05-2017
05:25 PM
5 Kudos
HDCloud for AWS general availability version 1.14.1 is now available, including six new HDP 2.6 and Ambari 2.5 cluster configurations and new cloud controller features. If you are new to HDCloud, you can get started using this tutorial (updated for 1.14.1). Official HDCloud for AWS documentation is available here.

**HDP 2.6 and Ambari 2.5**

Six new HDP 2.6 configurations are now available. For the list of all available HDP 2.5 and HDP 2.6 configurations, refer to the Cluster Configurations documentation.

**Resource Tagging**

When creating a cluster, you can optionally add custom tags that will be displayed on the CloudFormation stack and on EC2 instances, allowing you to keep track of the resources that the cloud controller creates on your behalf. For more information, refer to the Resource Tagging documentation.

**Node Auto Repair**

The cloud controller monitors clusters by checking for the Ambari Agent heartbeat on all cluster nodes. If the Ambari Agent heartbeat is lost on a node, a failure is reported for that node. Once the failure is reported, it is either fixed automatically (if auto repair is enabled) or options are available for you to fix it manually (if auto repair is disabled). You can configure auto repair settings for each cluster when you create it. For more information, refer to the Node Auto Repair documentation.

**Auto Scaling**

Auto Scaling provides the ability to increase or decrease the number of nodes in a cluster according to the auto scaling policies that you define. After you create an auto scaling policy, the cloud controller executes it whenever the conditions that you specified are met. You can create an auto scaling policy when creating a cluster, or manage auto scaling settings and policies on a cluster that is already running. For more information, refer to the Auto Scaling documentation.

**Protected Gateway**

HDCloud now configures a protected gateway on the cluster master node. This gateway is designed to provide access to various cluster resources from a single network port.

**Shared Druid Metastore (Technical Preview)**

When creating an HDP 2.6 cluster based on the BI configuration, you can have a Druid metastore database created with the cluster, or you can use an external Druid metastore backed by Amazon RDS. Using an external Amazon RDS database for the Druid metastore allows you to preserve the Druid metastore metadata and reuse it between clusters. For more information, refer to the Managing Shared Metastores documentation.

All of these features are available via the cloud controller UI or CLI.
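As a rough illustration of the heartbeat-based failure detection described under Node Auto Repair, here is a minimal sketch. The names and the timeout threshold are hypothetical; this is not the cloud controller's actual code:

```java
import java.time.Duration;
import java.time.Instant;
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of heartbeat-based node failure detection; names and
// threshold are assumptions, not the cloud controller's implementation.
public class AutoRepairSketch {
    static final Duration HEARTBEAT_TIMEOUT = Duration.ofMinutes(5); // assumed threshold

    static void checkNodes(Map<String, Instant> lastHeartbeat, boolean autoRepairEnabled) {
        Instant now = Instant.now();
        for (Map.Entry<String, Instant> e : lastHeartbeat.entrySet()) {
            if (Duration.between(e.getValue(), now).compareTo(HEARTBEAT_TIMEOUT) > 0) {
                // Ambari Agent heartbeat lost: report a failure for this node.
                System.out.println("Node failure reported: " + e.getKey());
                if (autoRepairEnabled) {
                    repairNode(e.getKey()); // fixed automatically
                } else {
                    System.out.println("Manual repair options available for " + e.getKey());
                }
            }
        }
    }

    static void repairNode(String nodeId) {
        System.out.println("Repairing " + nodeId);
    }

    public static void main(String[] args) {
        Map<String, Instant> beats = new HashMap<>();
        beats.put("worker-1", Instant.now());                               // healthy
        beats.put("worker-2", Instant.now().minus(Duration.ofMinutes(10))); // heartbeat lost
        checkNodes(beats, true);
    }
}
```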
04-05-2017
05:16 PM
You can try HDP 2.6 and Ambari 2.5 in HDCloud for AWS 1.14.1. Get started here.