Member since: 09-29-2015
Posts: 40
Kudos Received: 10
Solutions: 6
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1747 | 11-01-2017 03:25 PM
 | 803 | 03-25-2017 11:15 AM
 | 709 | 08-19-2016 01:38 PM
 | 1272 | 08-18-2016 02:08 PM
 | 637 | 05-12-2016 08:09 AM
03-19-2019
09:21 AM
Cloudbreak disables the ASG processes by default and only uses the ASG when a scaling action has been triggered through its API to make the necessary modifications. Cloudbreak cannot handle the situation where you enable all the processes on the ASG and AWS dynamically replaces nodes within the ASG. However, the Cloudbreak autoscaling functionality checks the status of every available node (Ambari agent) and marks the deleted instances on the UI as requiring manual action. In this case you can run a sync, or delete those instances manually from the UI, which removes the instances on the provider side if any of their resources still exist and then deletes the related metadata. By modifying the size of the cluster you can add new instances. The sync functionality can also be used to verify that the Cloudbreak metadata is up to date with the instances on the provider side.
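For reference, the suspended scaling processes can be inspected or changed from the AWS CLI; a minimal sketch, assuming an ASG named my-cb-worker-asg (a placeholder, not a name Cloudbreak actually uses):
# Show the ASG and its currently suspended processes (placeholder group name)
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names my-cb-worker-asg
# Keep the processes suspended outside of Cloudbreak-triggered scaling actions
aws autoscaling suspend-processes --auto-scaling-group-name my-cb-worker-asg
# Resuming all processes is what leads to the unsupported situation described above
aws autoscaling resume-processes --auto-scaling-group-name my-cb-worker-asg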
08-10-2018
08:16 AM
Hi @Sachin Shinde,
Cloudbreak has functionality to configure a custom hostname on the provisioned clusters, but it hasn't been documented yet because it is only available through our CLI tool. It can be configured in the cluster JSON template with the "customDomain" key, like this:
{
"general": {
"credentialName": "mycred",
"name": "myCluster-1"
},
"customDomain": {
"customDomain": "hortonworks.com",
"customHostname": "prod"
},
"placement": {
"availabilityZone": "eu-west-1a",
"region": "eu-west-1"
},
........
Another possible solution on AWS: https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.7.1/content/hostnames/index.html
On the other hand, it is not a good idea to manipulate the hostnames manually during instance creation, because Cloudbreak-provisioned instances have a custom DHCP hook script that configures our underlying DNS (Unbound) with the settings that come from the DHCP server. Br, Tamas
08-08-2018
04:14 PM
Hi @Sachin Shinde,
Cloudbreak has a recipe functionality that is designed to run custom bash/python scripts at different lifecycle stages of the cluster: https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.7.1/content/recipes/index.html Br, Tamas
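A minimal sketch of what such a recipe script could look like (the package and log path below are placeholders, not anything Cloudbreak requires):
#!/bin/bash
# Example recipe body; Cloudbreak executes it as root on the selected host groups.
set -e
# Placeholder: install an extra OS package needed by the cluster
yum install -y jq
# Placeholder: leave a marker so you can verify that the recipe ran
echo "recipe executed at $(date)" >> /var/log/custom-recipe.log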
08-03-2018
01:46 PM
Hi @Sachin Shinde,
Most of the time we have worked with CentOS, and we also have RedHat-based images for many providers due to customer requirements. We are not aware of any issues.
08-01-2018
04:34 PM
Our images in tar.gz format are also around 1 GB (for example: https://storage.cloud.google.com/sequenceiqimage/cb-hdp--1808011112.tar.gz?authuser=1&folder&organizationId&_ga=2.24157310.-577219106.1530691777). I haven't seen any special config, but our tar.gz is public. The copy method is called on the GCP SDK, and that is what timed out for you, so it was probably able to reach the source object. As far as I checked, the rewrite method should not time out, but before we start hunting for the issue in the code base, could you please try starting a cluster in a different region? Our process burns the images, and on the first use of an image Cloudbreak copies it from the tar.gz format, so the same method call is used for every new image and we haven't experienced this issue. The call in question: Storage.Objects.Copy copy(@NotNull String sourceBucket,
@NotNull String sourceObject,
@NotNull String destinationBucket,
@NotNull String destinationObject,
com.google.api.services.storage.model.StorageObject content)
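If you want to rule out a reachability problem manually, a quick check with the Cloud SDK could look like this (assuming the public URL above maps to gs://sequenceiqimage/cb-hdp--1808011112.tar.gz; the destination bucket is a placeholder):
# Confirm the public source object is readable from your project
gsutil stat gs://sequenceiqimage/cb-hdp--1808011112.tar.gz
# Try copying it into one of your own buckets in the target region (placeholder bucket name)
gsutil cp gs://sequenceiqimage/cb-hdp--1808011112.tar.gz gs://my-test-bucket/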
08-01-2018
01:26 PM
Hi @Sachin Shinde,
First of all, why have you decided to go ahead without our image burning process? A custom image burning process may skip some of the required prerequisites, and on the other hand our solution is extendable with custom logic. What is the size of the created image? We haven't experienced anything similar with our images; they are between 15 and ~30 GB. Br, Tamas
06-07-2018
04:57 PM
@Jakub Igla The post-cluster-install recipe could work, but it looks like a dirty workaround because the Ambari credentials would be needed in the script to communicate with the Ambari server. Cloudbreak adds "dfs.datanode.data.dir" to a "configurations" array section for every hostgroup in the host_groups array, so you can add the attached disks' data dirs to the hostgroup that contains the datanodes, this way: "host_groups": [
{
"name": "master",
"configurations": [
{
"hdfs-site": {
"dfs.datanode.data.dir": "/hadoopfs/fs1/hdfs/datanode"
}
}
],
"components": [
{
"name": "APP_TIMELINE_SERVER"
},
{
"name": "HCAT"
},
{
"name": "HDFS_CLIENT"
},
{
"name": "HISTORYSERVER"
}
]
},
{
"name": "compute",
.......
06-07-2018
12:25 PM
1 Kudo
It worked for us when we updated the "data.dir" in Ambari and restarted the necessary services. Updating the "DataNode directories" field alone doesn't solve the issue; we had to click "Switch to 'hdp26-data....k2:worker'" and update the settings for the hostgroup that contains the datanodes in the text area that is not editable by default. A Save and restart then updated the available DFS space in Ambari. Could you please check the mentioned steps again? We didn't run any additional DFS-related command.
06-07-2018
10:47 AM
1 Kudo
Hi @Jakub Igla, After you have updated the "dfs.datanode.data.dir" property in Ambari and saved the config, you should restart the entire HDFS service for your modification to take effect on the cluster. It should happen automatically if you extend your blueprint with the necessary configs. I created a blocker ticket about this issue, so it should be fixed soon. Here you can find the release notes of 2.6.0, which mostly cover the new features compared to version 2.4.2: https://docs.hortonworks.com/HDPDocuments/Cloudbreak/Cloudbreak-2.6.0/content/releasenotes/index.htm Downgrading is not a supported feature, so I can only recommend creating a new Cloudbreak deployment if you want to play with version 2.4.2.
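If you prefer to script the HDFS restart instead of clicking through the Ambari UI, a rough sketch with the Ambari REST API could look like this (the host, credentials and cluster name are placeholders):
# Stop HDFS (placeholder Ambari host, credentials and cluster name)
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"Stop HDFS"},"Body":{"ServiceInfo":{"state":"INSTALLED"}}}' http://ambari-host:8080/api/v1/clusters/mycluster/services/HDFS
# Start HDFS again so the new dfs.datanode.data.dir value is picked up
curl -u admin:admin -H "X-Requested-By: ambari" -X PUT -d '{"RequestInfo":{"context":"Start HDFS"},"Body":{"ServiceInfo":{"state":"STARTED"}}}' http://ambari-host:8080/api/v1/clusters/mycluster/services/HDFS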
06-06-2018
12:23 PM
Hi @Jakub Igla, You are checking the HDFS disk space, which does not equal the sum of all the attached disks. HDFS is a redundant file system and replicates the data across the attached disks, so you cannot use the entire raw size of the disks unless you configure the block replication to something lower than the default (which is 3, I think), and that is not recommended. You can check the "Block replication" value under HDFS config -> Advanced in the Ambari UI.
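As a rough illustration (the numbers are made up): with the default replication factor of 3, ten 1 TB data disks yield roughly 10/3 ≈ 3.3 TB of usable HDFS capacity. You can also query the effective values from any cluster node:
# Check the configured replication factor
hdfs getconf -confKey dfs.replication
# Show configured capacity, DFS used and DFS remaining per datanode
hdfs dfsadmin -report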
06-05-2018
07:12 AM
Hi @Bimal Mehta, As this thread is about not being able to open the Ambari UI, you need to run the script on the node where the Ambari server is running. SSH to that instance and run the script; it will generate a new certificate for the machine's nginx with the right IP address, which avoids this kind of certificate issue.
04-16-2018
03:28 PM
Hi @Binbin Hao,
You don't have to specify a private key, because Cloudbreak creates its own for every cluster and only uses it temporarily to start and configure the gateway services on the master node. After the gateway services have been started, the key is deleted and Cloudbreak communicates with the cluster through two-way SSL only. Do you use your own image? If not, could you please send the ID of the image that was used to create the cluster? The image ID can be found on the details page of the cluster by clicking the "IMAGE DETAILS" tab. Br, Tamas
04-06-2018
02:01 PM
Hi @Dominik Kappel, Only certain OpenStack versions are supported by Cloudbreak, and unfortunately Queens is not among them. I tested the image by following the docs on one of our supported environments and it worked fine. I would really like to help, but we don't have an infrastructure to reproduce this on.
If you have a support subscription, feel free to open a ticket and we will be happy to assist you with the further investigation. Br, Tamas
04-05-2018
12:26 PM
@Dominik Kappel I double-checked, and our OpenStack could create an instance from the imported image successfully. Could you please tell us more about your OpenStack infrastructure and its version?
04-03-2018
12:33 PM
Hi @Shant Hovsepian, the upgrade cannot be done with a cbd command because only an RC release contains the necessary fix. You can download it by running the following curl command: curl -LO s3.amazonaws.com/public-repo-1.hortonworks.com/HDP/cloudbreak/cloudbreak-deployer_2.4.1-rc.27_$(uname)_x86_64.tgz
@Huy Duong Cloudbreak manages Ambari and all of the cloud resources. In your case something went wrong in the orchestration or the Ambari cluster installation phase. This can also happen if you modify some config manually instead of waiting for Cloudbreak to finish the installation. For any investigation the logs and the event history would be necessary, but it is definitely not related to changes in the Google APIs.
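A sketch of how the downloaded archive is typically installed, assuming the tarball contains the cbd binary and that /usr/local/bin is an acceptable target directory on your deployer VM:
# Extract the cbd binary from the RC tarball onto the PATH (assumed location)
sudo tar -xzf cloudbreak-deployer_2.4.1-rc.27_$(uname)_x86_64.tgz -C /usr/local/bin cbd
# Verify the new version from the deployment folder
cbd version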
11-01-2017
07:36 PM
@Vadim Vaks You are welcome and thanks for the minimized query parameters.
11-01-2017
03:25 PM
1 Kudo
Hi @Vadim Vaks, I tested with the latest version of Cloudbreak and the "cbd util token" command still works for me. But if you use the address of the proxy server that provides the SSL, then you should use the "/cb" sub-path to send requests to the API, and the endpoints can be found under the "/api/v1" path, like:
curl -k -X GET -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" https://192.168.99.100/cb/api/v1/stacks/user
On the other hand, your curl command is not parameterized properly. The URL should look like:
http://192.168.99.100:8089/oauth/authorize?response_type=token&client_id=cloudbreak_shell&scope.0=openid&source=login&redirect_uri=http://cloudbreak.shell
Br, Tamas
10-03-2017
08:05 AM
Could you please check the version of your Cloudbreak by running "cbd version" in the folder of the deployment (/var/lib/cloudbreak-deployment by default)? It looks like a bug where the required repo details haven't been sent properly from the UI side.
10-02-2017
08:09 AM
Hi @Henrique Silva, You are right, the "PUBLIC_IP" and "CB_TRAEFIK_HOST_ADDRESS" variables in the Profile file need to be updated in the "/var/lib/cloudbreak-deployment" folder on the VM of the deployment. Br, Tamas
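A minimal sketch of the relevant Profile entries (the IP address is a placeholder, and restarting the deployment afterwards with cbd is an assumption about the usual workflow):
# /var/lib/cloudbreak-deployment/Profile (placeholder address)
export PUBLIC_IP=203.0.113.10
export CB_TRAEFIK_HOST_ADDRESS=203.0.113.10
# then restart the deployment so the new values take effect (assumed step)
cbd restart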
10-02-2017
07:57 AM
1 Kudo
Hi @ryan xia, @Pranay Vyas is right: if you use the Cloudbreak UI, you can change the Ambari repo details by using the advanced options in the cluster creation wizard. And do not forget to check that the selected blueprint's stack version is right, because the default ones could also be 2.5. Br, Tamas
07-18-2017
09:15 AM
Hi @Anandha L Ranganathan, Maybe the best way to debug this issue is to create a deployment with the Options -> Advanced -> "Rollback on failure" option set to false. In this case the deployment has to be deleted manually in the AWS CloudFormation service after the debugging has finished. This way you can check the applied CF template and the created events and resources in the CloudFormation service. As far as I checked the referenced template, we only use references for resources when creating the CloudFormation template, except for the public route. That rule is dedicated to allowing outgoing connections from the created cbd instance. But probably in your specific network setup that part is not working as we expected, so please check the route table and its rules. I guess there are either rules blocking the outgoing connections, or the CloudFormation references point to the wrong gateway and route table in your VPC. The relevant part of the template:
"VPC" : {
"Type" : "AWS::EC2::VPC",
"Properties" : {
"CidrBlock" : "10.0.0.0/16",
"EnableDnsSupport" : "true",
"EnableDnsHostnames" : "true",
"Tags" : [
{ "Key" : "Application", "Value" : { "Ref" : "AWS::StackId" } }
]
}
},
"PublicSubnet" : {
"Type" : "AWS::EC2::Subnet",
"Properties" : {
"MapPublicIpOnLaunch" : true,
"VpcId" : { "Ref" : "VPC" },
"CidrBlock" : "10.0.0.0/24",
"Tags" : [
{ "Key" : "Application", "Value" : { "Ref" : "AWS::StackId" } }
]
}
},
"InternetGateway" : {
"Type" : "AWS::EC2::InternetGateway",
"Properties" : {
"Tags" : [
{ "Key" : "Application", "Value" : { "Ref" : "AWS::StackId" } }
]
}
},
"AttachGateway" : {
"Type" : "AWS::EC2::VPCGatewayAttachment",
"Properties" : {
"VpcId" : { "Ref" : "VPC" },
"InternetGatewayId" : { "Ref" : "InternetGateway" }
}
},
"PublicRouteTable" : {
"Type" : "AWS::EC2::RouteTable",
"Properties" : {
"VpcId" : { "Ref" : "VPC" },
"Tags" : [
{ "Key" : "Application", "Value" : { "Ref" : "AWS::StackId" } }
]
}
},
"PublicRoute" : {
"Type" : "AWS::EC2::Route",
"DependsOn" : [ "PublicRouteTable", "AttachGateway" ],
"Properties" : {
"RouteTableId" : { "Ref" : "PublicRouteTable" },
"DestinationCidrBlock" : "0.0.0.0/0",
"GatewayId" : { "Ref" : "InternetGateway" }
}
},
"PublicSubnetRouteTableAssociation" : {
"Type" : "AWS::EC2::SubnetRouteTableAssociation",
"Properties" : {
"SubnetId" : { "Ref" : "PublicSubnet" },
"RouteTableId" : { "Ref" : "PublicRouteTable" }
}
},
07-13-2017
04:34 PM
1 Kudo
Hi @Anandha L Ranganathan, From the last error I think @Dominika Bialek is right and the HDC Controller could not connect to the RDS service due to network or security rule limitations. On the other hand, could you please give the https://aws.amazon.com/marketplace/pp/B01LXOQBOU?qid=1499963967598&sr=0-2&ref_=srh_res_product_title templates a try? From the attached PDF and the last comment's logs it looks like you are still on version 1.14.4. You can also check from the controller that the RDS service is reachable by running the following commands.
Check the domain name: nslookup hdc-test.cluster-czvrt6ojpbos.us-west-2.rds.amazonaws.com
Check that the port is open on the specified machine: telnet hdc-test.cluster-czvrt6ojpbos.us-west-2.rds.amazonaws.com 3306
Br, Tamas
07-10-2017
07:35 AM
1 Kudo
Hi @Anandha L Ranganathan, Could you please check what @Dominika Bialek has recommended? On the other hand, if you create a new deployment through the CloudFormation wizard, please set the value of Options -> Advanced -> "Roll back on failure" to false. Then CloudFormation won't roll back the resources when something fails, and you will be able to SSH to your instance and check the logs of the deployment in the "/var/lib/cloudbreak-deployment" folder by running "cbd logs".
Please attach the mentioned logs once you have created an HDC deployment with these additional configs, and please also attach the output of the "cbd ps" command. Thanks, Tamas
04-19-2017
03:05 PM
Hi @Peter Teunissen, I have checked and the image exists in the storage account; here is a link to it: https://sequenceiqwesteurope2.blob.core.windows.net/images/hdc-hdp--1703101455.vhd Could it be that your deployment is not able to reach the VHD because of some custom security rule? Have you tried installing a cluster again? Maybe some Azure service was just unavailable at that time. Br, Tamas
03-25-2017
11:15 AM
1 Kudo
Hi @dbalasundaran, Default resources like blueprints are created at the first GET request to the desired resource endpoint. When you used the UI, the default resources were created at that time, and after that hdc create-cluster probably worked. You always need to trigger the creation of the default blueprints if you want to use them. One option is to use the UI; the other is to run "hdc list-cluster-types", which will create the default blueprints for you.
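So a CLI-only flow could look roughly like this (a sketch; the create-cluster arguments are omitted because they depend on your setup):
# Trigger creation of the default blueprints first (also lists the available cluster types)
hdc list-cluster-types
# Only then run the cluster creation with your usual arguments
hdc create-cluster ...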
01-10-2017
04:14 PM
Hi @jhals99, I do not know exactly which level you need, but creating a support ticket or getting in touch with the Solution Engineer (SE) on the project should answer this question. LUKS encryption should work, but we haven't tried it, so I am not 100% sure about that. As far as I know we haven't tested S3A from this angle. I think you should post a question in the Hadoop and/or HDFS topics. Maybe the JIRAs in the S3A docs could be a good starting point for finding the right contact for these questions. Br, Tamas
10-18-2016
03:42 PM
Hi @cduby, What kind of users would you like to create automatically with Cloudbreak? Cloudbreak has a "recipes" feature that can run scripts after the cluster installation has finished. Maybe it could create the desired users for you: http://sequenceiq.com/cloudbreak-docs/latest/recipes/ Br, Tamas
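For illustration only, a post-install recipe that creates OS users might look something like this (the group and user names are placeholders):
#!/bin/bash
# Placeholder post-install recipe: create a group and a couple of OS users on every node
set -e
groupadd -f analytics
for u in alice bob; do
  id "$u" >/dev/null 2>&1 || useradd -m -g analytics "$u"
done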
08-19-2016
01:38 PM
1 Kudo
Hi @Kenneth Graves, Please check that the defective node is in a slave/worker hostgroup, because the following solution is based on the Cloudbreak upscale functionality, which only works on slave/worker hostgroups. First you should sync the cluster state; a sync can be triggered through the Cloudbreak UI when the "cluster details" panel is open. After the sync has been applied, the defective node should be marked in the node list of the "cluster details" panel and a terminate button should appear in the last column. To replace the terminated node, simply upscale the terminated node's hostgroup with new nodes. If the defective node is in a master hostgroup (one that contains master components), then I am afraid there is no way to replace it through Cloudbreak. Br, Tamas
08-19-2016
11:48 AM
Hi @Aengus Rooney, I tried to reproduce your issue with the linked template, without success. As @Michael Young mentioned, you can find the Cloudbreak deployment Profile and most of the logs under '/var/lib/cloudbreak-deployment'. But we also use cloud-init on the VMs under the hood, and it looks like the initialization of the deployment, which is triggered by cloud-init, failed.
So could you please share with us the log under '/var/log/cbd-quick-start.log'? Br, Tamas
08-19-2016
07:21 AM
Hi @KC, Yes, you are right: the remote access setting can prevent access to the instances. In this case specifically, the Control Plane that runs Cloudbreak under the hood could not SSH to the provisioned VMs because it has a different IP than yours. We have a huge timeout for this phase of the installation because, in the case of large clusters, it takes time to SSH to every instance (booting the instances also takes more time). But you are right, we have to find a way to identify this kind of situation more quickly. I will raise an issue for this. Br, Tamas