Member since: 07-25-2019
Posts: 184
Kudos Received: 42
Solutions: 39
My Accepted Solutions

| Title | Views | Posted |
|---|---|---|
| | 1014 | 03-25-2019 02:25 PM |
| | 550 | 01-22-2019 02:37 PM |
| | 493 | 01-16-2019 04:21 PM |
| | 1034 | 10-17-2018 12:22 PM |
| | 661 | 08-28-2018 08:31 AM |
01-04-2018
05:11 PM
@Pravin Dsilva Which version of OpenStack are you using? These versions are currently supported by Cloudbreak. There is a long-standing known issue in OpenStack where the "description" field must be longer than one character even though the field itself is not mandatory (here, here and here), and it has not been fixed. Cloudbreak sends an empty description, so I am filing a ticket to investigate this and will get back to you with updates.
12-29-2017
10:28 AM
@Airawat Sorry for the long response time. Kerberos support was in Tech Preview in Cloudbreak and was completely refactored in version 2.2. It would be very helpful to our team if you could try to reproduce the issue in v2.2. It is not listed in the Marketplace but can be installed from an ARM template. The Kerberos documentation for Cloudbreak 2.2 is here. One thing to note: since the Cloudbreak UX has been completely renewed, Availability Set support will only be included in the yet-to-be-released Cloudbreak 2.3. Thank you for your support!
12-29-2017
10:04 AM
@Cibi Chakaravarthi If your original question was answered, would you please consider accepting the answer? Thanks!
12-29-2017
09:51 AM
@angela lukas You are right, this seems to be a bug in that version. Even if you place a config file, the CLI may fail to read it because of this issue. May I ask if you have a specific use case for HDC over Cloudbreak? The latter has a CLI, supports AWS, and is a feature superset of HDC. Hope this helps!
12-15-2017
01:27 PM
@Airawat If you consider your original question answered, would you please accept the answer? Thanks!
12-12-2017
02:11 PM
@Abhishek Sakhuja Do you have any updates on this one? Have you managed to get it working? If you consider your original question answered, would you please accept the answer?
12-12-2017
01:53 PM
@Airawat Have you had the chance to have a look at this one? Thanks!
12-12-2017
01:52 PM
@Cibi Chakaravarthi I suggest you try the new 1.16.5 version, as this part of the code has been refactored in it. The update should not affect your running clusters and can be performed with a single command: "cbd update". Hope this helps!
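For reference, the update flow is roughly the following (a sketch based on the deployer's standard commands; the deployment directory path is an assumption, use wherever your Profile file lives):

```bash
cd /var/lib/cloudbreak-deployment  # assumption: your cbd deployment directory
cbd version   # check the currently running version first
cbd kill      # stop the running Cloudbreak containers
cbd update    # update the deployer binary and container images
cbd start     # start Cloudbreak again
```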
12-06-2017
06:22 PM
@Cibi Chakaravarthi I assume you are using managed disks; if so, this is a known issue that was fixed in Cloudbreak release 1.16.5. You can try updating by following the documentation, or you can launch Cloudbreak 1.16.5 from the Azure Marketplace. Sorry for the inconvenience, and I hope this helps!
12-06-2017
04:35 PM
@Airawat Yes, it would be useful to support that, and it is on our roadmap too, but since the encryption process can take 3-16 hours to finish (typically around 10 hours in our trials), the feature implementation is currently on hold. If performance improves, we will reconsider supporting this. Your thoughts are welcome on this topic.
12-06-2017
01:34 PM
@Airawat I've just double-checked the new Cloudbreak 1.16.5 version in the Marketplace and it seems to have the correct version. Could you please verify? Thanks!
12-06-2017
01:00 PM
@Cibi Chakaravarthi There should be some useful information in the logs (there is no sensitive data in them), so please attach them to the case so we can investigate.
12-06-2017
10:49 AM
@Abhishek Sakhuja The error message indicates that your HDFS is running out of space. The amount of free space is fetched from Ambari and calculated as follows:

```groovy
def Map<String, Map<Long, Long>> getDFSSpace() {
    def result = [:]
    // Fetch the NameNode dfs metrics from Ambari
    def response = utils.slurp("clusters/${getClusterName()}/services/HDFS/components/NAMENODE", 'metrics/dfs')
    log.info("Returned metrics/dfs: {}", response)
    def liveNodes = slurper.parseText(response?.metrics?.dfs?.namenode?.LiveNodes as String)
    if (liveNodes) {
        liveNodes.each {
            // Only count DataNodes that are in service
            if (it.value.adminState == 'In Service') {
                // Map: hostname -> [remaining bytes: used bytes]
                result << [(it.key.split(':')[0]): [(it.value.remaining as Long): it.value.usedSpace as Long]]
            }
        }
    }
    result
}
```

Please check the Ambari UI; it may be that Ambari calculates the free space incorrectly. Hope this helps!
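If you want to double-check the same numbers yourself, you can query the NameNode metrics this code parses directly from the Ambari REST API. A minimal sketch (host, credentials and cluster name are placeholders):

```bash
# Fetch the NameNode dfs metrics (including LiveNodes) that the code above parses
curl -s -u admin:admin \
  "http://AMBARI_HOST:8080/api/v1/clusters/CLUSTER_NAME/services/HDFS/components/NAMENODE?fields=metrics/dfs" \
  | python -m json.tool
```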
12-05-2017
09:48 AM
@Airawat Unfortunately, the publish process was already ongoing when this issue was reported; you can check the changelog here. Thanks!
12-04-2017
11:44 PM
@Airawat Just FYI, the new 1.16.5 version is now live on the Marketplace, so you can try it out. It includes the Availability Set fix (one availability set can now be assigned to multiple hostgroups) as well as Premium Managed Disk support. Hope it helps!
12-04-2017
02:42 PM
@Cibi Chakaravarthi What is your Cloudbreak version ("cbd version")? Could you attach some logs (cbreak.log file or the output of "cbd logs")?
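For reference, collecting both in one go looks roughly like this (the deployment directory is an assumption, use wherever your Profile file lives):

```bash
cd /var/lib/cloudbreak-deployment  # assumption: your cbd deployment directory
cbd version                        # prints the Cloudbreak Deployer version
cbd logs > cbd-logs.txt            # capture the container logs (Ctrl-C when done)
```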
12-03-2017
12:29 PM
@Airawat You are correct, this is only temporary: a new Cloudbreak 1.16.5 release is coming to the Marketplace in the next few days, once the publish review process is finished. Sorry for the inconvenience.
11-28-2017
02:02 PM
@prarthana basgod Sorry for the late response. If your problem still persists, could you please share the logs of the recipe run and the Cloudbreak logs as well (cbd logs cloudbreak)?
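A sketch of what to collect; the recipe log location below is the usual default, so please verify it for your version:

```bash
# On the cluster node where the recipe ran (usual default location;
# verify for your Cloudbreak version)
cat /var/log/recipes/*.log

# On the Cloudbreak Deployer machine, capture the Cloudbreak container logs
cbd logs cloudbreak > cloudbreak-logs.txt
```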
11-28-2017
01:57 PM
@Maheswara Talla Cloudbreak itself only configures ADLS access for the cluster (e.g. by placing properties in core-site.xml); the necessary JARs are part of the HDP distribution, so you should install a version that supports ADLS. You can find more information here. Hope this helps!
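A quick way to verify is to check for the upstream hadoop-azure-datalake client on a cluster node; a sketch, with the account name as a placeholder:

```bash
# Check whether the installed HDP ships the ADLS client jar
ls /usr/hdp/current/hadoop-client/ | grep -i datalake

# If the jar is present and core-site.xml is configured, adl:// paths should resolve
hadoop fs -ls adl://YOUR_ACCOUNT.azuredatalakestore.net/
```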
10-20-2017
08:55 PM
1 Kudo
@Airawat You can burn custom HDP images for Cloudbreak with the help of this repository; there is quite extensive documentation there. RHEL images can be burnt by setting the following parameters appropriately:

AZURE_IMAGE_PUBLISHER (OpenLogic|RedHat)
AZURE_IMAGE_OFFER (CentOS|RHEL)
AZURE_IMAGE_SKU (6.8|7.2)

The new images can be registered with Cloudbreak without publishing them to the Azure Marketplace. Hope this helps!
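As a rough illustration, the parameters are passed as environment variables to the image build; the make target below is a placeholder, so check the repository's README for the actual command:

```bash
# Placeholder build target; consult the repository's documentation.
# These are the three variables to set for a RHEL image.
AZURE_IMAGE_PUBLISHER=RedHat \
AZURE_IMAGE_OFFER=RHEL \
AZURE_IMAGE_SKU=7.2 \
make build-azure
```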
10-17-2017
09:52 AM
@Airawat Your issue seems to be valid, but unfortunately this is not currently possible, as each availability set can be assigned to at most one hostgroup. I have filed a ticket, so this restriction will be removed in a future release.
10-12-2017
04:32 PM
@Anagha Khanolkar Regarding feature timing, unfortunately I cannot make any statements, so you should contact your customer's sales or solution engineering contact.
10-12-2017
12:04 PM
2 Kudos
@Amey Hegde There are two ways to achieve this (see the cron sketch after this list for the first one):

1. The most cost-effective approach is to use Cloudbreak shell to automate cluster creation and tear-down, and schedule it with e.g. cron. You can even start the application with a post recipe.
2. The other way is to use time-based autoscaling and keep the minimum number of nodes alive. Post recipes will run on upscale. In this case, keep in mind that data nodes should not be subject to downscale.

Hope this helps!
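As an illustration of the first option, a crontab sketch; create-cluster.sh and kill-cluster.sh are hypothetical wrapper scripts that drive Cloudbreak shell with your cluster template:

```bash
# Hypothetical wrappers around Cloudbreak shell: bring the cluster up at
# 08:00 on weekdays and tear it down at 20:00.
0 8 * * 1-5  /opt/scripts/create-cluster.sh >> /var/log/cluster-schedule.log 2>&1
0 20 * * 1-5 /opt/scripts/kill-cluster.sh   >> /var/log/cluster-schedule.log 2>&1
```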
10-12-2017
08:03 AM
@Ali Mohammadi Shanghoshabad That first step has very little to do with Hortonworks, which is why I did not elaborate on it in detail. As a rule of thumb, if you can go with the KVM hypervisor, you should, because:

- It is the only officially supported hypervisor for OpenStack, and most OpenStack development is done with KVM.
- Cloudbreak supports KVM for OpenStack.
- The reference architecture uses KVM as well.

Hope this helps!
10-11-2017
09:12 AM
@Tim Shephard You were right that WebHDFS is not exposed, but the HDFS UI is exposed via the NameNode service, so it should be accessible if you enable "Protected Gateway Access to Cluster Components". If you would like to enable WebHDFS, there is a workaround (see the command sketch below):

1. SSH to the master node of the cluster.
2. Edit /srv/pillar/gateway/init.sls and add "WEBHDFS" to gateway:exposed, like below:

```
gateway:
  address: 172.21.250.198
  exposed: [WEBHDFS]
  location:
    ....
```

3. After saving, run salt '*' state.highstate (this regenerates the Knox topology).

Hope this helps!
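Condensed into commands, the workaround looks roughly like this (the master node address is a placeholder and the edit itself is manual):

```bash
ssh cloudbreak@MASTER_NODE_IP      # default SSH user on Cloudbreak images
vi /srv/pillar/gateway/init.sls    # add WEBHDFS to gateway:exposed as above
sudo salt '*' state.highstate      # regenerate the Knox topology
```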
10-10-2017
05:06 PM
@Tim Shephard No, I did not write anything related to WebHDFS in my answer, but in fact both the HDFS UI and WebHDFS are supported and should work if checked in the UI. Hope this helps!
10-09-2017
01:10 PM
@Ali Mohammadi Shanghoshabad You have this option, which covers all three requirements (a sketch for the deployer part follows the list):

1. Install and set up an OpenStack version compatible with Cloudbreak.
2. Install Cloudbreak Deployer and start Cloudbreak.
3. Create an OpenStack credential (which connects your OpenStack account with Cloudbreak) and infrastructure templates in Cloudbreak.
4. Create or reuse a blueprint matching your HDP workload type.
5. Launch cluster(s).
6. Configure autoscaling and add users via Ambari as needed.

Hope this helps!
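For step 2, a minimal sketch of getting the deployer running, based on the standard Cloudbreak Deployer flow (the cbd binary is assumed to be installed already; the download step varies by version, see the docs):

```bash
mkdir -p /var/lib/cloudbreak-deployment && cd /var/lib/cloudbreak-deployment

# Minimal Profile; PUBLIC_IP is the address users will reach Cloudbreak on
cat > Profile <<'EOF'
export PUBLIC_IP=YOUR_PUBLIC_IP
EOF

cbd generate  # render the docker-compose configuration
cbd start     # pull images and start Cloudbreak
```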
10-06-2017
09:26 AM
@Tim Shephard Are you using your own custom VPC and subnet? If so, have you gone through this checklist? https://hortonworks.github.io/hdp-aws/security-vpc/index.html#configuring-your-own-vpc Hope this helps!
10-06-2017
08:17 AM
2 Kudos
@Anagha Khanolkar Good observation. At the time of the early-adopter implementation, only Standard LRS storage was supported for managed disks; Premium LRS has been introduced since then. That is why "Volume type" is hidden from the UI and Standard LRS is used. Premium managed disks will be supported in an upcoming Cloudbreak version. Would you like to use Premium LRS? Hope this helps!
09-25-2017
08:48 AM
@Matt Andruff This:

```
cbdb=# select * from clustercomponent where id = 97
```

should instead be:

```
cbdb=# select * from clustercomponent where cluster_id = 97
```

There should be 3 rows. And please send us the exact Cloudbreak version as well! Thanks!
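If it helps, the query can be run inside the deployer's database container; the container name and user below are the usual cbd defaults and may differ on your install (check "docker ps"):

```bash
# Open a psql shell in the Cloudbreak database container (name/user are
# the usual defaults; adjust if yours differ)
docker exec -it cbreak_commondb_1 psql -U postgres cbdb
# then inside psql:
#   select * from clustercomponent where cluster_id = 97;
```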