Member since: 07-25-2019
Posts: 184
Kudos Received: 42
Solutions: 39
My Accepted Solutions
Title | Views | Posted
---|---|---
| 2246 | 03-25-2019 02:25 PM
| 1128 | 01-22-2019 02:37 PM
| 1265 | 01-16-2019 04:21 PM
| 2659 | 10-17-2018 12:22 PM
| 1334 | 08-28-2018 08:31 AM
12-05-2017 09:48 AM
@Airawat Unfortunately, the publish process was already under way when this issue was reported; you can check the changelog here. Thanks!
12-04-2017 11:44 PM
@Airawat Just FYI: the new 1.16.5 version is now live on the Marketplace, so you can try it out. It includes the Availability Set fix (one availability set can now be assigned to multiple hostgroups) and Premium Managed Disk support. Hope it helps!
12-04-2017 02:42 PM
@Cibi Chakaravarthi What is your Cloudbreak version ("cbd version")? Could you attach some logs (the cbreak.log file or the output of "cbd logs")?
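For reference, a minimal sketch of collecting this information on the Cloudbreak deployer host; both commands are the ones named above, and the redirect target file name is just an example:

    # run from the Cloudbreak deployment directory
    cbd version                 # prints the Cloudbreak version
    cbd logs > cbreak-logs.txt  # capture the aggregated logs for attaching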
12-03-2017 12:29 PM
@Airawat You are correct; this is only temporary, because a new Cloudbreak release (1.16.5) is coming to the Marketplace in the next few days, once the publish review process is finished. Sorry for the inconvenience.
10-20-2017 08:55 PM
1 Kudo
@Airawat You can burn custom HDP images for Cloudbreak with the help of this repository; there is quite extensive documentation there. RHEL images can be burnt by setting the following parameters properly:
AZURE_IMAGE_PUBLISHER (OpenLogic|RedHat)
AZURE_IMAGE_OFFER (CentOS|RHEL)
AZURE_IMAGE_SKU (6.8|7.2)
The new images can be registered into Cloudbreak without the need to publish them to the Azure Marketplace; a sketch of the build environment follows below. Hope this helps!
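A minimal sketch of setting these parameters for a RHEL 7.2 build; the "make build-azure" target is an assumption here, so check the repository documentation for the actual entry point:

    # assumed environment for burning a RHEL 7.2 image
    export AZURE_IMAGE_PUBLISHER=RedHat   # or OpenLogic for CentOS
    export AZURE_IMAGE_OFFER=RHEL         # or CentOS
    export AZURE_IMAGE_SKU=7.2            # or 6.8
    make build-azure                      # hypothetical build target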
10-17-2017 09:52 AM
@Airawat Your issue is valid, but unfortunately this is currently not possible, as each availability set can be assigned to at most one hostgroup. I have filed a ticket, so this restriction will be removed in a future release.
10-12-2017 04:32 PM
@Anagha Khanolkar Regarding feature timing, unfortunately I cannot make any statements; you should contact your customer's sales or solution engineering contact.
10-12-2017 12:04 PM
2 Kudos
@Amey Hegde There are two ways to achieve this. The most cost-effective is to use the Cloudbreak shell to automate cluster creation and tear-down and schedule it with e.g. cron; you can even start the application with a post recipe (a sketch follows below). The other way is to use time-based autoscaling and keep the minimum number of nodes alive; post recipes will run on upscale. In this case, keep in mind that data nodes should not be subject to downscale. Hope this helps!
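A minimal sketch of the cron-scheduled approach; create-cluster.sh and terminate-cluster.sh are hypothetical wrapper scripts that drive the Cloudbreak shell, not part of Cloudbreak itself:

    # crontab entries: cluster up on weekday mornings, torn down in the evening
    # both scripts are assumed wrappers around the Cloudbreak shell
    0 7  * * 1-5  /opt/scripts/create-cluster.sh
    0 19 * * 1-5  /opt/scripts/terminate-cluster.sh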
10-11-2017 09:12 AM
@Tim Shephard You were right that WebHDFS is not exposed, but the HDFS UI is exposed via the NameNode service, so it should be accessible if you enable "Protected Gateway Access to Cluster Components". If you would like to enable WebHDFS, there is a workaround:
1. SSH to the master node of the cluster.
2. Edit /srv/pillar/gateway/init.sls.
3. Add "WEBHDFS" to gateway:exposed, like below:
    gateway:
      address: 172.21.250.198
      exposed: [WEBHDFS]
      location:
        ....
4. After saving, run salt '*' state.highstate (this regenerates the Knox topology).
Hope this helps!
10-10-2017 05:06 PM
@Tim Shephard No, I did not write anything about WebHDFS in my answer, but actually both the HDFS UI and WebHDFS are supported and should work if checked in the UI. Hope this helps!