Member since: 07-25-2019
Posts: 184
Kudos Received: 42
Solutions: 39
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2256 | 03-25-2019 02:25 PM |
| | 1133 | 01-22-2019 02:37 PM |
| | 1272 | 01-16-2019 04:21 PM |
| | 2688 | 10-17-2018 12:22 PM |
| | 1342 | 08-28-2018 08:31 AM |
01-19-2018
02:43 PM
@Paramesh malla You can use an Azure Shared Access Signature (SAS) to restrict access to your VHD by time or by access policy. You can generate the URL with Azure Storage Explorer as documented here, and replace the original URL in arm-images.yml.
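As a minimal sketch of what the substitution looks like: a SAS-protected URL is just the blob URL with the signature token appended as its query string. The storage account, container, and token below are placeholders, not real values; generate your own token with Azure Storage Explorer.

```python
# Placeholders (assumptions): replace with your own VHD blob URL and a SAS
# token generated via Azure Storage Explorer.
vhd_url = "https://mystorageaccount.blob.core.windows.net/images/cloudbreak.vhd"
sas_token = "sv=2017-04-17&sr=b&sp=r&se=2018-02-01T00%3A00%3A00Z&sig=REDACTED"

# The SAS-protected URL is the blob URL with the token as its query string;
# this is the value that would go into arm-images.yml in place of the original.
signed_url = f"{vhd_url}?{sas_token}"
print(signed_url)
```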
12-29-2017
10:04 AM
@Cibi Chakaravarthi If your original question was answered, would you please consider accepting the answer? Thanks!
12-15-2017
01:27 PM
@Airawat If you consider your original question answered, would you please accept the answer? Thanks!
12-12-2017
02:11 PM
@Abhishek Sakhuja Do you have any updates on this one? Have you managed to get it working? If you consider your original question answered, would you please accept the answer?
12-12-2017
01:53 PM
@Airawat Have you had the chance to have a look at this one? Thanks!
12-12-2017
01:52 PM
@Cibi Chakaravarthi I suggest trying the new 1.16.5 version, as this part of the code has been refactored there. The update should not affect your running clusters, and it can be run with a single command: "cbd update". Hope this helps!
12-06-2017
06:22 PM
@Cibi Chakaravarthi I suppose you are using managed disks; if so, this is a known issue that was fixed in Cloudbreak release 1.16.5. You can try to update by following the documentation, or you can launch Cloudbreak 1.16.5 from the Azure Marketplace. Sorry for the inconvenience, and I hope this helps!
12-06-2017
01:34 PM
@Airawat I've just double-checked the new Cloudbreak 1.16.5 version in the Marketplace, and it seems to have the correct version. Could you please verify? Thanks!
12-06-2017
01:00 PM
@Cibi Chakaravarthi There should be some useful information in the logs (they contain no sensitive data), so please attach them to the case so we can investigate.
12-06-2017
10:49 AM
@Abhishek Sakhuja The error message indicates that your HDFS is running out of space. The amount of free space is fetched from Ambari and calculated as follows:

    def Map<String, Map<Long, Long>> getDFSSpace() {
        def result = [:]
        def response = utils.slurp("clusters/${getClusterName()}/services/HDFS/components/NAMENODE", 'metrics/dfs')
        log.info("Returned metrics/dfs: {}", response)
        def liveNodes = slurper.parseText(response?.metrics?.dfs?.namenode?.LiveNodes as String)
        if (liveNodes) {
            liveNodes.each {
                if (it.value.adminState == 'In Service') {
                    result << [(it.key.split(':')[0]): [(it.value.remaining as Long): it.value.usedSpace as Long]]
                }
            }
        }
        result
    }

Please check the Ambari UI; it may be that Ambari is calculating the free space incorrectly. Hope this helps!
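For readers unfamiliar with Groovy, the same calculation can be sketched in Python. The sample LiveNodes payload below is illustrative only, not real Ambari output; in practice the value arrives as a JSON string inside the metrics/dfs response.

```python
import json

# Illustrative sample of the NameNode's LiveNodes metric (assumed shape:
# "host:port" keys mapping to per-datanode state; real Ambari output differs).
live_nodes_json = json.dumps({
    "host1:50010": {"adminState": "In Service", "remaining": 1000, "usedSpace": 500},
    "host2:50010": {"adminState": "Decommissioned", "remaining": 0, "usedSpace": 900},
})

def get_dfs_space(live_nodes_text):
    """Map each in-service datanode's hostname to {remaining: usedSpace},
    mirroring the Groovy logic: skip nodes not 'In Service' and strip the
    port from the 'host:port' key."""
    result = {}
    for key, value in json.loads(live_nodes_text).items():
        if value["adminState"] == "In Service":
            host = key.split(":")[0]
            result[host] = {int(value["remaining"]): int(value["usedSpace"])}
    return result

print(get_dfs_space(live_nodes_json))
# Only host1 is counted; the decommissioned node is skipped.
```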