Member since: 02-09-2018
Posts: 10
Kudos Received: 0
Solutions: 0
03-20-2018
02:58 PM
Thank you @Veerendra Nath Jasthi. While adding queues I had missed the "yarn.scheduler.capacity.root.accessible-node-labels.<newnodelabelname>.capacity" property, which was causing the issue.
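For anyone hitting the same error, a minimal sketch of the capacity-scheduler properties that generally have to be in place when a queue is given access to a node label; the queue name "myqueue", the label name "mylabel", and the capacity values are placeholders, and the exact set of properties can vary by HDP version:

yarn.scheduler.capacity.root.queues=default,myqueue
yarn.scheduler.capacity.root.myqueue.capacity=50
yarn.scheduler.capacity.root.myqueue.accessible-node-labels=mylabel
yarn.scheduler.capacity.root.accessible-node-labels.mylabel.capacity=100
yarn.scheduler.capacity.root.myqueue.accessible-node-labels.mylabel.capacity=100

Note that, as with the default partition, the per-label capacities of the child queues under root must add up to 100 for each label.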
03-16-2018
07:56 PM
@Veerendra Nath Jasthi Since I am adding queues programmatically, I am hesitant to call the Views API. I am calling the refresh queues API as follows, but I still need to do the save and refresh manually.

curl -u uname:pwd -H "X-Requested-By:ambari" -iX POST -d '{
  "RequestInfo" : {
    "command" : "REFRESHQUEUES",
    "context" : "Refresh YARN Capacity Scheduler",
    "operation_level" : {
      "level": "HOST_COMPONENT",
      "cluster_name": "<clustername>"
    },
    "parameters/forceRefreshConfigTags" : "capacity-scheduler"
  },
  "Requests/resource_filters": [{
    "service_name" : "YARN",
    "component_name" : "RESOURCEMANAGER",
    "hosts" : "<hostname>"
  }]
}' http://<hostip>:<port>/api/v1/clusters/<clustername>/requests
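Since the queues are being added programmatically anyway, here is a rough sketch of issuing the same REFRESHQUEUES request from Java instead of curl. It is only an illustration using the standard java.net.http client (Java 11+), not an official Ambari client; the host, port, credentials, cluster name, and host name are placeholders to substitute before running.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.Base64;

public class RefreshQueues {
    public static void main(String[] args) throws Exception {
        // Same request body the curl call above sends; placeholders must be replaced.
        String body = "{"
                + "\"RequestInfo\": {"
                + "  \"command\": \"REFRESHQUEUES\","
                + "  \"context\": \"Refresh YARN Capacity Scheduler\","
                + "  \"operation_level\": {\"level\": \"HOST_COMPONENT\", \"cluster_name\": \"<clustername>\"},"
                + "  \"parameters/forceRefreshConfigTags\": \"capacity-scheduler\""
                + "},"
                + "\"Requests/resource_filters\": [{"
                + "  \"service_name\": \"YARN\","
                + "  \"component_name\": \"RESOURCEMANAGER\","
                + "  \"hosts\": \"<hostname>\""
                + "}]}";

        // Basic auth header built from the same credentials the curl call uses.
        String auth = Base64.getEncoder()
                .encodeToString("uname:pwd".getBytes(StandardCharsets.UTF_8));

        // Replace the placeholders with real values before running.
        String url = "http://<hostip>:<port>/api/v1/clusters/<clustername>/requests";

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(url))
                .header("Authorization", "Basic " + auth)
                .header("X-Requested-By", "ambari")
                .POST(HttpRequest.BodyPublishers.ofString(body))
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + " " + response.body());
    }
}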
03-16-2018
06:16 PM
Hi, is it working for you? I tried the same thing: I can see the new queue configuration in the Ambari Queue Manager UI, but I need to manually click "Save and refresh". I see the following warning in the Ambari server log: "AbstractResourceProvider:506 - Can not determine request operation level. Operation level property should be specified for this request."
03-02-2018
06:41 PM
After some research into the Cloudbreak code base, I found that if we decommission hosts from Ambari and then trigger a downscale in Cloudbreak, it deletes the decommissioned hosts first. So, in the above scenario, after executing all the Ambari APIs, using the Cloudbreak downscale API automatically deletes the hosts we decommissioned. The following is the code from Cloudbreak that selects the instances to downscale.

// Cloudbreak walks the instance metadata and selects hosts that are already
// decommissioned, unregistered, just created, or failed (never the Ambari server),
// collecting at most |scalingAdjustment| instances to remove.
for (InstanceMetaData metaData : instanceMetaDatas) {
    if (!metaData.getAmbariServer()
            && (metaData.isDecommissioned() || metaData.isUnRegistered() || metaData.isCreated() || metaData.isFailed())) {
        instanceIds.put(metaData.getInstanceId(), metaData.getDiscoveryFQDN());
        // scalingAdjustment is negative for a downscale, so stop once enough hosts are selected
        if (++i >= scalingAdjustment * -1) {
            break;
        }
    }
}
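For completeness, a rough sketch of the downscale call itself, assuming the same stackEndpoint.put(stackId, InstanceGroupAdjustmentJson) style of call used for scaling; the class and method names may differ between Cloudbreak client versions, so treat this as an outline rather than the exact client API.

// Outline only: assumes a Cloudbreak client stack endpoint; names are illustrative.
InstanceGroupAdjustmentJson adjustment = new InstanceGroupAdjustmentJson();
adjustment.setInstanceGroup("worker");   // instance group to shrink (placeholder)
adjustment.setScalingAdjustment(-1);     // negative value means downscale

// After decommissioning and deleting the host in Ambari, this should remove the
// decommissioned instance, per the selection logic quoted above.
stackEndpoint.put(stackId, adjustment);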
03-01-2018
10:19 PM
Hi, I am trying to delete a specific host from the cluster (downscale with a specific instance) and terminate the instance. I used the Ambari REST API (https://cwiki.apache.org/confluence/display/AMBARI/Using+APIs+to+delete+a+service+or+all+host+components+on+a+host) to decommission the NodeManager, stop all components, delete all components, and delete the host, and then used the Cloudbreak API to terminate the instance with the stackId and instanceId. But it is failing because the host is in a healthy state (com.sequenceiq.cloudbreak.service.stack.flow.ScalingFailedException: Host (xxxx) is in HEALTHY state. Cannot be removed.). I am not sure how Cloudbreak updates the host status. Is it hitting any Ambari API to know the status? Is there any API available to delete the host from Cloudbreak? Thanks
Labels:
- Hortonworks Cloudbreak
02-27-2018
10:21 PM
Thank you @rkovacs. Just wondering, is there any synchronous API available? Right now I am using the stack response API (stackresponse.instancegroup.metadata.privateip) before and after the upscale and finding the difference between the two lists to get the new IP addresses.
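For anyone doing the same, a small sketch of that diff approach. The getter names (getInstanceGroups, getMetadata, getPrivateIp) and the stackEndpoint.get call follow the stackresponse.instancegroup.metadata.privateip fields mentioned above but may not match the exact Cloudbreak client classes in every version, so treat them as assumptions.

import java.util.HashSet;
import java.util.Set;

// Collect every private IP that appears in a stack response.
static Set<String> privateIps(StackResponse stack) {
    Set<String> ips = new HashSet<>();
    stack.getInstanceGroups().forEach(group ->
            group.getMetadata().forEach(meta -> ips.add(meta.getPrivateIp())));
    return ips;
}

// Usage: snapshot before the upscale, wait until the stack is available again,
// then the set difference is the newly created instances.
Set<String> before = privateIps(stackEndpoint.get(stackId));
// ... trigger the upscale and wait for it to finish ...
Set<String> after = privateIps(stackEndpoint.get(stackId));
after.removeAll(before);   // "after" now holds only the new private IPs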
02-27-2018
07:33 PM
Hi, we are using the Cloudbreak API to upscale the cluster with stackendpoint.put(stackId, InstanceGroupAdjustmentJson). Is there a way to know the IP addresses of the newly created instances, such as a synchronous API or REST API that returns the details of the new EC2 instances created as part of upscaling?
Labels:
- Hortonworks Cloudbreak
02-14-2018
05:55 PM
Hi ... Thank you for the post. Is there a way to add node labels and queues through a Java API? We are planning to add node labels and queues on demand based on job submission.
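In case it helps while a Java route is investigated, node labels can at least be scripted; a small sketch using the standard yarn rmadmin commands (the label name and node host/port are placeholders, and node labels must already be enabled on the cluster). The queue side is then configured through the capacity-scheduler accessible-node-labels properties.

# create an exclusive node label (placeholder name "mylabel")
yarn rmadmin -addToClusterNodeLabels "mylabel(exclusive=true)"

# assign the label to a specific node (placeholder host and port)
yarn rmadmin -replaceLabelsOnNode "<nodehost>:<port>=mylabel"

# verify the labels known to the cluster
yarn cluster --list-node-labels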
02-12-2018
03:24 PM
Thank you for your response. I understand that we can use queues, but here I am trying to create nodes on demand and assign labels (exclusive=true) to those nodes, so that my job runs on the specified nodes. If I create queues on demand, I need to recalculate the capacity of all the existing queues (I am not sure there is an easy way to do this). That is the reason I want to stick with the same queue but with different node labels.
02-09-2018
07:01 PM
As per this link, https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.4/bk_yarn-resource-management/content/using_node_labels.html, node labels are not supported for MapReduce jobs. Are there any plans to support node labels for MapReduce jobs?
Labels:
- Apache Hadoop