Member since: 11-06-2019
Posts: 4
Kudos Received: 0
Solutions: 2

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 1078 | 08-18-2023 05:01 AM |
|  | 3896 | 11-08-2019 06:17 AM |
08-18-2023 05:01 AM
This was caused by me overlooking that "root" is itself a queue, so I never gave it the label access and capacity it needs to pass on to its child queues. The configuration in this writeup tipped me off: https://www.ibm.com/support/pages/yarn-node-labels-label-based-scheduling-and-resource-isolation-hadoop-dev

Here is the full configuration that gives me the desired behaviour:

<configuration>
<property>
<name>yarn.scheduler.capacity.maximum-am-resource-percent</name>
<value>1.0</value>
</property>
<property>
<name>yarn.scheduler.capacity.resource-calculator</name>
<value>org.apache.hadoop.yarn.util.resource.DefaultResourceCalculator</value>
</property>
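<!-- The root queue is itself a queue: it must be able to access the node label and hold full capacity for it, otherwise the child queues cannot be given any label capacity. -->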
<property>
<name>yarn.scheduler.capacity.root.accessible-node-labels</name>
<value>*</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.capacity</name>
<value>100</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.maximum-capacity</name>
<value>100</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.accessible-node-labels.node.capacity</name>
<value>100</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.accessible-node-labels.node.maximum-capacity</name>
<value>100</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.queues</name>
<value>default,spark</value>
</property>
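<!-- "default" queue: restricted to the "node" label and given absolute resources rather than percentages. -->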
<property>
<name>yarn.scheduler.capacity.root.default.capacity</name>
<value>[memory=11776,vcores=4]</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.maximum-capacity</name>
<value>[memory=11776,vcores=4]</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.accessible-node-labels</name>
<value>node</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.default-node-label-expression</name>
<value>node</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.accessible-node-labels.node.capacity</name>
<value>[memory=11776,vcores=4]</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.accessible-node-labels.node.maximum-capacity</name>
<value>[memory=11776,vcores=4]</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.default-application-priority</name>
<value>9</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.default.disable_preemption</name>
<value>true</value>
</property>
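<!-- "spark" queue: also restricted to the "node" label, with a smaller absolute allocation. -->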
<property>
<name>yarn.scheduler.capacity.root.spark.capacity</name>
<value>[memory=4096,vcores=1]</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.spark.maximum-capacity</name>
<value>[memory=4096,vcores=1]</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.spark.accessible-node-labels</name>
<value>node</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.spark.accessible-node-labels.node.capacity</name>
<value>[memory=4096,vcores=1]</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.spark.accessible-node-labels.node.maximum-capacity</name>
<value>[memory=4096,vcores=1]</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.spark.default-application-priority</name>
<value>9</value>
</property>
<property>
<name>yarn.scheduler.capacity.root.spark.disable_preemption</name>
<value>true</value>
</property>
</configuration>
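After editing capacity-scheduler.xml, the scheduler has to reload it before the behaviour changes. A minimal check, assuming a standard Hadoop install with the yarn CLI on the PATH:

# Reload the Capacity Scheduler configuration without restarting the ResourceManager
yarn rmadmin -refreshQueues

# Confirm the "node" label exists and is mapped to the expected NodeManagers
yarn cluster --list-node-labels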
11-08-2019 06:17 AM
This problem is caused by "mapreduce.framework.name=local", which is the default in Hadoop 3.2.1. It was solved with "set mapreduce.framework.name=yarn". A persistent alternative is sketched below.
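To make the fix permanent instead of setting it per session, the same property can go into mapred-site.xml; a minimal sketch, assuming a standard Hadoop 3.x configuration directory:

<configuration>
<property>
<!-- Submit MapReduce jobs to YARN instead of running them in-process ("local") -->
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>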