
Job cannot share most of the non-exclusive nodelabel


Explorer

Use case:

Environment: HDP 2.3.4.7, 5 nodes. I want to use 2 high-spec machines to run Spark jobs, while all 5 nodes can run MapReduce jobs.

My design is to use a node label in non-exclusive mode:

yarn rmadmin -addToClusterNodeLabels "high(exclusive=false)"
yarn rmadmin -replaceLabelsOnNode "node4=high node5=high"
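As a sanity check (using standard YARN CLI commands; the node names are the ones from my cluster), the label setup can be verified like this:

```shell
# List cluster node labels; "high" should appear with Exclusivity=false,
# which is what allows idle labeled capacity to be shared.
yarn cluster --list-node-labels

# Show the status of one of the labeled nodes to confirm the
# "high" label is attached to it.
yarn node -status node4
```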

And I set up two queues: one called mr, and another one called spark.

When I submit a job to mr and the high-labeled nodes are idle, can my job use all of their resources?

My test result is different. When I ran a heavy job, the 3 unlabeled nodes were used up quickly, but the two high-labeled nodes only started one or two containers (about 1% utilization). I checked the capacity-scheduler configuration, and the queue should be able to use more than 50%.

I am not sure how to configure the sharing behavior now...
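For reference, this is the kind of capacity-scheduler.xml wiring that non-exclusive sharing depends on (queue names mr and spark are from my setup; the capacity values are illustrative assumptions, not my exact settings). The mr queue deliberately gets no access to the label, so it can only borrow idle "high" capacity opportunistically:

```xml
<!-- Total "high" label capacity available under root. -->
<property>
  <name>yarn.scheduler.capacity.root.accessible-node-labels.high.capacity</name>
  <value>100</value>
</property>

<!-- Only the spark queue may explicitly request the "high" label. -->
<property>
  <name>yarn.scheduler.capacity.root.spark.accessible-node-labels</name>
  <value>high</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.spark.accessible-node-labels.high.capacity</name>
  <value>100</value>
</property>

<!-- Spark jobs land on labeled nodes by default, without asking. -->
<property>
  <name>yarn.scheduler.capacity.root.spark.default-node-label-expression</name>
  <value>high</value>
</property>
```

With exclusive=false on the label itself, containers from the mr queue should be scheduled onto idle node4/node5 capacity and preempted when spark jobs need it back.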

2 Replies

Re: Job cannot share most of the non-exclusive nodelabel

Super Guru

Re: Job cannot share most of the non-exclusive nodelabel

Explorer

@Kuldeep Kulkarni, thank you very much, but I have read that document more than 10 times and applied all the same settings, but...