01-24-2017 08:08 PM
If one submits a job to a Hadoop cluster without explicitly using YARN (for example, via the Spark shell, Hive shell, HBase shell, Pig, MapReduce with the hadoop command, Impala, the Hue interface, etc.), is the job still scheduled and controlled by YARN? Can one rely on everything going through YARN?
In YARN, can I partition my users into several queues, with different priorities or different amounts of resources allocated to each queue?
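For context, this kind of user-to-queue partitioning is typically configured through YARN's Capacity Scheduler (or the Fair Scheduler, depending on the distribution). A minimal capacity-scheduler.xml sketch of what I have in mind, where the queue names, percentages, and user mappings are just hypothetical examples:

```xml
<!-- capacity-scheduler.xml: hypothetical two-queue layout under root -->
<configuration>
  <!-- Define two child queues of root -->
  <property>
    <name>yarn.scheduler.capacity.root.queues</name>
    <value>prod,dev</value>
  </property>
  <!-- Give prod 70% and dev 30% of cluster capacity -->
  <property>
    <name>yarn.scheduler.capacity.root.prod.capacity</name>
    <value>70</value>
  </property>
  <property>
    <name>yarn.scheduler.capacity.root.dev.capacity</name>
    <value>30</value>
  </property>
  <!-- Route specific users to queues (u:user:queue syntax) -->
  <property>
    <name>yarn.scheduler.capacity.queue-mappings</name>
    <value>u:alice:prod,u:bob:dev</value>
  </property>
</configuration>
```

Is something along these lines the intended way to do it, and does it apply regardless of which shell or tool submitted the job?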
01-26-2017 10:58 PM