How do I configure the FAIR scheduler with Spark-Jobserver?

When I post simultaneous jobserver requests, they always seem to be processed in FIFO mode. This is despite my best efforts to enable the FAIR scheduler. How can I ensure that my requests are always processed in parallel?

Background: On my cluster there is one SparkContext to which users can post requests to process data. Each request may act on a different chunk of data but the operations are always the same. A small one-minute job should not have to wait for a large one-hour job to finish.

Intuitively I would expect the following to happen (see my configuration below): The context runs within a FAIR pool. Every time a user sends a request to process some data, Spark should split up the fair pool and give a fraction of the cluster resources to process that new request. Each request is then run in FIFO mode parallel to any other concurrent requests.

Here's what actually happens when I run simultaneous jobs: The interface says "1 Fair Scheduler Pools" and it lists one active (FIFO) pool named "default." It seems that everything is executing within the same FIFO pool, which itself is running alone within the FAIR pool. I can see that my fair pool details are loaded correctly on Spark's Environment page, but my requests are all processed in FIFO fashion.

How do I configure my environment/application so that every request actually runs in parallel to others? Do I need to create a separate context for each request? Do I create an arbitrary number of identical FIFO pools within my FAIR pool and then somehow pick an empty pool every time a request is made? Considering the objectives of Jobserver, it seems like this should all be automatic and not very complicated to set up.

3 REPLIES

Re: How do I configure the FAIR scheduler with Spark-Jobserver?

@kishore sanchina

Jobs run FIFO within a pool, and the cluster is divided up across the pools. If you have only one pool, only one job will execute at a time. If you want more than one job to run concurrently (e.g., interactive vs. batch), you will need to set up separate pools:

https://spark.apache.org/docs/latest/job-scheduling.html

Default Behavior of Pools

By default, each pool gets an equal share of the cluster (also equal in share to each job in the default pool), but inside each pool, jobs run in FIFO order. For example, if you create one pool per user, this means that each user will get an equal share of the cluster, and that each user’s queries will run in order instead of later queries taking resources from that user’s earlier ones.
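For concreteness, a minimal sketch of a pool configuration in the format of Spark's bundled conf/fairscheduler.xml.template might look like the following (the pool names, weights, and minimum shares here are illustrative placeholders, not from this thread). Spark is pointed at the file via the spark.scheduler.allocation.file property:

> <?xml version="1.0"?>
> <allocations>
>   <!-- Jobs submitted to this pool get a fair share relative to other pools -->
>   <pool name="interactive">
>     <schedulingMode>FAIR</schedulingMode>
>     <weight>1</weight>
>     <minShare>2</minShare>
>   </pool>
>   <!-- Within this pool, jobs run one after another in submission order -->
>   <pool name="batch">
>     <schedulingMode>FIFO</schedulingMode>
>     <weight>1</weight>
>     <minShare>0</minShare>
>   </pool>
> </allocations>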

Re: How do I configure the FAIR scheduler with Spark-Jobserver?

By default, Spark’s scheduler runs jobs in FIFO fashion.

It is also possible to configure fair sharing between jobs.

To enable the fair scheduler, simply set the spark.scheduler.mode property to FAIR when configuring a SparkContext:

> val conf = new SparkConf().setMaster(...).setAppName(...)
> conf.set("spark.scheduler.mode", "FAIR")
> val sc = new SparkContext(conf)
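
Note that enabling FAIR mode alone still leaves every job in the single "default" pool, which matches the behavior described in the question. To route a given request into a specific pool, set the spark.scheduler.pool local property on the thread that submits the job. A minimal sketch (the pool name "interactive" is a placeholder matching a pool defined in fairscheduler.xml):

> // Runs on the thread handling one user request; the local property
> // only affects jobs submitted from this thread.
> sc.setLocalProperty("spark.scheduler.pool", "interactive")
> // ... trigger actions here; their jobs are scheduled in the "interactive" pool ...
> sc.setLocalProperty("spark.scheduler.pool", null) // reset this thread to the default pool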

Re: How do I configure the FAIR scheduler with Spark-Jobserver?

You may have to edit your spark-defaults.conf on the job server machine to specify fair scheduling.
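
As a sketch, the relevant lines in spark-defaults.conf would look something like this (the allocation-file path is a placeholder):

> # Enable fair scheduling for the job server's SparkContext
> spark.scheduler.mode FAIR
> # Optional: define named pools in an allocation file
> spark.scheduler.allocation.file /path/to/fairscheduler.xml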

Btw, you may want to try out Livy in HDP 2.5. It's a REST API server for submitting batch and interactive Spark jobs, and it's under active development by major vendors. In HDP 2.5 you can enable Livy via Ambari in the Spark service. Direct REST access is not yet supported in HDP 2.5, but you can try it out to see if it meets your needs; in HDP 2.5 only Zeppelin via Livy is officially supported.
