In this article:

- Demonstrate how Hive sessions work

- Clarify some common misunderstandings about Hive session behavior (most people, myself included until recently, believe Hive sessions and queue allocation work differently than they actually do)

- Propose a solution for small-to-medium environments (up to ~200 nodes) that allows concurrent users + multi-tenancy + soft allocation + preemption

Assumptions:

Tested with HDP 2.4.0

I'm using the configuration suggested in best practices and in the docs on security and Hive performance tuning (http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_performance_tuning/content/ch_hive_archit...), which includes the following settings:

- hive engine (hive.execution.engine) = tez

- hive do-as (hive.server2.enable.doAs) = false

- hive default queues (hive.server2.tez.default.queues) = (queue-name1,queue-name2,etc)

- hive number of sessions (hive.server2.tez.sessions.per.default.queue) = 1 (or up to 4)

- hive start sessions (hive.server2.tez.initialize.default.sessions) = true

- hive pre-warm containers (hive.prewarm.enabled) = true

- hive num-containers (hive.prewarm.numcontainers) = (some number >1)

- tez container reuse (tez.am.container.reuse.enabled) = true

- all examples here relate to HiveServer2 + a JDBC client (Beeline, another JDBC client, or the ODBC driver); they do not apply to Hive CLI or other tools that interact directly with the Hive metastore

Understanding #1 - default queues + number of sessions:

When you define hive default queues (hive.server2.tez.default.queues), hive number of sessions (hive.server2.tez.sessions.per.default.queue), and hive start sessions (hive.server2.tez.initialize.default.sessions) = true, HiveServer2 will create one Tez AM (Application Master) for each default queue × number of sessions when the HiveServer2 service starts.

For example, if you define default queues = "queue1,queue2" and number of sessions = 2, HiveServer2 will create 4 Tez AMs (2 for queue1 and 2 for queue2).

If HiveServer2 is in continuous use, those Tez AMs will keep running; but if HiveServer2 is idle, those Tez AMs will be killed based on the timeout defined by tez.session.am.dag.submit.timeout.secs.
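As a sanity check, the size of the AM pool is simply the product of the two settings. A trivial sketch with the example values above (illustrative only, not HiveServer2 code):

```python
# Sketch: how many Tez AMs the pool holds after HiveServer2 starts.
# Values mirror the "queue1,queue2" example above.
default_queues = ["queue1", "queue2"]   # hive.server2.tez.default.queues
sessions_per_queue = 2                  # hive.server2.tez.sessions.per.default.queue

pool_size = len(default_queues) * sessions_per_queue
print(pool_size)  # 4 Tez AMs: 2 for queue1 and 2 for queue2
```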

See animation #1 below and watch the sessions being initialized after HiveServer2 starts:

Understanding #2 - pre-warm containers

Pre-warming containers is different from session initialization.

The number of containers is the number of YARN execution containers (not counting the AM container) that will be attached to each Tez AM by default. Each AM holds this number of containers even when it is idle (not executing queries).

Check animation #1 again (above), this time paying attention to the number of containers in each AM started; notice that the number of containers will always be the number you defined in "Number of Containers Held" + 1 (the extra container runs the AM - the Application Master).
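So the YARN footprint of each idle AM can be estimated with simple arithmetic (illustrative only; the value 3 is a made-up example):

```python
# Sketch: YARN containers held per idle Tez AM when pre-warm is enabled.
# hive.prewarm.numcontainers counts only execution containers; the AM
# itself occupies one more YARN container, hence the "+ 1".
prewarm_containers = 3                  # hive.prewarm.numcontainers (example value)
yarn_containers_per_am = prewarm_containers + 1
print(yarn_containers_per_am)  # 4: 3 pre-warmed executors + 1 AM container
```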

Understanding #3 - when the initialized Tez AMs (let's call them the AM pool) are used

A query will ONLY use a Tez AM from the pool (initialized as described above) if you do NOT specify a queue name (tez.queue.name). In that case, HiveServer2 will pick one of the idle/available Tez AMs (the queue name here may be randomly selected). If you execute multiple queries over the same JDBC/ODBC connection, each query may be executed by a different Tez AM, and it can also be executed in a different queue.

If you specify a queue name in your JDBC/ODBC connection, HiveServer2 will create a new Tez AM for that connection and will NOT use any of the initialized sessions.

PS: only JDBC/ODBC clients can use the initialized Tez AMs; Hive CLI (command line) and other external Hive components don't use them.
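The routing rule above can be sketched in Python (illustrative only; this is not HiveServer2 internals, and the pool structure is made up for the example):

```python
# Illustrative sketch of Understanding #3: a query uses the AM pool only when
# tez.queue.name is unset; otherwise HiveServer2 starts a brand-new Tez AM.
import random

def route_query(am_pool, tez_queue_name=None):
    """Return ("pool", am) for an idle pooled AM, or ("new", queue) for a fresh AM."""
    if tez_queue_name is None:
        idle = [am for am in am_pool if am["idle"]]
        return ("pool", random.choice(idle))   # queue is effectively random
    return ("new", tez_queue_name)             # pool is bypassed entirely

pool = [{"queue": "queue1", "idle": True}, {"queue": "queue2", "idle": True}]
print(route_query(pool))             # pooled AM; either queue may be picked
print(route_query(pool, "etl"))      # ('new', 'etl'): dedicated AM for this connection
```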

See animation #2 below and observe the query using one of the AMs from the pool (queue.name was not set):

See animation #3 below and observe the query NOT using one of the AMs from the pool; it is starting a new AM (queue.name was set):

Understanding #4 - what if you have more concurrent queries than the number of Tez AMs initialized?

If you do NOT specify a queue name (as described above), HiveServer2 will hold your query until one of the default Tez AMs (the pool) is available to serve it. There will be no message in the JDBC/ODBC client nor in the HiveServer2 log file. An uninformed user (or admin) may think the JDBC/ODBC connection or HiveServer2 is broken, but it is just waiting for a Tez AM to execute the query.

If you DO specify a queue name, it doesn't matter how many initialized Tez AMs are in use or idle; HiveServer2 will create a new Tez AM for the connection and the query can be executed (if the queue has available resources).
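This silent waiting can be modeled with a semaphore sized to the AM pool (an illustrative sketch, not HiveServer2 code; names and timings are invented):

```python
# Sketch of Understanding #4: with no tez.queue.name set, extra queries simply
# block until a pooled Tez AM frees up, with no message anywhere.
import threading
import time

POOL_SIZE = 2                          # e.g. 2 queues x 1 session per queue
am_pool = threading.BoundedSemaphore(POOL_SIZE)
finished = []

def run_query(name, seconds):
    with am_pool:                      # waits here, silently, when all AMs are busy
        time.sleep(seconds)            # pretend the query runs for a while
        finished.append(name)

threads = [threading.Thread(target=run_query, args=(f"q{i}", 0.05)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(finished)                        # all 3 ran, but the third had to wait for a free AM
```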

See animation #4 below and observe a query on hold due to the lack of an available Tez AM to execute it:

Understanding #5 - how to make a custom queue name (defined by the user) use the AM pool

There is no way to allow users to specify a queue name and at the same time use the Tez AM pool (initialized sessions). If your use case requires a different/dedicated Tez AM pool for each group of users, you need dedicated HiveServer2 services, each with its respective default queue names and number of sessions, and you must ask each group of users to use its respective HiveServer2.

Understanding #6 - number of containers for a single query

The number of containers each query will use is determined as described here (https://cwiki.apache.org/confluence/display/TEZ/How+initial+task+parallelism+works), which considers the resources available in the current queue. The resources available in a queue are defined by the minimum guaranteed capacity (yarn.scheduler.capacity.root.<queue-name>.capacity), not the maximum capacity (yarn.scheduler.capacity.root.<queue-name>.maximum-capacity).

In other words, if you are executing a heavy query, it will probably use all the resources available in the queue, leaving no room for other queries to execute.
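To see why the guaranteed capacity matters, here is a rough back-of-the-envelope calculation (all numbers are made-up examples; hive.tez.container.size is the usual knob for the container size):

```python
# Sketch of Understanding #6: initial task parallelism is bounded by the
# queue's *guaranteed* capacity, not its maximum capacity.
cluster_memory_mb = 100 * 1024     # total YARN memory (example)
queue_capacity_pct = 40            # yarn.scheduler.capacity.root.<queue-name>.capacity
container_size_mb = 4 * 1024       # hive.tez.container.size (example)

guaranteed_mb = cluster_memory_mb * queue_capacity_pct // 100
max_initial_containers = guaranteed_mb // container_size_mb
print(max_initial_containers)      # 10: a heavy query can grab all of these at once
```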

See animation #5 below and observe first query using all resources from queue and following queries suffering resource starvation:

Understanding #7 - capacity scheduler ordering policy (FIFO or FAIR) and preemption

There is no preemption inside a queue, and the FAIR ordering policy only takes effect when containers are released by the AM. If you are reusing containers (tez.am.container.reuse.enabled=true), as recommended, containers will only be released by the AM when the query finishes; if the first query being executed is heavy, all subsequent queries will wait for it to finish before receiving their "fair" share.

See animation #5 again and notice the first query using all resources even with the FAIR ordering policy.

Understanding #8 - a scenario where it all works together

My understanding is that all these pieces and settings were initially designed, and are very well optimized, for the following scenario:

1- Large Cluster with lots of resources dedicated to queries

2- High number of concurrent queries/users

3- Multiple hiveserver2 instances, for example, one for light queries, other for heavy queries

4- Fixed number of sessions (Tez AM) and fixed number of containers for each session, all of them pre-warmed (these numbers can be different for each type of query / hiveserver2, as described in #3).

5- No need for FAIR ordering policy or preemption because all the queries always use only the pre-warmed containers

Small/Medium cluster most common needs

Although the scenario described in #8 is very interesting (and Hive+Tez is the best solution to run at that scale), it is not the reality in most small-to-medium Hadoop clusters, usually found during the Hadoop adoption phase and in POCs.

In my experience, what most users of such environments need is much simpler, something like this:

1- Multi-tenant cluster, say Production and User layers sharing resources

2- Soft limits and preemption (at night, when users are off, give 100% of resources to production; during the day keep production running with the minimum allocation needed to meet SLAs, but give most of the power to users)

3- Fair allocation (if just one user is running a query, give them all the resources; as soon as a second, third, and subsequent users start executing queries, give them equal shares of resources, while also respecting the queues' minimum shares, which represent priorities)
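The fair-allocation behavior wanted in item 3 can be expressed as simple arithmetic (purely illustrative; the Capacity Scheduler's real algorithm is more involved):

```python
# Sketch of need #3: fair division that still respects per-queue minimum shares.
def fair_shares(total, minimums):
    """Give each queue its minimum share, then split the remainder equally."""
    remainder = total - sum(minimums)
    equal = remainder / len(minimums)
    return [m + equal for m in minimums]

# 100 units of capacity, production guaranteed 40, user guaranteed 10:
print(fair_shares(100, [40, 10]))  # [65.0, 35.0]
```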

Proposed solution / settings

Hive+Tez is a perfect fit for both scenarios described above, but now I'm going to share details on how to adjust settings for the "Small/Medium cluster most common needs":

1- YARN - Enable preemption as described here ( http://hortonworks.com/blog/better-slas-via-resource-preemption-in-yarns-capacityscheduler/):

my settings for YARN are:

yarn.resourcemanager.scheduler.monitor.enable=true
yarn.resourcemanager.monitor.capacity.preemption.max_ignored_over_capacity=0.01
yarn.resourcemanager.monitor.capacity.preemption.max_wait_before_kill=1000
yarn.resourcemanager.monitor.capacity.preemption.monitoring_interval=1000
yarn.resourcemanager.monitor.capacity.preemption.natural_termination_factor=1
yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round=1
yarn.resourcemanager.scheduler.monitor.policies=org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy

2- Capacity Scheduler - Create one queue for each concurrent session; for example, if you want 3 concurrent users, create 3 queues (this is needed to achieve full preemption):

yarn.scheduler.capacity.maximum-am-resource-percent=0.2
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.node-locality-delay=40
yarn.scheduler.capacity.queue-mappings-override.enable=false
yarn.scheduler.capacity.root.accessible-node-labels=*
yarn.scheduler.capacity.root.acl_administer_queue=*
yarn.scheduler.capacity.root.capacity=100
yarn.scheduler.capacity.root.default.acl_submit_applications=*
yarn.scheduler.capacity.root.default.capacity=30
yarn.scheduler.capacity.root.default.maximum-capacity=100
yarn.scheduler.capacity.root.default.state=RUNNING
yarn.scheduler.capacity.root.default.user-limit-factor=10
yarn.scheduler.capacity.root.hive-test.acl_administer_queue=*
yarn.scheduler.capacity.root.hive-test.acl_submit_applications=*
yarn.scheduler.capacity.root.hive-test.capacity=70
yarn.scheduler.capacity.root.hive-test.maximum-capacity=70
yarn.scheduler.capacity.root.hive-test.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.hive-test.ordering-policy=fifo
yarn.scheduler.capacity.root.hive-test.production.acl_administer_queue=*
yarn.scheduler.capacity.root.hive-test.production.acl_submit_applications=*
yarn.scheduler.capacity.root.hive-test.production.capacity=40
yarn.scheduler.capacity.root.hive-test.production.maximum-capacity=100
yarn.scheduler.capacity.root.hive-test.production.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.hive-test.production.ordering-policy=fifo
yarn.scheduler.capacity.root.hive-test.production.state=RUNNING
yarn.scheduler.capacity.root.hive-test.production.user-limit-factor=100
yarn.scheduler.capacity.root.hive-test.queues=production,user
yarn.scheduler.capacity.root.hive-test.state=RUNNING
yarn.scheduler.capacity.root.hive-test.user-limit-factor=10
yarn.scheduler.capacity.root.hive-test.user.acl_administer_queue=*
yarn.scheduler.capacity.root.hive-test.user.acl_submit_applications=*
yarn.scheduler.capacity.root.hive-test.user.capacity=60
yarn.scheduler.capacity.root.hive-test.user.maximum-capacity=100
yarn.scheduler.capacity.root.hive-test.user.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.hive-test.user.ordering-policy=fifo
yarn.scheduler.capacity.root.hive-test.user.queues=user1,user2,user3
yarn.scheduler.capacity.root.hive-test.user.state=RUNNING
yarn.scheduler.capacity.root.hive-test.user.user-limit-factor=100
yarn.scheduler.capacity.root.hive-test.user.user1.acl_administer_queue=*
yarn.scheduler.capacity.root.hive-test.user.user1.acl_submit_applications=*
yarn.scheduler.capacity.root.hive-test.user.user1.capacity=33
yarn.scheduler.capacity.root.hive-test.user.user1.maximum-capacity=100
yarn.scheduler.capacity.root.hive-test.user.user1.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.hive-test.user.user1.ordering-policy=fifo
yarn.scheduler.capacity.root.hive-test.user.user1.state=RUNNING
yarn.scheduler.capacity.root.hive-test.user.user1.user-limit-factor=10
yarn.scheduler.capacity.root.hive-test.user.user2.acl_administer_queue=*
yarn.scheduler.capacity.root.hive-test.user.user2.acl_submit_applications=*
yarn.scheduler.capacity.root.hive-test.user.user2.capacity=33
yarn.scheduler.capacity.root.hive-test.user.user2.maximum-capacity=100
yarn.scheduler.capacity.root.hive-test.user.user2.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.hive-test.user.user2.ordering-policy=fifo
yarn.scheduler.capacity.root.hive-test.user.user2.state=RUNNING
yarn.scheduler.capacity.root.hive-test.user.user2.user-limit-factor=10
yarn.scheduler.capacity.root.hive-test.user.user3.acl_administer_queue=*
yarn.scheduler.capacity.root.hive-test.user.user3.acl_submit_applications=*
yarn.scheduler.capacity.root.hive-test.user.user3.capacity=34
yarn.scheduler.capacity.root.hive-test.user.user3.maximum-capacity=100
yarn.scheduler.capacity.root.hive-test.user.user3.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.hive-test.user.user3.ordering-policy=fifo
yarn.scheduler.capacity.root.hive-test.user.user3.state=RUNNING
yarn.scheduler.capacity.root.hive-test.user.user3.user-limit-factor=10
yarn.scheduler.capacity.root.queues=default,hive-test
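As a quick sanity check on a layout like the one above, sibling queue capacities at each level must sum to 100; a short script can verify this (values copied from the listing):

```python
# Sanity check for the capacity-scheduler block above: at every level of the
# queue hierarchy, the children's guaranteed capacities must sum to 100.
levels = {
    "root": {"default": 30, "hive-test": 70},
    "root.hive-test": {"production": 40, "user": 60},
    "root.hive-test.user": {"user1": 33, "user2": 33, "user3": 34},
}
for parent, children in levels.items():
    total = sum(children.values())
    assert total == 100, f"{parent} children sum to {total}, expected 100"
print("capacities OK")
```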

3- Hive/Tez - Define default queues and initialize 1 session per queue:

hive.server2.tez.default.queues=user1,user2,user3
hive.server2.tez.sessions.per.default.queue=1
hive.server2.tez.initialize.default.sessions=true
hive.prewarm.enabled=true
hive.prewarm.numcontainers=2
tez.am.container.reuse.enabled=true

Proposed solution - settings demo:

Proposed solution - live demo:

Finally, it can also be useful to increase the release timeout for idle containers in the Tez settings; this keeps all containers running for a short period after the query finishes (not only the pre-warm containers, but also the extra containers launched by the AM), making subsequent queries' initialization faster. A suggestion would be:

tez.am.container.idle.release-timeout-min.millis=30000

tez.am.container.idle.release-timeout-max.millis=90000

PS: If you have multiple business units, each with different resource sharing/priorities, you need a dedicated HiveServer2 for each business unit. For example: one HiveServer2 for 'marketing', one for 'fraud', and one for 'financial'. As described above, HiveServer2 will only use initialized sessions if you don't specify tez.queue.name.

I'll write child articles showing how to set up multiple HiveServer2 instances and how to restrict access to each HiveServer2 instance to only the users authorized for it (using a custom hook). When they're done, I'll publish the links here.

Comments

Great article

"PS: only jdbc/odbc clients can use initialized Tez AM, hive cli (command line) or other external hive components don't use initialized Tez AM."

Beeline is also able to use initialized containers. Hive CLI can't, as it bypasses HS2.


Thanks for sharing this info. Nice post. I see the videos above are not available; could you share a correct URL?


@Guilherme Braccialli Could you share correct URLs for the demos above?


Thank you for sharing this info.

I could not see the videos; it would be great if you could share the URLs.

Version history
Revision #:
1 of 1
Last update:
‎09-14-2016 11:28 PM