Created on 09-14-2016 11:28 PM - edited on 07-14-2020 03:06 AM by VidyaSargur
In this article, we cover how HiveServer2 initializes and uses Tez sessions (the AM pool), and how to tune Capacity Scheduler and Hive/Tez settings for small and medium clusters.
Assumptions:
Tested with HDP 2.4.0
I'm using the configuration suggested in the best-practices docs related to security and in the Hive performance tuning guide.
Understanding #1 - default queues + number of sessions
When you define the Hive default queues (hive.server2.tez.default.queues), the number of sessions per queue (hive.server2.tez.sessions.per.default.queue), and set hive.server2.tez.initialize.default.sessions=true, HiveServer2 will create one Tez AM (ApplicationMaster) for each default queue x number of sessions when the HiveServer2 service starts.
For example, if you define default queues = "queue1,queue2" and number of sessions = 2, HiveServer2 will create 4 Tez AMs (2 for queue1 and 2 for queue2).
If HiveServer2 is in continuous use, those Tez AMs will keep running; but if HiveServer2 is idle, they will be killed after the timeout defined by tez.session.am.dag.submit.timeout.secs.
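For reference, a minimal sketch of the properties involved (values illustrative, matching the example above):

```properties
# AM pool size = default queues x sessions per queue = 2 x 2 = 4 Tez AMs
hive.server2.tez.default.queues=queue1,queue2
hive.server2.tez.sessions.per.default.queue=2
hive.server2.tez.initialize.default.sessions=true
# idle Tez AMs are killed after this timeout (value here is an example)
tez.session.am.dag.submit.timeout.secs=300
```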
Understanding #2 - pre-warm containers
Pre-warm containers are different from session initialization.
The number of pre-warm containers (hive.prewarm.numcontainers) is the number of YARN execution containers (not counting the AM container) that will be attached to each Tez AM by default. Each AM holds on to these containers even while idle (not executing queries).
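The relevant settings (values illustrative, matching the suggestion later in this article):

```properties
hive.prewarm.enabled=true
# execution containers held by each Tez AM, even while the AM is idle
hive.prewarm.numcontainers=2
```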
Understanding #3 - when initialized Tez AM (AM pool) is used
A query WILL ONLY use a Tez AM from the pool (initialized as described above) if you DO NOT specify a queue name (tez.queue.name). In that case, HiveServer2 will pick one of the idle/available Tez AMs, so the queue may effectively be chosen at random. If you execute multiple queries over the same JDBC/ODBC connection, each query may be executed by a different Tez AM, possibly in different queues.
If you specify a queue name in your JDBC/ODBC connection, HiveServer2 will create a new Tez AM for that connection and won't use any of the initialized sessions.
PS: only JDBC/ODBC clients connecting through HiveServer2 (including Beeline) can use the initialized Tez AMs; the Hive CLI bypasses HiveServer2 and starts its own Tez AM.
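As an illustration (hostname and queue name hypothetical), the connection string alone determines whether the AM pool is used:

```shell
# No queue specified: queries use an idle Tez AM from the pool
beeline -u "jdbc:hive2://hs2-host:10000/default"

# Queue specified in the hive conf list: HiveServer2 creates a new,
# dedicated Tez AM for this connection instead of using the pool
beeline -u "jdbc:hive2://hs2-host:10000/default?tez.queue.name=myqueue"
```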
Understanding #4 - what if you have more concurrent queries than the number of initialized Tez AMs?
If you DO NOT specify a queue name (as described above), HiveServer2 will hold your query until one of the default Tez AMs (the pool) is available to serve it. There won't be any message in the JDBC/ODBC client, nor in the HiveServer2 log file. An uninformed user (or admin) may think the JDBC/ODBC connection or HiveServer2 is broken, but it is just waiting for a Tez AM to execute the query.
If you DO specify a queue name, it doesn't matter how many initialized Tez AMs are in use or idle: HiveServer2 will create a new Tez AM for this connection and the query can be executed (if the queue has available resources).
Understanding #5 - how to make a custom queue name (defined by the user) use the AM pool
There is no way to let users specify queue names and at the same time use the Tez AM pool (initialized sessions). If your use case requires a different/dedicated Tez AM pool for each group of users, you need dedicated HiveServer2 services, each with its own default queue name and number of sessions, and you must ask each group of users to use its respective HiveServer2.
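As a sketch (queue names hypothetical), two dedicated HiveServer2 instances could each point their pool at a different queue:

```properties
# hive-site.xml for HiveServer2 instance #1 (e.g. the 'marketing' group)
hive.server2.tez.default.queues=marketing
hive.server2.tez.sessions.per.default.queue=2
hive.server2.tez.initialize.default.sessions=true

# hive-site.xml for HiveServer2 instance #2 (e.g. the 'fraud' group)
hive.server2.tez.default.queues=fraud
hive.server2.tez.sessions.per.default.queue=2
hive.server2.tez.initialize.default.sessions=true
```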
Understanding #6 - number of containers for a single query
The number of containers each query will use is determined as described in https://cwiki.apache.org/confluence/display/TEZ/How+initial+task+parallelism+works, which considers the resources available in the current queue. The resources available in a queue are defined by the minimum guaranteed capacity (yarn.scheduler.capacity.root._queuename_.capacity), not the maximum capacity (yarn.scheduler.capacity.root._queuename_.maximum-capacity).
In other words, if you execute a heavy query, it will probably use all the resources available in the queue, leaving no room for other queries to execute.
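To make this concrete, here is a rough back-of-the-envelope sketch; the cluster size and container size are illustrative assumptions, not values from this article:

```python
# Tez sizes initial parallelism against the queue's minimum guaranteed
# capacity (yarn.scheduler.capacity.root.<queue>.capacity), not its
# maximum-capacity. Numbers below are illustrative only.
cluster_memory_gb = 100      # total YARN memory in the cluster (assumed)
container_size_gb = 4        # Tez container size (assumed)
queue_capacity_pct = 70      # e.g. ...hive-test.capacity=70

queue_guaranteed_gb = cluster_memory_gb * queue_capacity_pct // 100
available_containers = queue_guaranteed_gb // container_size_gb
print(available_containers)  # prints 17: containers one heavy query may grab
```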
Understanding #7 - capacity scheduler ordering policy (FIFO or FAIR) and preemption
There is no preemption inside queues, and the FAIR ordering policy only takes effect when containers are released by the AM. If you are reusing containers (tez.am.container.reuse.enabled=true), as recommended, the AM will only release containers when a query finishes; so if the first query being executed is heavy, all subsequent queries will have to wait for it to finish before receiving their "fair" share.
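For reference, the ordering policy is set per leaf queue in capacity-scheduler.xml (queue name illustrative):

```properties
# FAIR only redistributes containers the AM actually releases; with
# container reuse enabled, that happens only when a query finishes
yarn.scheduler.capacity.root.hive-test.user.user1.ordering-policy=fair
tez.am.container.reuse.enabled=true
```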
Understanding #8 - a scenario where it all works together
My understanding is that all these pieces and settings were initially designed and very well optimized for large, busy multi-tenant clusters.
Small/Medium cluster most common needs
Although the scenario described in item #8 is very interesting (and Hive+Tez is the best solution at that scale), it is not the reality in most small and medium Hadoop clusters, usually found during the Hadoop adoption phase and in POCs.
In my experience, what most users of such environments need is much simpler.
Proposed solution / settings
Hive+Tez is the perfect solution for both scenarios described above, but now I'm going to share details on how to adjust settings for the "small/medium cluster most common needs":
yarn.resourcemanager.scheduler.monitor.enable=true
yarn.resourcemanager.monitor.capacity.preemption.max_ignored_over_capacity=0.01
yarn.resourcemanager.monitor.capacity.preemption.max_wait_before_kill=1000
yarn.resourcemanager.monitor.capacity.preemption.monitoring_interval=1000
yarn.resourcemanager.monitor.capacity.preemption.natural_termination_factor=1
yarn.resourcemanager.monitor.capacity.preemption.total_preemption_per_round=1
yarn.resourcemanager.scheduler.monitor.policies=org.apache.hadoop.yarn.server.resourcemanager.monitor.capacity.ProportionalCapacityPreemptionPolicy
yarn.scheduler.capacity.maximum-am-resource-percent=0.2
yarn.scheduler.capacity.maximum-applications=10000
yarn.scheduler.capacity.node-locality-delay=40
yarn.scheduler.capacity.queue-mappings-override.enable=false
yarn.scheduler.capacity.root.accessible-node-labels=*
yarn.scheduler.capacity.root.acl_administer_queue=*
yarn.scheduler.capacity.root.capacity=100
yarn.scheduler.capacity.root.default.acl_submit_applications=*
yarn.scheduler.capacity.root.default.capacity=30
yarn.scheduler.capacity.root.default.maximum-capacity=100
yarn.scheduler.capacity.root.default.state=RUNNING
yarn.scheduler.capacity.root.default.user-limit-factor=10
yarn.scheduler.capacity.root.hive-test.acl_administer_queue=*
yarn.scheduler.capacity.root.hive-test.acl_submit_applications=*
yarn.scheduler.capacity.root.hive-test.capacity=70
yarn.scheduler.capacity.root.hive-test.maximum-capacity=70
yarn.scheduler.capacity.root.hive-test.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.hive-test.ordering-policy=fifo
yarn.scheduler.capacity.root.hive-test.production.acl_administer_queue=*
yarn.scheduler.capacity.root.hive-test.production.acl_submit_applications=*
yarn.scheduler.capacity.root.hive-test.production.capacity=40
yarn.scheduler.capacity.root.hive-test.production.maximum-capacity=100
yarn.scheduler.capacity.root.hive-test.production.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.hive-test.production.ordering-policy=fifo
yarn.scheduler.capacity.root.hive-test.production.state=RUNNING
yarn.scheduler.capacity.root.hive-test.production.user-limit-factor=100
yarn.scheduler.capacity.root.hive-test.queues=production,user
yarn.scheduler.capacity.root.hive-test.state=RUNNING
yarn.scheduler.capacity.root.hive-test.user-limit-factor=10
yarn.scheduler.capacity.root.hive-test.user.acl_administer_queue=*
yarn.scheduler.capacity.root.hive-test.user.acl_submit_applications=*
yarn.scheduler.capacity.root.hive-test.user.capacity=60
yarn.scheduler.capacity.root.hive-test.user.maximum-capacity=100
yarn.scheduler.capacity.root.hive-test.user.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.hive-test.user.ordering-policy=fifo
yarn.scheduler.capacity.root.hive-test.user.queues=user1,user2,user3
yarn.scheduler.capacity.root.hive-test.user.state=RUNNING
yarn.scheduler.capacity.root.hive-test.user.user-limit-factor=100
yarn.scheduler.capacity.root.hive-test.user.user1.acl_administer_queue=*
yarn.scheduler.capacity.root.hive-test.user.user1.acl_submit_applications=*
yarn.scheduler.capacity.root.hive-test.user.user1.capacity=33
yarn.scheduler.capacity.root.hive-test.user.user1.maximum-capacity=100
yarn.scheduler.capacity.root.hive-test.user.user1.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.hive-test.user.user1.ordering-policy=fifo
yarn.scheduler.capacity.root.hive-test.user.user1.state=RUNNING
yarn.scheduler.capacity.root.hive-test.user.user1.user-limit-factor=10
yarn.scheduler.capacity.root.hive-test.user.user2.acl_administer_queue=*
yarn.scheduler.capacity.root.hive-test.user.user2.acl_submit_applications=*
yarn.scheduler.capacity.root.hive-test.user.user2.capacity=33
yarn.scheduler.capacity.root.hive-test.user.user2.maximum-capacity=100
yarn.scheduler.capacity.root.hive-test.user.user2.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.hive-test.user.user2.ordering-policy=fifo
yarn.scheduler.capacity.root.hive-test.user.user2.state=RUNNING
yarn.scheduler.capacity.root.hive-test.user.user2.user-limit-factor=10
yarn.scheduler.capacity.root.hive-test.user.user3.acl_administer_queue=*
yarn.scheduler.capacity.root.hive-test.user.user3.acl_submit_applications=*
yarn.scheduler.capacity.root.hive-test.user.user3.capacity=34
yarn.scheduler.capacity.root.hive-test.user.user3.maximum-capacity=100
yarn.scheduler.capacity.root.hive-test.user.user3.minimum-user-limit-percent=100
yarn.scheduler.capacity.root.hive-test.user.user3.ordering-policy=fifo
yarn.scheduler.capacity.root.hive-test.user.user3.state=RUNNING
yarn.scheduler.capacity.root.hive-test.user.user3.user-limit-factor=10
yarn.scheduler.capacity.root.queues=default,hive-test
hive.server2.tez.default.queues=user1,user2,user3
hive.server2.tez.sessions.per.default.queue=1
hive.server2.tez.initialize.default.sessions=true
hive.prewarm.enabled=true
hive.prewarm.numcontainers=2
tez.am.container.reuse.enabled=true
Finally, it can also be useful to increase the release timeout for idle containers in the Tez settings. This keeps all containers running for a short period after a query finishes (not only the pre-warm containers, but also the extra containers launched by the AM), making subsequent queries start faster. A suggestion would be:
tez.am.container.idle.release-timeout-min.millis=30000
tez.am.container.idle.release-timeout-max.millis=90000
PS: If you have multiple business units, each with different resource sharing/priorities, you need a dedicated HiveServer2 for each business unit. For example: one HiveServer2 for 'marketing', one for 'fraud', one for 'financial'. As described above, HiveServer2 will only use initialized sessions if you don't specify tez.queue.name.
Created on 10-03-2016 05:59 AM
Great article
"PS: only jdbc/odbc clients can use initialized Tez AM, hive cli (command line) or other external hive components don't use initialized Tez AM."
Beeline is also able to use initialized containers. The Hive CLI can't, as it bypasses HS2.
Created on 07-05-2018 12:10 PM
Thanks for sharing this info. Nice post. I see that the videos above are not available; could you share a working URL?
Created on 07-05-2018 12:12 PM
@Guilherme Braccialli Could you share working URLs for the demos above?
Created on 09-05-2018 09:40 PM
Thank you for sharing this info. I could not see the videos; it would be great if you could share their URLs.
Created on 07-09-2020 03:00 AM
I'm on HDP version 3.0.1 and cannot find a lot of the parameters you described. I'm using the Ambari interface.
The videos are still not visible.
Created on 07-15-2020 11:12 PM - edited 07-15-2020 11:13 PM
We have updated this article to remove links to videos that are no longer available.