Support Questions


Can we configure Hive jobs not to run where Spark is installed?

Rising Star

How can we configure the cluster so that Spark is separated from the other ecosystem components?

1 ACCEPTED SOLUTION

Master Mentor

You can logically segregate a cluster using YARN node labels. http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_yarn_resource_mgt/content/ch_node_labels....
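As a rough sketch (the label and host names here are illustrative, and node labels must already be enabled via `yarn.node-labels.enabled` in yarn-site.xml), labelling looks something like this:

```shell
# Register a cluster node label; exclusive=true means only queues
# granted this label can run containers on labelled nodes.
yarn rmadmin -addToClusterNodeLabels "hive(exclusive=true)"

# Attach the label to the hosts that should run Hive work
# (hostname is a placeholder for your actual node).
yarn rmadmin -replaceLabelsOnNode "hive-worker1.example.com=hive"
```

Queues are then given access to the label through `accessible-node-labels` properties in capacity-scheduler.xml, so Hive's queue can be restricted to the labelled nodes.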

You can also choose different queues for Spark and Hive. It won't necessarily prevent tasks from running on the same nodes, but at least they won't compete for resources.

http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_performance_tuning/content/hive_perf_best...

And http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.4.0/bk_yarn_resource_mgt/content/managing_cluste...
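A minimal sketch of the separate-queues approach (queue names and capacity percentages below are illustrative, not from the docs) in capacity-scheduler.xml:

```xml
<!-- capacity-scheduler.xml: two sibling queues under root -->
<property>
  <name>yarn.scheduler.capacity.root.queues</name>
  <value>hive,spark</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.hive.capacity</name>
  <value>60</value>
</property>
<property>
  <name>yarn.scheduler.capacity.root.spark.capacity</name>
  <value>40</value>
</property>
```

Jobs are then pointed at their queue at submit time, e.g. `spark-submit --queue spark ...` for Spark, and `set tez.queue.name=hive;` (or `mapreduce.job.queuename` for MR) for Hive.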


4 REPLIES


Rising Star

Thanks, that helps.

At the same time, can you point me to documentation on setting up a time-based queue capacity change?

Guru

It would be good if you posted this as a new question. Right now there is no built-in support for time-based queue capacity changes.

That said, we were able to run a cron-based job that refreshes the queues after manual changes to the capacity scheduler configuration. Be aware that if someone restarts the ResourceManagers and/or refreshes the queues from Ambari, your cron-based changes will be overwritten.
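A minimal sketch of that cron approach (the paths, schedule, and script name are assumptions): keep pre-staged copies of capacity-scheduler.xml and let cron swap them in, then ask the ResourceManager to reload its queues.

```shell
# crontab entries (illustrative): shift capacities at 08:00 and 18:00
#   0 8  * * * /opt/scripts/swap-scheduler.sh day
#   0 18 * * * /opt/scripts/swap-scheduler.sh night

# /opt/scripts/swap-scheduler.sh (assumed path)
#!/bin/sh
PROFILE="$1"   # "day" or "night"
cp "/etc/hadoop/conf/capacity-scheduler.xml.${PROFILE}" \
   /etc/hadoop/conf/capacity-scheduler.xml
# Reload queue definitions without restarting the ResourceManager
yarn rmadmin -refreshQueues
```

Note that `yarn rmadmin -refreshQueues` only reloads queue configuration; as mentioned above, an Ambari-driven refresh or an RM restart will push back whatever Ambari has stored.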

Master Mentor

@kjilla if this is a satisfactory answer, please accept it.