Created 11-15-2016 07:08 AM
Hello,
Recently I installed a compute-only node with more CPU and memory, hoping it would run Hive on Spark queries faster, but I ran into a strange problem.
When I use regular Hive on MapReduce, all NodeManagers (the datanodes plus the compute node) are used.
But with Hive on Spark, the compute node is only used for the first 1-2 minutes and then sits idle until the query finishes.
Is there any specific configuration needed so the compute node keeps being used with Hive on Spark?
I'm using CDH 5.8.0-1.cdh5.8.0.p0.42
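For context, executor placement for Hive on Spark is usually governed by Spark's dynamic-allocation properties, which release idle executors after a timeout. The sketch below shows the relevant property names with illustrative values only; it is not a confirmed fix, and the effective defaults in CDH 5.8 may differ:

```properties
# Spark properties that influence which NodeManagers run executors
# for Hive on Spark (example values, not a verified configuration).
spark.dynamicAllocation.enabled=true       # idle executors are released back to YARN
spark.dynamicAllocation.minExecutors=1
spark.dynamicAllocation.maxExecutors=16    # example cap; a low cap can leave nodes unused
spark.shuffle.service.enabled=true         # required when dynamic allocation is on
spark.executor.memory=4g                   # per-executor memory (example)
spark.executor.cores=2                     # per-executor cores (example)
```

If dynamic allocation is enabled, the behavior described (executors running briefly and then disappearing from a node) can simply be executors being deallocated once their tasks finish.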