Created 03-15-2021 05:48 AM
I have a use case: if a particular service is down on a node, no jobs should be scheduled there anymore. For Hive or Spark I can use Hadoop's external health check script, which runs periodically; if the service is down, the script marks the node unhealthy and jobs are no longer scheduled there. But this relies on YARN, and Impala doesn't use YARN. I tried to find an alternative for Impala, but I couldn't find anything like a custom script. What are the possible ways to tackle this scenario? Does Impala have a custom health check script?
Reference: https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/NodeManager.html
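For context, a NodeManager health check script marks the node unhealthy by printing a line that starts with "ERROR" (per the NodeManager docs linked above). A minimal sketch, assuming the process name to monitor is passed as the first argument ("my-service" is just a hypothetical default):

```shell
#!/bin/sh
# Minimal NodeManager health-check sketch.
# YARN marks the node UNHEALTHY when a line of output starts with "ERROR";
# otherwise the node is considered healthy.

check_service() {
  # pgrep -x matches the exact process name
  if pgrep -x "$1" > /dev/null 2>&1; then
    echo "OK: $1 is running"
  else
    echo "ERROR: $1 is not running on $(hostname)"
  fi
}

check_service "${1:-my-service}"   # "my-service" is a placeholder name
```

Note that YARN keys off the ERROR string in the script's output, not the exit code, so the script should still exit 0 when it reports a problem.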
Created 03-15-2021 12:00 PM
Keep in mind that Impala uses HMS, and the Hive metastore database is required for Impala to function.
So if HMS is not running, no Impala query/job should be launched.
Hope that helps
Created 03-16-2021 01:10 AM
Hi Shelton, thanks for the reply.
I'll try to reframe my question here a bit. I want to decommission certain worker nodes based on a criterion, without disturbing the service as a whole, and I should be able to recommission those nodes again later.
FYI,
The external health check script I've used with YARN-based services (e.g. Hive) does not stop the Hive metastore. The script is distributed across the nodes and executed by YARN periodically; when it fails on a node, that node is marked unhealthy and no more jobs are scheduled there. If the node becomes healthy again after some time, jobs can be scheduled there once more.
I've added this example so that you can relate better to the use case in question.
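For completeness, this is how such a script gets wired up in yarn-site.xml. The property names are the documented NodeManager health-checker settings; the script path and interval values here are illustrative:

```xml
<property>
  <name>yarn.nodemanager.health-checker.script.path</name>
  <value>/etc/hadoop/conf/health-check.sh</value> <!-- illustrative path -->
</property>
<property>
  <name>yarn.nodemanager.health-checker.interval-ms</name>
  <value>600000</value> <!-- run every 10 minutes (illustrative) -->
</property>
<property>
  <name>yarn.nodemanager.health-checker.script.timeout-ms</name>
  <value>120000</value> <!-- illustrative timeout -->
</property>
```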
Created 03-23-2021 09:23 PM
Edit: I've learned that decommissioning nodes might not be the preferred way; otherwise, the problem statement remains the same, i.e. jobs should not get scheduled on nodes where a particular service is down.