We have an issue on the latest build of CM with a CDH 5.12.1 HiveServer2. We can make the HiveServer2 process crash by running a SELECT on a table with a row limit of 10,000; below a row limit of 3,000 it doesn't crash. This is the message in CM:

Sep 13 xx:xx:xx PM  Unexpected Exits  Bad  The health test result for HIVESERVER2_UNEXPECTED_EXITS has become bad: This role encountered 1 unexpected exit(s) in the previous 5 minute(s). This included 1 exit(s) due to OutOfMemory errors. Critical threshold: any.
Can you provide some instructions on how to find and fix the issue? Detailed pointers on where to look in the config files and logs would be appreciated. Thanks!
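On the "where to look" part: on a CDH node the HiveServer2 role logs usually live under /var/log/hive (the exact file name includes the role and hostname, so the path below is illustrative, not the real one). A quick grep for OutOfMemoryError confirms whether the unexpected exits CM reports are heap exhaustion. A minimal, self-contained sketch using a fabricated log excerpt:

```shell
# Illustrative only: write a fake HiveServer2 log excerpt, then grep it the
# same way you would grep the real role log under /var/log/hive on the HS2 host.
cat > /tmp/hs2_sample.log <<'EOF'
2017-09-13 12:01:02,345 ERROR org.apache.thrift.server.TThreadPoolServer: Error occurred during processing of message.
java.lang.OutOfMemoryError: Java heap space
EOF

# Count OOM occurrences; on a live host, point this at the actual role log file.
grep -c "OutOfMemoryError" /tmp/hs2_sample.log
# prints 1
```

If the count is non-zero around the crash timestamps, the fix is heap sizing rather than anything query-side.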
After increasing Java Heap Size of HiveServer2 in Bytes to 1 GiB, the query runs fine at 10,000 and 20,000 row limits, with no HiveServer2 crash so far. It was previously set to 50 MiB, and when the query ran it used about 250 MiB of Java heap.

However, there are several more heap settings (listed below) that I am not sure whether to bump up; I will look for a CDH tuning guide. In my opinion, Cloudera should not ship such low defaults: none of the mainstream databases, such as Oracle or SQL Server, would ever crash the server process because of a wayward SQL query.
- Spark Executor Maximum Java Heap Size (spark.executor.memory), HiveServer2 Default Group
- Spark Driver Maximum Java Heap Size (spark.driver.memory), HiveServer2 Default Group
- Client Java Heap Size in Bytes, Gateway Default Group
- Java Heap Size of Hive Metastore Server in Bytes, Hive Metastore Server Default Group
- Java Heap Size of HiveServer2 in Bytes, HiveServer2 Default Group
- Java Heap Size of WebHCat Server in Bytes, WebHCat Server Default Group
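Before raising each of the settings above, it can help to verify what maximum heap a given role process is actually running with, since CM passes the configured heap through as a JVM -Xmx flag. A sketch of extracting it (the command line here is a made-up sample; on a live host you would capture the real one with something like `ps -o args= -p <pid>`):

```shell
# Hypothetical HiveServer2 command line (a real one is far longer).
CMDLINE='java -Xms52428800 -Xmx52428800 org.apache.hive.service.server.HiveServer2'

# Pull out the max-heap flag; 52428800 bytes = 50 MiB, the old default here.
echo "$CMDLINE" | grep -o 'Xmx[0-9]*'
# prints Xmx52428800
```

Checking the live -Xmx per role this way makes it clear which of the six settings actually governs the process that crashed.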