Created on 09-14-2017 10:33 AM - edited 09-16-2022 05:14 AM
We have an issue with HiveServer2 on the latest build of CM and CDH 5.12.1. We can make the HiveServer2 process crash by running a SELECT on a table with a row limit of 10,000; below a row limit of 3,000 it doesn't crash. Below is the message in CM:
Sep 13 xx:xx:xx PM
Unexpected Exits: Bad
The health test result for HIVESERVER2_UNEXPECTED_EXITS has become bad: This role encountered 1 unexpected exit(s) in the previous 5 minute(s). This included 1 exit(s) due to OutOfMemory errors. Critical threshold: any.
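For reference, a rough sketch of how we trigger the crash, assuming a pyhive client; the host, username, and table name below are placeholders, not our real values:

```python
# Minimal reproduction sketch using pyhive (pip install pyhive thrift).
# Host, username, and table name are placeholders.
from pyhive import hive

# Connect to HiveServer2 on its default Thrift port.
conn = hive.connect(host="hiveserver2.example.com", port=10000, username="hive")
cursor = conn.cursor()

# LIMIT 10000 reliably crashes HiveServer2 for us; LIMIT 3000 and below do not.
cursor.execute("SELECT * FROM some_table LIMIT 10000")
rows = cursor.fetchall()
print(len(rows))
```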
Can you provide some instructions on how to find and fix the issue? In particular, where should we look in the config files and logs? Thanks!
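For anyone hitting the same thing, here is a rough sketch for confirming the OutOfMemoryError in the HiveServer2 logs. The log path is the usual default on a CM-managed cluster but is an assumption; check the Hive Log Directory setting for your deployment:

```python
# Sketch: scan HiveServer2 logs for OutOfMemoryError lines.
# The glob below assumes the default CM-managed log location; verify
# it against the Hive Log Directory setting in your cluster.
import glob

LOG_GLOB = "/var/log/hive/hadoop-cmf-hive-HIVESERVER2-*.log.out*"

for path in glob.glob(LOG_GLOB):
    with open(path, errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            if "OutOfMemoryError" in line:
                print(f"{path}:{lineno}: {line.rstrip()}")
```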
Created 09-14-2017 11:15 AM
Created 09-14-2017 12:21 PM
Perfect answer, thanks!!
After increasing the Java Heap Size of HiveServer2 in Bytes to 1 GiB, the query runs fine at both the 10,000 and 20,000 row limits, with no HiveServer2 crash so far. The setting was previously 50 MiB, and when the query ran it used about 250 MiB of Java heap. However, there are more heap settings that I am not sure whether to bump up as well; I will look for a CDH tuning guide. In my opinion, Cloudera should not ship such low defaults: none of the mainstream databases like Oracle or SQL Server would crash the server process because of a wayward SQL query.

The remaining heap settings are listed below, followed by a quick way to verify the running heap.
- Spark Executor Maximum Java Heap Size (spark.executor.memory), HiveServer2 Default Group: 256 MiB
- Spark Driver Maximum Java Heap Size (spark.driver.memory), HiveServer2 Default Group: 256 MiB
- Client Java Heap Size in Bytes, Gateway Default Group: 2 GiB
- Java Heap Size of Hive Metastore Server in Bytes, Hive Metastore Server Default Group: 50 MiB
- Java Heap Size of HiveServer2 in Bytes, HiveServer2 Default Group: 1 GiB
- Java Heap Size of WebHCat Server in Bytes, WebHCat Server Default Group: 50 MiB