
Hive process crash

Expert Contributor

We have an issue with HiveServer2 on the latest CM build and CDH 5.12.1. We can make the HiveServer2 process crash by running a SELECT on a table with a row limit of 10,000; below a row limit of 3,000 it doesn't crash. Below is the message in CM:
Sep 13 xx:xx:xx PM
Unexpected Exits Bad
The health test result for HIVESERVER2_UNEXPECTED_EXITS has become bad: This role encountered 1 unexpected exit(s) in the previous 5 minute(s). This included 1 exit(s) due to OutOfMemory errors. Critical threshold: any.

Can you provide some instructions on how to find and fix the issue? In particular, where should we look in the config files and logs? Thanks!
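
For reference, this is roughly how we trigger it and how we check for the OOM afterwards (the JDBC URL, table name, and log path below are placeholders for our setup):

```bash
# Trigger the crash from beeline (URL and table are placeholders):
beeline -u "jdbc:hive2://hs2-host:10000" \
  -e "SELECT * FROM some_wide_table LIMIT 10000;"

# Check the HiveServer2 log for the OOM (default CM-managed log
# location; adjust if your logs live elsewhere):
grep -i "OutOfMemoryError" /var/log/hive/hadoop-cmf-hive-HIVESERVER2-*.log.out
```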

1 ACCEPTED SOLUTION

Champion
You need to increase the HS2 heap size; whatever it is set to now is too low to process and return that much data for your query.

In CM, browse to the Hive service's Configuration tab and search for 'Java Heap Size of HiveServer2 in Bytes'. I don't know what you currently have it set to, but increase it by 1 GB and test.
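
If you would rather script the change, the same setting can be updated through the Cloudera Manager REST API. A rough sketch, assuming the default role config group name and an admin login; the API version, cluster name, and service name will differ on your cluster, and the value is given in bytes:

```bash
# Set the HiveServer2 heap to 1 GiB (1073741824 bytes) via the CM API.
# hiveserver2_java_heapsize is the CM config key for this setting.
curl -u admin:admin -X PUT \
  -H "Content-Type: application/json" \
  -d '{"items":[{"name":"hiveserver2_java_heapsize","value":"1073741824"}]}' \
  "http://cm-host:7180/api/v17/clusters/Cluster1/services/hive/roleConfigGroups/hive-HIVESERVER2-BASE/config"

# Restart the Hive service so the new heap takes effect:
curl -u admin:admin -X POST \
  "http://cm-host:7180/api/v17/clusters/Cluster1/services/hive/commands/restart"
```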


2 REPLIES


Expert Contributor

Perfect answer, thanks!

After increasing the Java Heap Size of HiveServer2 in Bytes to 1 GiB, the query runs fine at both the 10,000 and 20,000 row limits, with no Hive process crash so far. It was previously set to 50 MiB, and when the query ran it used 250 MiB of Java heap.

However, there are some more heap settings, listed below, that I am not sure whether to bump up or not; I will look for a CDH tuning guide. (See the sketch after the list for a quick way to check what each JVM actually uses.) I do think Cloudera should not ship such low defaults: none of the mainstream databases like Oracle or SQL Server would crash the server process because of a wayward SQL query. Just my opinion.

Spark Executor Maximum Java Heap Size (spark.executor.memory), HiveServer2 Default Group: 256 MiB
Spark Driver Maximum Java Heap Size (spark.driver.memory), HiveServer2 Default Group: 256 MiB
Client Java Heap Size in Bytes, Gateway Default Group: 2 GiB
Java Heap Size of Hive Metastore Server in Bytes, Hive Metastore Server Default Group: 50 MiB
Java Heap Size of HiveServer2 in Bytes, HiveServer2 Default Group: 1 GiB
Java Heap Size of WebHCat Server in Bytes, WebHCat Server Default Group: 50 MiB
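
In case it helps anyone else, here is a quick way to see what the HiveServer2 JVM is actually using before deciding which of these to bump (standard JDK 7/8 tools; the main class name is from our Hive install):

```bash
# Find the HiveServer2 JVM by its main class:
HS2_PID=$(pgrep -f "org.apache.hive.service.server.HiveServer2" | head -n1)

# One-shot heap configuration and usage summary:
jmap -heap "$HS2_PID"

# GC and heap utilization, sampled every 5 seconds:
jstat -gcutil "$HS2_PID" 5000
```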