Hive process crash
Labels: Apache Hive
Created on 09-14-2017 10:33 AM - edited 09-16-2022 05:14 AM
We have an issue with HiveServer2 on the latest build of Cloudera Manager and CDH 5.12.1. We can make the HiveServer2 process crash by running a SELECT on a table with a row limit of 10,000; below a row limit of 3,000 it doesn't crash. Below is the message in CM:
Sep 13 xx:xx:xx PM
Unexpected Exits Bad
The health test result for HIVESERVER2_UNEXPECTED_EXITS has become bad: This role encountered 1 unexpected exit(s) in the previous 5 minute(s). This included 1 exit(s) due to OutOfMemory errors. Critical threshold: any.
Can you provide some instructions on how to find and fix the issue, in particular where to look in the config files and logs? Thanks!
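For reference, a minimal sketch of where an OOM crash leaves evidence: on a default CDH install the HiveServer2 role logs under /var/log/hive (the exact path and file pattern below are assumptions; adjust for your cluster), and heap exhaustion shows up as java.lang.OutOfMemoryError entries:

```python
#!/usr/bin/env python3
# Minimal sketch: scan HiveServer2 logs for OutOfMemoryError evidence.
# The log glob is a typical CDH default and an assumption; adjust it.
import glob

LOG_GLOB = "/var/log/hive/hadoop-cmf-hive-HIVESERVER2-*.log.out"  # assumed path

for path in glob.glob(LOG_GLOB):
    with open(path, errors="replace") as f:
        for lineno, line in enumerate(f, 1):
            if "OutOfMemoryError" in line or "GC overhead limit" in line:
                print("%s:%d: %s" % (path, lineno, line.rstrip()))
```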
Created 09-14-2017 11:15 AM
In CM, browse to the Hive service Configuration tab and search for 'Java Heap Size of HiveServer2 in Bytes'. I don't know what you have, but increase it by 1 GB and test.
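For those who prefer to script the change, here is a rough sketch using the Cloudera Manager REST API instead of the UI. The API version, cluster/service names, role config group name, and the property name hiveserver2_java_heapsize are all assumptions for a CM 5.12 install; verify them against your CM version's API documentation before running:

```python
#!/usr/bin/env python3
# Sketch: set the HiveServer2 heap via the Cloudera Manager REST API.
# All names below are assumptions; check your CM version's API docs.
import json
import requests

CM_HOST = "http://cm-host:7180"     # assumed CM host/port
AUTH = ("admin", "admin")           # assumed credentials
CLUSTER = "cluster1"                # assumed cluster name
GROUP = "hive-HIVESERVER2-BASE"     # assumed role config group name

url = "%s/api/v17/clusters/%s/services/hive/roleConfigGroups/%s/config" % (
    CM_HOST, CLUSTER, GROUP)
payload = {"items": [{"name": "hiveserver2_java_heapsize",   # assumed property
                      "value": str(1024 * 1024 * 1024)}]}    # 1 GiB in bytes

resp = requests.put(url, auth=AUTH, json=payload)
resp.raise_for_status()
print(json.dumps(resp.json(), indent=2))
```

Note that the HiveServer2 role still has to be restarted for the new heap size to take effect, just as with a change made through the UI.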
Created 09-14-2017 12:21 PM
Perfect answer, thanks!!
After increasing 'Java Heap Size of HiveServer2 in Bytes' to 1 GiB, the query runs fine at both the 10,000 and 20,000 row limits, with no Hive process crash so far. It was previously set to 50 MiB, and the query used about 250 MiB of Java heap when it ran. However, we still have some more heap settings, listed below, that I am not sure whether to bump up or not; I will look for a CDH tuning guide (a sketch for auditing the current values via the CM API follows the table). In my opinion Cloudera should not ship such low defaults; none of the mainstream databases like Oracle or SQL Server would ever crash the server process because of a wayward SQL query.
| Setting | Property | Role Group | Current Value |
| --- | --- | --- | --- |
| Spark Executor Maximum Java Heap Size | spark.executor.memory | HiveServer2 Default Group | 256 MiB |
| Spark Driver Maximum Java Heap Size | spark.driver.memory | HiveServer2 Default Group | 256 MiB |
| Client Java Heap Size in Bytes | | Gateway Default Group | 2 GiB |
| Java Heap Size of Hive Metastore Server in Bytes | | Hive Metastore Server Default Group | 50 MiB |
| Java Heap Size of HiveServer2 in Bytes | | HiveServer2 Default Group | 1 GiB |
| Java Heap Size of WebHCat Server in Bytes | | WebHCat Server Default Group | 50 MiB |
