Created on 11-07-2019 05:50 AM - edited 11-07-2019 05:56 AM
The Hive Metastore consumes the complete heap (tried a max heap of 24 GB), and after some time the metastore stops responding. The metastore is fine if we disable Ranger Hive authorization; this issue is reported in https://jira.apache.org/jira/browse/HIVE-20568.
It seems the Ranger plugin is causing memory leaks during audit when it encounters Hive DB names containing '_'.
Just wondering if there is a workaround for it in HDP-3.1.0, or whether upgrading from HDP-3.1 to HDP-3.1.4 will fix it. Thanks in advance. (A sketch of how the authorization setting can be checked and disabled for testing follows the version list below.)
Current version and config information:
========================
HDP-3.1
Hive-3.1
Hive Authorization - Ranger
JDK-1.8
MySQL-7.5
mysql-connector-8.0.13.jar
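For reference, this is roughly how the authorization setting can be checked and disabled for a test like the one above; the JDBC URL is a placeholder and the property values are what a typical Ranger-enabled HDP cluster would show, so treat it as a sketch rather than our exact configuration:

# Check the effective authorization settings on HiveServer2 (replace the JDBC URL with your own).
beeline -u "jdbc:hive2://<hs2-host>:10000/default" \
  -e "SET hive.security.authorization.enabled; SET hive.security.authorization.manager;"

# With the Ranger plugin enabled, hive.security.authorization.manager normally points to
# org.apache.ranger.authorization.hive.authorizer.RangerHiveAuthorizerFactory.
# For the test, Hive authorization was switched to "None" via the authorization setting in
# Ambari (Hive > Configs) and HiveServer2 was restarted.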
Created 11-07-2019 04:49 PM
@nagaiik There is no Ranger plugin for the Hive Metastore. The Ranger plugin is configured for HiveServer2 and runs as part of the HS2 JVM.
If it is the Hive Metastore that hangs, then it is not related to Ranger. A heap dump of the process must be analyzed to identify what is causing the high memory usage; a minimal capture sketch is below.
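For reference, this is one way to capture that heap dump; the process lookup, <pid> placeholder, and output path below are illustrative assumptions, not values from your cluster:

# Find the Hive Metastore (or HiveServer2) process id; adjust the grep pattern
# to whatever the process command line looks like on your node.
ps -ef | grep -i hivemetastore | grep -v grep

# Capture a heap dump of live objects with jmap (ships with JDK 1.8).
jmap -dump:live,format=b,file=/tmp/metastore-heap.hprof <pid>

# Alternatively, with jcmd:
jcmd <pid> GC.heap_dump /tmp/metastore-heap.hprof

# Open the .hprof file in a heap analyzer such as Eclipse MAT to see which objects
# retain most of the heap.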
HIVE-20568 is the fix that stops converting special characters in table/db names. Because of that conversion, Ranger authorization was not enforced, since the converted names did not match the names set in the policies. There is no workaround for this; it can be fixed by upgrading to HDP-3.1.4 or by contacting Cloudera for a hotfix. Refer to the release notes for fixed issues:
https://docs.cloudera.com/HDPDocuments/HDP3/HDP-3.1.4/release-notes/content/fixed_issues.html
Created 11-08-2019 06:26 AM
Thanks a lot for the reply. Not sure if the heap space is filled during compaction or by the Ranger Hive audit; if we set Hive authorization to None then it is OK. Please see the following issue.
Thanks
Nag