Member since: 10-03-2016
Posts: 9
Kudos Received: 1
Solutions: 1
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2588 | 11-08-2019 06:26 AM
08-08-2020 09:17 AM
Thanks a ton! I was struggling with the same issue; this helped. Thanks.
11-08-2019 06:26 AM
Hi @rguruvannagari, thanks a lot for the reply. I'm not sure whether the heap space fills up during compaction or during the Ranger Hive audit; if we set Hive authentication to none, it is fine. Please see the following issue: https://community.cloudera.com/t5/Support-Questions/hive-metastore-is-not-responding-but-alive-with-the/m-p/282224 Thanks, Nag
11-07-2019 05:50 AM
The Hive Metastore consumes the entire heap (tried a max heap of 24 GB), and after some time it stops responding. The Metastore is fine if we disable Ranger Hive authentication. This issue is reported in https://jira.apache.org/jira/browse/HIVE-20568: it seems the Ranger plugin causes memory leaks during auditing when it encounters Hive database names containing '_'. Is there a workaround for this in HDP-3.1.0, or will upgrading from HDP-3.1 to HDP-3.1.4 fix it? Thanks in advance.

Current version and configuration:
- HDP-3.1, Hive 3.1
- Hive Authentication: Ranger
- JDK 1.8
- mysql-7.5, mysql-connector-8.0.13.jar
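One hedged stopgap, assuming the leak really is in the Ranger audit path as HIVE-20568 suggests, would be to switch off plugin auditing until an upgrade fixes it; note that Hive audit records stop being written while this is in place. The property below is Ranger's standard plugin audit switch (set in ranger-hive-audit.xml, surfaced in Ambari under the Hive plugin's advanced audit config); whether this avoids the leak in your exact build is an assumption, not something verified here.

```xml
<!-- ranger-hive-audit.xml: disable Ranger Hive plugin auditing.
     A stopgap only, not a fix; no audit records will be written. -->
<property>
  <name>xasecure.audit.is.enabled</name>
  <value>false</value>
</property>
```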
Labels:
- Apache Hive
- Apache Ranger
05-28-2019 10:50 AM
It seems it is not included in HDP yet; please use Pig as an alternative.
12-11-2018 02:30 PM
Hi Sahina, just wondering if you found a solution for this issue.
10-03-2016 06:32 PM
1 Kudo
Do the following to increase the DFS size: create additional directories or mount points in the HDFS data path. By default, an Ambari-deployed cluster uses /hadoop/hdfs/data as the data directory, so with root privileges create a new directory:

1) mkdir /hadoop/hdfs/data1
2) chown -R hdfs:hadoop /hadoop/hdfs/data1
3) chmod -R 777 /hadoop/hdfs/data1

Now edit the HDFS configuration: in Ambari, click on HDFS, then Configs, and in the settings add the new directory, comma-separated, under the DataNode directories property (dfs.datanode.data.dir), e.g. /hadoop/hdfs/data,/hadoop/hdfs/data1. Save the changes and restart the affected services.

That will increase the disk space. To increase it further, repeat the same steps, or LVM-resize the volume backing the /hadoop/hdfs/data directory.
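The directory-preparation steps above can be sketched as a small script. The production path /hadoop/hdfs/data1, the hdfs:hadoop ownership, and the 777 mode are taken from the post; the local demo default path and the non-root fallback are my additions so the sketch runs anywhere.

```shell
#!/bin/sh
# Prepare a new DataNode data directory (sketch of the steps above).
# In production pass /hadoop/hdfs/data1 and run as root; the default
# here is a local demo path so the script works without privileges.
NEW_DIR=${1:-./hdfs-data1-demo}

mkdir -p "$NEW_DIR"

# The DataNode runs as the hdfs user; skip gracefully when not root
# or when the hdfs user does not exist on this machine.
chown -R hdfs:hadoop "$NEW_DIR" 2>/dev/null \
  || echo "chown skipped (needs root and an hdfs user)"

chmod -R 777 "$NEW_DIR"   # as in the post; HDFS itself defaults to 750

echo "prepared $NEW_DIR"
```

After the directory exists, the new path still has to be appended to dfs.datanode.data.dir in Ambari and the DataNodes restarted, exactly as described above; the script only handles the local filesystem side.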