Created 10-30-2017 11:37 AM
We have an Ambari cluster with the HIVE service.
The Ambari configuration is supposed to delete files under /var/log/hive once more than 30 backups with the same base name exist.
From the HIVE config:
hive_log_maxbackupindex=30
But when we log in to the master machines, we see more than 60 files with the same base name.
Example:
cd /var/log/hive
ls -ltr | grep hivemetastore | wc -l
61
ls -ltr | grep hiveserver2 | wc -l
61
We also uncommented the line log4j.appender.DRFA.MaxBackupIndex and restarted the Hive service,
but this did not help.
Please advise: what could the problem be?
Example of the files under /var/log/hive:
-rw-r--r--. 1 hive hadoop 2752 Sep 2 19:05 hivemetastore-report.json
-rw-r--r--. 1 hive hadoop 2756 Sep 2 19:05 hiveserver2-report.json
-rw-r--r--. 1 hive hadoop 636678 Sep 2 23:58 hiveserver2.log.2017-09-02
-rw-r--r--. 1 hive hadoop 1127874 Sep 2 23:59 hivemetastore.log.2017-09-02
-rw-r--r--. 1 hive hadoop 2369407 Sep 3 23:58 hiveserver2.log.2017-09-03
...
Created 10-30-2017 11:42 AM
Can you please check your Ambari Server UI to see if by any chance hive-log4j has the following line commented out? (If yes, please try to uncomment it.)
Ambari UI --> Hive --> Configs --> Advanced --> "Advanced hive-log4j"
# 30-day backup
#log4j.appender.DRFA.MaxBackupIndex= {{hive_log_maxbackupindex}}
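After uncommenting, the snippet should look like this ({{hive_log_maxbackupindex}} is a placeholder that Ambari substitutes with the hive_log_maxbackupindex value when it writes the file out):
# 30-day backup
log4j.appender.DRFA.MaxBackupIndex= {{hive_log_maxbackupindex}}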
Once you uncomment the above section, restart the dependent services that show "Restart Required" in the Ambari UI, and then verify on the Hive server host that the change is reflected properly.
Example:
# grep 'MaxBackupIndex' /etc/hive/conf/hive-log4j.properties
log4j.appender.DRFA.MaxBackupIndex= 30
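One extra sanity check (a suggestion, not something verified in this thread): make sure the running daemons were actually started after the config change, since a process started earlier keeps the old log4j settings in memory. Compare the process start times against the time of the change:
# ps -eo pid,lstart,cmd | grep -e hiveserver2 -e hivemetastore | grep -v grep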
Created 10-30-2017 11:57 AM
Yes, the parameter is log4j.appender.DRFA.MaxBackupIndex=30, and we have already restarted the Hive service, but the files under /var/log/hive are still not deleted. What other checks do we need to do here? And how often does the process that deletes the files run?
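One point worth checking that this thread does not confirm: in log4j 1.x, MaxBackupIndex is only honored by the size-based org.apache.log4j.RollingFileAppender, and old backups are deleted at the moment a file rolls over; there is no separate cleanup schedule. The org.apache.log4j.DailyRollingFileAppender, which produces date-suffixed backups exactly like hiveserver2.log.2017-09-02 above, rolls by date and never deletes old files. You can see which class the DRFA appender is mapped to on the Hive host:
# grep 'log4j.appender.DRFA=' /etc/hive/conf/hive-log4j.properties
If it reports org.apache.log4j.DailyRollingFileAppender, the dated files will keep accumulating no matter what MaxBackupIndex is set to, and an external cleanup is needed. A minimal cron sketch (the 30-day retention and the file pattern are assumptions, adjust as needed):
# remove date-suffixed Hive log backups older than 30 days, every night at 01:00
0 1 * * * find /var/log/hive -name '*.log.????-??-??' -mtime +30 -delete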