Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1944 | 06-15-2020 05:23 AM |
| | 15822 | 01-30-2020 08:04 PM |
| | 2093 | 07-07-2019 09:06 PM |
| | 8175 | 01-27-2018 10:17 PM |
| | 4639 | 12-31-2017 10:12 PM |
08-15-2018
07:05 PM
Do you mean that I need to replace these lines:

log4j.appender.DRFA=org.apache.log4j.RollingFileAppender
log4j.appender.DRFA.rollingPolicy.FileNamePattern=${hive.log.dir}/${hive.log.file}-.%i.log.zip
log4j.appender.DRFA.MaxBackupIndex=10
log4j.appender.DRFA.MaxFileSize=1KB

with the following?

log4j.appender.DRFA=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.rollingPolicy=org.apache.log4j.rolling.FixedWindowRollingPolicy
log4j.appender.DRFA.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.DRFA.rollingPolicy.ActiveFileName=${hive.log.dir}/${hive.log.file}.log
log4j.appender.DRFA.rollingPolicy.FileNamePattern=${hive.log.dir}/${hive.log.file}-.%i.log.gz
log4j.appender.DRFA.triggeringPolicy.MaxFileSize=10000
log4j.appender.DRFA.rollingPolicy.maxIndex=10
08-15-2018
06:34 PM
@Jay can you help me regarding my last notes?
08-15-2018
02:31 PM
I also configured the following (without the date pattern), but still no zip:
log4j.appender.DRFA=org.apache.log4j.RollingFileAppender
log4j.appender.DRFA.rollingPolicy.FileNamePattern=${hive.log.dir}/${hive.log.file}-.%i.log.zip
log4j.appender.DRFA.MaxBackupIndex=10
log4j.appender.DRFA.MaxFileSize=1KB
08-15-2018
02:08 PM
Hi Jay, we configured the following, but the rotated files are still not zipped. What is wrong in my log4j?
hive.log.threshold=ALL
hive.root.logger=INFO,DRFA
hive.log.dir=${java.io.tmpdir}/${user.name}
hive.log.file=hive.log
# Define the root logger to the system property "hadoop.root.logger".
log4j.rootLogger=${hive.root.logger}, EventCounter
# Logging Threshold
log4j.threshold=${hive.log.threshold}
#
# Daily Rolling File Appender
#
# Use the PidDailyerRollingFileAppend class instead if you want to use separate log files
# for different CLI session.
#
# log4j.appender.DRFA=org.apache.hadoop.hive.ql.log.PidDailyRollingFileAppender
#log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hive.log.dir}/${hive.log.file}
# Rollver at midnight
#log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA=org.apache.log4j.RollingFileAppender
log4j.appender.DRFA.MaxBackupIndex=10
log4j.appender.DRFA.MaxFileSize=1KB
log4j.appender.DRFA.rollingPolicy.FileNamePattern=${hive.log.dir}/${hive.log.file}-.%d{yyyyMMdd}.log.gz
08-15-2018
01:22 PM
Hi all, we configured Hive log4j with RollingFileAppender:
log4j.appender.DRFA=org.apache.log4j.RollingFileAppender
log4j.appender.DRFA.MaxBackupIndex=10
log4j.appender.DRFA.MaxFileSize=1KB
Full details:
# Define some default values that can be overridden by system properties
hive.log.threshold=ALL
hive.root.logger=INFO,DRFA
hive.log.dir=${java.io.tmpdir}/${user.name}
hive.log.file=hive.log
# Define the root logger to the system property "hadoop.root.logger".
log4j.rootLogger=${hive.root.logger}, EventCounter
# Logging Threshold
log4j.threshold=${hive.log.threshold}
#
# Daily Rolling File Appender
#
# Use the PidDailyerRollingFileAppend class instead if you want to use separate log files
# for different CLI session.
#
# log4j.appender.DRFA=org.apache.hadoop.hive.ql.log.PidDailyRollingFileAppender
#log4j.appender.DRFA=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFA.File=${hive.log.dir}/${hive.log.file}
# Rollver at midnight
#log4j.appender.DRFA.DatePattern=.yyyy-MM-dd
log4j.appender.DRFA=org.apache.log4j.RollingFileAppender
log4j.appender.DRFA.MaxBackupIndex=10
log4j.appender.DRFA.MaxFileSize=1KB
# 30-day backup
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
# Pattern format: Date LogLevel LoggerName LogMessage
#log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
# Debugging Pattern format
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t]: %c{2} (%F:%M(%L)) - %m%n
The logs from the machine are:
-rw-r--r-- 1 hive hadoop 1113 Aug 15 13:10 hivemetastore.log.10
-rw-r--r-- 1 hive hadoop 1028 Aug 15 13:10 hivemetastore.log.9
-rw-r--r-- 1 hive hadoop 1070 Aug 15 13:11 hivemetastore.log.8
-rw-r--r-- 1 hive hadoop 1239 Aug 15 13:12 hiveserver2.log.10
-rw-r--r-- 1 hive hadoop 1154 Aug 15 13:13 hivemetastore.log.7
-rw-r--r-- 1 hive hadoop 1133 Aug 15 13:13 hivemetastore.log.6
-rw-r--r-- 1 hive hadoop 1055 Aug 15 13:15 hiveserver2.log.9
-rw-r--r-- 1 hive hadoop 1203 Aug 15 13:15 hiveserver2.log.8
-rw-r--r-- 1 hive hadoop 1098 Aug 15 13:15 hiveserver2.log.7
-rw-r--r-- 1 hive hadoop 1028 Aug 15 13:15 hiveserver2.log.6
-rw-r--r-- 1 hive hadoop 1239 Aug 15 13:15 hiveserver2.log.5
-rw-r--r-- 1 hive hadoop 1113 Aug 15 13:16 hivemetastore.log.5
-rw-r--r-- 1 hive hadoop 1028 Aug 15 13:16 hivemetastore.log.4
-rw-r--r-- 1 hive hadoop 1070 Aug 15 13:16 hivemetastore.log.3
-rw-r--r-- 1 hive hadoop 1048 Aug 15 13:18 hiveserver2.log.4
-rw-r--r-- 1 hive hadoop 1173 Aug 15 13:18 hiveserver2.log.3
-rw-r--r-- 1 hive hadoop 1157 Aug 15 13:18 hiveserver2.log.2
-rw-r--r-- 1 hive hadoop 1239 Aug 15 13:18 hiveserver2.log.1
-rw-r--r-- 1 hive hadoop 503 Aug 15 13:18 hiveserver2.log
-rw-r--r-- 1 hive hadoop 1154 Aug 15 13:19 hivemetastore.log.2
-rw-r--r-- 1 hive hadoop 1133 Aug 15 13:19 hivemetastore.log.1
-rw-r--r-- 1 hive hadoop 292 Aug 15 13:19 hivemetastore.log
-rw-r--r-- 1 hive hadoop 4904 Aug 15 13:20 hivemetastore-report.json.tmp
-rw-r--r-- 1 hive hadoop 4273 Aug 15 13:20 hiveserver2-report.json.tmp
My question is: all the rotated logs, such as hiveserver2.log.1, hiveserver2.log.2, etc., are not zipped. What change do I need to make in log4j in order to zip the rotated files?
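For reference: log4j 1.x's built-in org.apache.log4j.RollingFileAppender cannot compress rotated files at all; compression requires the rolling appender from the apache-log4j-extras companion jar, whose FixedWindowRollingPolicy compresses each rolled file when its FileNamePattern ends in .gz or .zip. A minimal sketch of such a configuration, assuming the extras jar is on Hive's classpath (the MaxFileSize value here is an illustrative choice):

```properties
# Requires apache-log4j-extras on the classpath; the stock
# RollingFileAppender ignores the rollingPolicy.* keys entirely.
log4j.appender.DRFA=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.DRFA.rollingPolicy=org.apache.log4j.rolling.FixedWindowRollingPolicy
log4j.appender.DRFA.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
# Roll when the active file exceeds this many bytes (10 KB here, as an example)
log4j.appender.DRFA.triggeringPolicy.MaxFileSize=10240
log4j.appender.DRFA.rollingPolicy.ActiveFileName=${hive.log.dir}/${hive.log.file}
# The .gz suffix is what makes the policy compress each rolled file
log4j.appender.DRFA.rollingPolicy.FileNamePattern=${hive.log.dir}/${hive.log.file}.%i.gz
log4j.appender.DRFA.rollingPolicy.maxIndex=10
log4j.appender.DRFA.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFA.layout.ConversionPattern=%d{ISO8601} %-5p [%t]: %c{2} (%F:%M(%L)) - %m%n
```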
08-15-2018
09:56 AM
can I add the syntax on the end of the line - so the final line should be like this ? export AMS_COLLECTOR_GC_OPTS="-XX:+UseConcMarkSweepGC -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:{{ams_collector_log_dir}}/collector-gc.log-`date +'%Y%m%d%H%M'` -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=4K"
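One detail worth noting: with -XX:+UseGCLogFileRotation the JVM appends its own rotation index (.0, .1, ...) to the file named by -Xloggc, while the `date` suffix in the line above creates a brand-new base name on every restart. A small shell sketch (all variable names here are hypothetical) that assembles such an option string and sanity-checks the rotation flags:

```shell
#!/bin/sh
# Hypothetical assembly of GC-log options with rotation enabled.
GC_LOG_DIR=/tmp/ams-gc-demo          # assumed demo directory, not the real AMS path
mkdir -p "$GC_LOG_DIR"

ROTATION_OPTS="-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=4K"
GC_OPTS="-verbose:gc -XX:+PrintGCDetails -Xloggc:${GC_LOG_DIR}/collector-gc.log ${ROTATION_OPTS}"

# Confirm the rotation flags made it into the final string.
echo "$GC_OPTS" | grep -q 'NumberOfGCLogFiles=5' && echo "rotation configured"
```

With rotation on, the JVM itself caps the file count at NumberOfGCLogFiles, so old numbered files are reused instead of accumulating.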
08-15-2018
07:59 AM
Under ambari-metrics-collector we have many files such as collector-gc.log. How can we limit the number of these files to 5 max? As far as I know, we can set the following syntax in the Ambari configuration (I guess in ams-env): -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=4K. But I am not sure which variable in ams-env we should set it on. /var/log/ambari-metrics-collector:
4.0K collector-gc.log-201804251456
4.0K collector-gc.log-201804251517
4.0K collector-gc.log-201805011453
4.0K collector-gc.log-201805031000
4.0K collector-gc.log-201805061332
4.0K collector-gc.log-201805071101
4.0K collector-gc.log-201805211039
4.0K collector-gc.log-201806031253
4.0K collector-gc.log-201806041646
4.0K collector-gc.log-201806131159
4.0K collector-gc.log-201806131204
4.0K collector-gc.log-201806131219
4.0K collector-gc.log-201806240625
4.0K collector-gc.log-201806241235
4.0K collector-gc.log-201806241409
4.0K collector-gc.log-201806280803
4.0K collector-gc.log-201807160859
4.0K collector-gc.log-201807160908
4.0K collector-gc.log-201807160913
4.0K collector-gc.log-201807160930
4.0K collector-gc.log-201807160934
4.0K collector-gc.log-201807160937
4.0K collector-gc.log-201807161112
4.0K collector-gc.log-201807161114
4.0K collector-gc.log-201807161118
4.0K collector-gc.log-201807161130
4.0K collector-gc.log-201807161132
4.0K collector-gc.log-201807161137
4.0K collector-gc.log-201807161147
4.0K collector-gc.log-201807161152
4.0K collector-gc.log-201807161201
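For what it's worth, in the ams-env template the rotation flags would plausibly be appended to the collector's GC options variable; a sketch of that placement, assuming the variable name and Jinja placeholder used elsewhere in this thread (this is not an official recommendation):

```shell
# ams-env template fragment (hypothetical placement of the rotation flags)
export AMS_COLLECTOR_GC_OPTS="-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps \
  -Xloggc:{{ams_collector_log_dir}}/collector-gc.log-`date +'%Y%m%d%H%M'` \
  -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=4K"
```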
Labels: Apache Ambari
08-14-2018
09:40 PM
In other words, you recommend creating a new folder on each machine?
08-14-2018
07:50 PM
So what are the other places that I can use?
08-14-2018
07:37 PM
Hi all, we recently developed a couple of shell scripts for cluster maintenance (cleaning logs, collecting info, API scripts, etc.), and the scripts must be located on every Linux machine (worker machines, data node machines, name node machines). My question is: where should we locate these scripts on each machine so that they are safe during an HDP upgrade and do not cause conflicts? For example, we checked and found that each machine has the following empty folder: /var/lib/ambari-agent/lib/, and we are thinking of putting all our scripts under /var/lib/ambari-agent/lib. What is Hortonworks' opinion about this, and what is the best location for our scripts?
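One common convention (an assumption on my part, not an official Hortonworks recommendation) is a dedicated directory outside any package-managed path, so an HDP or Ambari upgrade cannot overwrite or delete it. A minimal sketch, with a hypothetical directory name and placeholder script:

```shell
#!/bin/sh
# Hypothetical layout: keep custom maintenance scripts outside
# package-managed paths so upgrades do not touch them.
SCRIPT_DIR=/opt/cluster-scripts      # assumed path, not managed by HDP/Ambari
mkdir -p "$SCRIPT_DIR"

# Placeholder maintenance script; the real ones would go here.
cat > "$SCRIPT_DIR/clean_logs.sh" <<'EOF'
#!/bin/sh
echo "cleaning logs (placeholder)"
EOF
chmod 755 "$SCRIPT_DIR/clean_logs.sh"

"$SCRIPT_DIR/clean_logs.sh"
# prints: cleaning logs (placeholder)
```

By contrast, directories under /var/lib/ambari-agent belong to the ambari-agent package, so files placed there risk being touched by agent upgrades or reinstalls.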