Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1922 | 06-15-2020 05:23 AM |
| | 15491 | 01-30-2020 08:04 PM |
| | 2074 | 07-07-2019 09:06 PM |
| | 8122 | 01-27-2018 10:17 PM |
| | 4575 | 12-31-2017 10:12 PM |
08-28-2018
04:11 PM
@Michael Bronson Can you please select the correct answer and close this thread?
10-16-2018
01:30 PM
I have a similar question: how do I bring down the size of the audit logs inside HDFS? I have the YARN plugin enabled for Ranger but no policies defined, yet the daily Ranger audit logs saved under /ranger/audit/yarn/ average around 12 GB. I changed the log level from DEBUG to INFO, but that did not help. Where and how can I make changes to advanced-yarn-log4j? I already referred to https://community.hortonworks.com/articles/8882/how-to-control-size-of-log-files-for-various-hdp-c.html but did not find it useful, as it has no configuration for the advanced-yarn-log4j properties. Again, this is about the YARN audit logs stored in HDFS, not local log files.
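As a stopgap I am considering pruning old audit directories myself. A rough sketch is below; it assumes the usual layout where Ranger writes HDFS audits into date-named YYYYMMDD subdirectories under /ranger/audit/yarn/, which you should verify on your own cluster first. I would still prefer a proper log4j/Ranger setting:

```bash
#!/bin/bash
# Rough stopgap sketch (not a Ranger feature): prune YARN audit directories
# in HDFS older than RETENTION_DAYS. Assumes Ranger writes audits into
# date-named subdirectories like /ranger/audit/yarn/20181016 -- verify the
# layout and path on your cluster before running.
AUDIT_DIR=/ranger/audit/yarn
RETENTION_DAYS=30
CUTOFF=$(date -d "-${RETENTION_DAYS} days" +%Y%m%d)

for d in $(hdfs dfs -ls "${AUDIT_DIR}" 2>/dev/null | awk '{print $NF}'); do
  day=$(basename "$d")
  # Only touch entries that look like a YYYYMMDD date older than the cutoff.
  if [[ "$day" =~ ^[0-9]{8}$ ]] && [[ "$day" -lt "$CUTOFF" ]]; then
    hdfs dfs -rm -r -skipTrash "$d"
  fi
done
```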
08-27-2018
09:36 AM
1 Kudo
@Michael Bronson Yes, it can be done.
08-19-2018
10:05 AM
1 Kudo
@Michael Bronson We see https://bz.apache.org/bugzilla/show_bug.cgi?id=36384, which says that "Configuring triggering/rolling policies should be supported through properties". Hence you will need to make sure that you are using the log4j JAR of version "log4j-1.2.17.jar" (instead of "log4j-1.2.15.jar"). Make sure that your AMS collector is not using the old version of log4j:

```bash
# mv /usr/lib/ambari-metrics-collector/log4j-1.2.15.jar /tmp/
# cp -f /usr/lib/ams-hbase/lib/log4j-1.2.17.jar /usr/lib/ambari-metrics-collector/
```

Also make sure to copy the "log4j-extras-1.2.17.jar":

```bash
# cp -f /tmp/log4j_extras/apache-log4j-extras-1.2.17/apache-log4j-extras-1.2.17.jar /usr/lib/ambari-metrics-collector/
```

Now edit the "ams-log4j" via Ambari as follows: Ambari UI --> Ambari Metrics --> Configs --> Advanced --> "Advanced ams-log4j" --> ams-log4j template (text area).

OLD value:

```
# Direct log messages to a log file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=${ams.log.dir}/${ams.log.file}
log4j.appender.file.MaxFileSize={{ams_log_max_backup_size}}MB
log4j.appender.file.MaxBackupIndex={{ams_log_number_of_backup_files}}
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```

CHANGED value:

```
log4j.appender.file=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.file.rollingPolicy=org.apache.log4j.rolling.FixedWindowRollingPolicy
log4j.appender.file.rollingPolicy.maxIndex={{ams_log_number_of_backup_files}}
log4j.appender.file.rollingPolicy.ActiveFileName=${ams.log.dir}/${ams.log.file}
log4j.appender.file.rollingPolicy.FileNamePattern=${ams.log.dir}/${ams.log.file}-%i.gz
log4j.appender.file.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.file.triggeringPolicy.MaxFileSize=1048576
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
```

Notice: here I am hard-coding the value of the property "log4j.appender.file.triggeringPolicy.MaxFileSize" to "1048576" (around 1 MB) for testing, because the triggering policy does not accept values in KB/MB format, so the value must be given in bytes. You can define your own value there.
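After restarting Ambari Metrics Collector from the Ambari UI, you can check that the rotation works. A quick check, assuming the default AMS log location (adjust the path if your ams.log.dir is customized):

```bash
# Assumes the default AMS log directory; adjust if ams.log.dir differs.
ls -lh /var/log/ambari-metrics-collector/
# With the ~1 MB test threshold you should see the active log plus gzipped
# backups such as ambari-metrics-collector.log-1.gz once it rolls.
```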
08-16-2018
03:11 PM
1 Kudo
@Michael Bronson I think you can use Chaos Monkey, which does all this stuff (it works at the hardware/network level, not directly at the level of the Hadoop components). Do check this quick video demo: https://www.youtube.com/watch?v=AnSrpk_thDE
08-15-2018
08:53 PM
@Michael Bronson That's good - please mark the correct answer.
08-15-2018
11:31 AM
Hi @Michael Bronson, yeah, that's exactly what I did.
08-15-2018
04:28 PM
@Michael Bronson Can you please mark the correct answer if you are satisfied with my answer?
08-14-2018
06:01 PM
@Michael Bronson Yes, as of now Ambari 2.7 is the latest version, and it is certified for use with HDP 3.0. If this answers your question, then please mark it as the correct answer.
08-13-2018
10:36 PM
@Michael Bronson For Kafka, I guess GC logging is not even enabled, hence you can ignore that part. But you can set it as below, in the kafka-env section. From:

```bash
export KAFKA_KERBEROS_PARAMS="-Djavax.security.auth.useSubjectCredsOnly=false"
```

to:

```bash
export KAFKA_KERBEROS_PARAMS="-Djavax.security.auth.useSubjectCredsOnly=false -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=2M"
```

If you are satisfied with the answers, then please mark my comment as the correct answer; this will help others as well.
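One caveat worth hedging: on JDK 8 the -XX:+UseGCLogFileRotation and related rotation flags only take effect when a GC log file is specified with -Xloggc. The path below is an assumed example for illustration, not a guaranteed HDP default; point it at your actual Kafka log directory:

```bash
# Sketch only: the -Xloggc path is an assumption, adjust for your cluster.
# The JVM ignores the rotation flags unless -Xloggc is present.
export KAFKA_KERBEROS_PARAMS="-Djavax.security.auth.useSubjectCredsOnly=false \
  -Xloggc:/var/log/kafka/kafkaServer-gc.log \
  -XX:+PrintGCTimeStamps -XX:+UseGCLogFileRotation \
  -XX:NumberOfGCLogFiles=5 -XX:GCLogFileSize=2M"
```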