Member since: 08-08-2017
Posts: 1652
Kudos Received: 30
Solutions: 11

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 2003 | 06-15-2020 05:23 AM |
|  | 16496 | 01-30-2020 08:04 PM |
|  | 2155 | 07-07-2019 09:06 PM |
|  | 8369 | 01-27-2018 10:17 PM |
|  | 4744 | 12-31-2017 10:12 PM |
09-03-2018
04:06 PM
@Jonathan Sneep thank you so much
06-12-2019
08:20 PM
Is there any way to restart an ABORTED or FAILED request?
09-01-2018
01:27 AM
Nagios / OpsView / Sensu are popular options I've seen.
StatsD / CollectD / MetricBeat are daemon metric collectors that run on each server (MetricBeat is somewhat tied to an Elasticsearch cluster, though).
Prometheus is a popular option nowadays that would scrape metrics exposed by a local service.
I have played around a bit with netdata, though I'm not sure if it can be applied to Hadoop monitoring use cases.
DataDog is a vendor that offers lots of integrations, such as Hadoop, YARN, Kafka, ZooKeeper, etc.
Realistically, you need some JMX + system monitoring tool, and a bunch exist.
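Whichever collector you pick, the underlying Hadoop data usually comes from the `/jmx` HTTP endpoint each daemon exposes on its web UI port. A minimal sketch of building the scrape URL (host and port here are placeholders; adjust to your NameNode, typically 9870 on Hadoop 3.x or 50070 on 2.x):

```shell
# Hypothetical NameNode host/port -- adjust to your cluster.
NN_HOST="namenode.example.com"
NN_PORT=9870

# Build the URL for Hadoop's built-in HTTP JMX servlet for a given bean pattern.
jmx_url() {
  echo "http://${NN_HOST}:${NN_PORT}/jmx?qry=$1"
}

# On a live cluster you would scrape it with, e.g.:
#   curl -s "$(jmx_url 'Hadoop:service=NameNode,name=FSNamesystem')"
```

Most of the tools above (Prometheus JMX exporter, DataDog's Hadoop integration, etc.) are ultimately collecting these same MBean values.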
08-28-2018
12:32 PM
@Jay We run the service check, but it fails with a Python timeout. Is there any other way to increase the logging? Second, where do the yarn-yarn-resourcemanager-.... files come from? They are not configured in log4j, so I don't understand how they are created:

-rw-r--r-- 1 yarn hadoop 1847 Aug 27 12:03 yarn-yarn-resourcemanager-master02.sys76.com.out.1
-rw-r--r-- 1 yarn hadoop 1052 Aug 27 12:05 yarn-yarn-resourcemanager-master02.sys76.com.log.10
-rw-r--r-- 1 yarn hadoop 1180 Aug 27 12:05 yarn-yarn-resourcemanager-master02.sys76.com.log.9
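The `.out` files in the listing above are not written through log4j at all: the daemon start scripts (`hadoop-daemon.sh` / `yarn-daemon.sh`) rotate the previous `.out` files and then redirect the new process's stdout/stderr into a fresh one. A simplified sketch of that rotation logic (my own approximation, not the actual script):

```shell
# Sketch of the .out rotation done by the Hadoop/YARN daemon start scripts
# before each restart -- this is why .out.1, .out.2, ... appear even though
# log4j knows nothing about them.
rotate_out() {
  local log=$1 num=5 prev
  while [ "$num" -gt 1 ]; do
    prev=$((num - 1))
    [ -f "$log.$prev" ] && mv -f "$log.$prev" "$log.$num"
    num=$prev
  done
  [ -f "$log" ] && mv -f "$log" "$log.1"
}

# After rotating, the real script starts the daemon with
#   nohup ... >> "$log" 2>&1 &
# so anything the JVM prints directly lands in the new .out file.
```

The `.log` files, by contrast, are the ones controlled by the log4j appender settings.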
08-28-2018
04:11 PM
@Michael Bronson Can you please select the correct answer and close this thread?
10-16-2018
01:30 PM
I have a similar question: how do I bring down the size of the audit logs inside HDFS? I have the YARN plugin enabled for Ranger but no policies defined, and the daily log size being generated is around 12 GB. I changed the log level from DEBUG to INFO, but that didn't help. Where and how can I make the changes to advanced-yarn-log4j? I have already referred to https://community.hortonworks.com/articles/8882/how-to-control-size-of-log-files-for-various-hdp-c.html, but didn't find it useful, as there is no configuration there for the advanced-yarn-log4j properties. Again, these are the Ranger audit logs for YARN saved in HDFS under /ranger/audit/yarn/, and we have no policies defined for YARN.
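For a question like the one above, it helps to first measure which audit directories are actually growing before changing any settings. A small helper sketch (the path comes from the question; the awk helper is mine) that sums the byte counts printed by `hdfs dfs -du`:

```shell
# Sum the first column (bytes) of `hdfs dfs -du` output with awk.
sum_du() {
  awk '{ total += $1 } END { print total }'
}

# On a live cluster (path taken from the question above):
#   hdfs dfs -du /ranger/audit/yarn            # per-subdirectory sizes
#   hdfs dfs -du /ranger/audit/yarn | sum_du   # total bytes
```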
08-27-2018
09:36 AM
1 Kudo
@Michael Bronson Yes, it can be done.
08-19-2018
10:05 AM
1 Kudo
@Michael Bronson We see https://bz.apache.org/bugzilla/show_bug.cgi?id=36384, which says that "Configuring triggering/rolling policies should be supported through properties". Hence you will need to make sure that you are using the log4j JAR of version "log4j-1.2.17.jar" (instead of "log4j-1.2.15.jar"). First make sure that your AMS collector is not using the old version of log4j:

# mv /usr/lib/ambari-metrics-collector/log4j-1.2.15.jar /tmp/
# cp -f /usr/lib/ams-hbase/lib/log4j-1.2.17.jar /usr/lib/ambari-metrics-collector/

Also make sure to copy the "log4j-extras-1.2.17.jar":

# cp -f /tmp/log4j_extras/apache-log4j-extras-1.2.17/apache-log4j-extras-1.2.17.jar /usr/lib/ambari-metrics-collector/

Now edit "ams-log4j" via Ambari as follows: Ambari UI --> Ambari Metrics --> Configs --> Advanced --> "Advanced ams-log4j" --> ams-log4j template (text area).

OLD value:

# Direct log messages to a log file
log4j.appender.file=org.apache.log4j.RollingFileAppender
log4j.appender.file.File=${ams.log.dir}/${ams.log.file}
log4j.appender.file.MaxFileSize={{ams_log_max_backup_size}}MB
log4j.appender.file.MaxBackupIndex={{ams_log_number_of_backup_files}}
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

CHANGED value:

log4j.appender.file=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.file.rollingPolicy=org.apache.log4j.rolling.FixedWindowRollingPolicy
log4j.appender.file.rollingPolicy.maxIndex={{ams_log_number_of_backup_files}}
log4j.appender.file.rollingPolicy.ActiveFileName=${ams.log.dir}/${ams.log.file}
log4j.appender.file.rollingPolicy.FileNamePattern=${ams.log.dir}/${ams.log.file}-%i.gz
log4j.appender.file.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.file.triggeringPolicy.MaxFileSize=1048576
log4j.appender.file.layout=org.apache.log4j.PatternLayout
log4j.appender.file.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n

Notice: here I am hard-coding the value of the property "log4j.appender.file.triggeringPolicy.MaxFileSize" to "1048576" (around 1 MB) for testing, because the triggering policy does not accept values in KB/MB format, hence I am putting the value in bytes. You can define your own value there.
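After a jar swap like the one described above, it's worth confirming which log4j jars the collector would actually load from its classpath. A small helper sketch (the grep pattern is mine; adjust as needed):

```shell
# List the log4j jars in a colon-separated classpath, one per line.
log4j_jars() {
  tr ':' '\n' | grep -E 'log4j[^/]*\.jar$' | sort -u
}

# On a live host you could feed it the collector's classpath, e.g.:
#   echo "$CLASSPATH" | log4j_jars
```

If `log4j-1.2.15.jar` still shows up here, the old jar is still on the classpath and the rolling-policy properties will be ignored.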
08-16-2018
03:11 PM
1 Kudo
@Michael Bronson I think you can use Chaos Monkey, which does all this stuff (this will be at the hardware/network level, but not at the Hadoop component level directly). Do check this quick video demo: https://www.youtube.com/watch?v=AnSrpk_thDE
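Chaos Monkey targets whole instances; if you want the same idea at the Hadoop-component level, a few lines of bash get you most of the way. A hypothetical sketch (service names and the kill step are placeholders; only ever run something like this on a disposable test cluster):

```shell
# Component-level "chaos" sketch: pick a random Hadoop daemon name.
# Service names are examples only -- never run kill logic on production.
pick_victim() {
  shift $(( RANDOM % $# ))   # bash-specific: RANDOM
  echo "$1"
}

victim=$(pick_victim datanode nodemanager regionserver)
echo "would kill: $victim"

# On a disposable test cluster you might then do something like:
#   pkill -f "$victim"    # and let Ambari auto-start recover the daemon
```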
08-15-2018
08:53 PM
@Michael Bronson That's good - please mark the correct answer.