Member since: 03-21-2016
Posts: 38
Kudos Received: 5
Solutions: 4
My Accepted Solutions
Views | Posted
---|---
17340 | 01-29-2017 09:04 AM
1975 | 01-04-2017 08:19 AM
1079 | 10-12-2016 05:36 AM
2000 | 10-12-2016 05:26 AM
06-01-2017
10:17 AM
@English Rose Yes, I confirm that the settings I mentioned are working properly.
01-30-2017
10:42 AM
@David Streever @Daniel Kozlowski Since my NameNodes are in HA mode, I have to provide the HA nameservice in the xasecure.audit.destination.hdfs property, i.e. hdfs://cluster-nameservice:8020/ranger/audit. I also added a new property, xasecure.audit.destination.hdfs.batch.filespool.archive.dir=/var/log/hadoop/hdfs/audit/hdfs/spool/archive. The logs now land in the archive folder, but the files are very big:
[root@meybgdlpmst3] # ls -lh /var/log/hadoop/hdfs/audit/hdfs/spool/archive
total 14G
-rw-r--r-- 1 hdfs hadoop 6.0G Jan 29 03:44 spool_hdfs_20170128-0344.33.log
-rw-r--r-- 1 hdfs hadoop 7.9G Jan 30 03:44 spool_hdfs_20170129-0344.37.log
Is there any property that compresses these log files automatically? The logs grow every day, and this stream will eventually fill the disk. Thanks, Rahul
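For reference, a sketch of how these settings might look under the ranger-hdfs-audit section; the exact key for the audit destination directory (xasecure.audit.destination.hdfs.dir below) is an assumption and may differ by Ranger/HDP version:
# assumed key name for the destination directory; verify against your Ranger version
xasecure.audit.destination.hdfs.dir=hdfs://cluster-nameservice:8020/ranger/audit
xasecure.audit.destination.hdfs.batch.filespool.archive.dir=/var/log/hadoop/hdfs/audit/hdfs/spool/archive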
01-29-2017
09:04 AM
Hi Team and @Sagar Shimpi, the steps below helped me resolve the issue.
1) As I am using HDP-2.4.2, I downloaded the log4j-extras jar from http://www.apache.org/dyn/closer.cgi/logging/log4j/extras/1.2.17/apache-log4j-extras-1.2.17-bin.tar.gz
2) Extract the tar file and copy apache-log4j-extras-1.2.17.jar to /usr/hdp/<version>/hadoop-hdfs/lib on ALL cluster nodes. Note: apache-log4j-extras-1.2.17.jar can also be found in /usr/hdp/<version>/hive/lib; I discovered that later.
3) Edit the Advanced hdfs-log4j properties from Ambari and replace the default hdfs-audit log4j properties with:
hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.DRFAAUDIT=org.apache.log4j.rolling.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c: %m%n
log4j.appender.DRFAAUDIT.rollingPolicy=org.apache.log4j.rolling.FixedWindowRollingPolicy
log4j.appender.DRFAAUDIT.rollingPolicy.maxIndex=30
log4j.appender.DRFAAUDIT.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.DRFAAUDIT.triggeringPolicy.MaxFileSize=16106127360
## 16106127360 bytes = 15 GB ##
log4j.appender.DRFAAUDIT.rollingPolicy.ActiveFileName=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.rollingPolicy.FileNamePattern=${hadoop.log.dir}/hdfs-audit-%i.log.gz
The rolled hdfs-audit log files now appear as .gz archives:
-rw-r--r-- 1 hdfs hadoop 384M Jan 28 23:51 hdfs-audit-2.log.gz
-rw-r--r-- 1 hdfs hadoop 347M Jan 29 07:40 hdfs-audit-1.log.gz
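A quick, generic way (not from the original post) to confirm the rolled files are valid gzip archives and still readable:
# gzip -t returns non-zero if the archive is corrupt
gzip -t hdfs-audit-1.log.gz && echo OK
# peek at the first few audit records without decompressing to disk
zcat hdfs-audit-1.log.gz | head -n 3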
01-27-2017
07:17 AM
Hi Team, the HDFS audit spool log files land directly in the /var/log/hadoop/hdfs/audit/hdfs/spool directory.
[root@meybgdlpmst3] # pwd
/var/log/hadoop/hdfs/audit/hdfs/spool
[root@meybgdlpmst3(172.23.34.6)] # ls -lh
total 20G
drwxr-xr-x 2 hdfs hadoop 4.0K Jan 7 06:57 archive
-rw-r--r-- 1 hdfs hadoop 23K Jan 26 14:30 index_batch_batch.hdfs_hdfs_closed.json
-rw-r--r-- 1 hdfs hadoop 6.1K Jan 27 11:05 index_batch_batch.hdfs_hdfs.json
-rw-r--r-- 1 hdfs hadoop 7.8G Jan 25 03:43 spool_hdfs_20170124-0343.41.log
-rw-r--r-- 1 hdfs hadoop 6.6G Jan 26 03:43 spool_hdfs_20170125-0343.43.log
-rw-r--r-- 1 hdfs hadoop 3.9G Jan 27 03:44 spool_hdfs_20170126-0344.05.log
-rw-r--r-- 1 hdfs hadoop 1.6G Jan 27 11:05 spool_hdfs_20170127-0344.22.log
[root@meybgdlpmst3] # ll archive/
total 0
Attached are the spool directory settings configured under the ranger-hdfs-audit section, but the log files still do not move into the archive folder and therefore consume too much disk space. Is there any additional configuration that needs to be done? Any help will be highly appreciated. Thanks, Rahul
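For context, a typical spool setting under ranger-hdfs-audit would look roughly like the line below (a sketch assuming the standard Ranger filespool key; the original attachment is not reproduced here):
# assumed standard key name; verify against your Ranger version
xasecure.audit.destination.hdfs.batch.filespool.dir=/var/log/hadoop/hdfs/audit/hdfs/spool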
Labels:
- Apache Hadoop
01-20-2017
09:39 AM
@Sagar Shimpi I checked the links above, but I could not find how to compress the log file automatically once it reaches the specified MaxFileSize. I need the log files compressed and kept for up to 30 days, after which they should be deleted automatically. What additional properties do I need to add so that the hdfs-audit logs are rolled as .gz files? At present my properties are set as:
hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.DRFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.DRFAAUDIT.MaxFileSize=1GB
log4j.appender.DRFAAUDIT.MaxBackupIndex=30
01-19-2017
02:40 PM
Hi Team, I want to rotate and archive (as .gz) the hdfs-audit log files based on size, but after the file reaches 350KB it is not getting archived. The properties I have set in hdfs-log4j are:
hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
#log4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAAUDIT.triggeringPolicy=org.apache.log4j.rolling.SizeBasedTriggeringPolicy
log4j.appender.DRFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.rollingPolicy.FileNamePattern=hdfs-audit-%d{yyyy-MM}.gz
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.DRFAAUDIT.MaxFileSize=350KB
log4j.appender.DRFAAUDIT.MaxBackupIndex=9
Any help will be highly appreciated.
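One likely reason the roll never triggers (my reading, not stated in the thread): the configuration mixes the standard org.apache.log4j.RollingFileAppender with rollingPolicy/triggeringPolicy keys that belong to the log4j-extras org.apache.log4j.rolling.RollingFileAppender, so the size trigger is ignored. A minimal size-based sketch using only the standard appender's own keys (no gzip compression, which is what the log4j-extras setup in the accepted answer above adds) would be:
# plain log4j RollingFileAppender: rotates on size, keeps 9 backups, no compression
log4j.appender.DRFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.MaxFileSize=350KB
log4j.appender.DRFAAUDIT.MaxBackupIndex=9
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n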
Labels:
- Apache Hadoop
01-04-2017
08:19 AM
I was finally able to resolve the issue.
1. I changed the script extension from ambari_ldap_sync_all.sh to ambari_ldap_sync_all.exp.
2. I used the absolute path of ambari-server, /usr/sbin/ambari-server, and added an exit statement at the end of the script.
#!/usr/bin/expect
spawn /usr/sbin/ambari-server sync-ldap --existing
expect "Enter Ambari Admin login:"
send "admin\r"
expect "Enter Ambari Admin password:"
send "admin\r"
expect eof
exit
3. Finally, I made the crontab entry:
0 23 * * * /usr/bin/expect /opt/ambari_ldap_sync_all.exp
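Two small follow-up commands worth running (my addition, using the path from the crontab entry above), since the script stores the Ambari admin password in plain text:
# restrict access to the script because it contains the admin password
chmod 700 /opt/ambari_ldap_sync_all.exp
# confirm the scheduled entry is registered for this user
crontab -l | grep ambari_ldap_sync_all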
12-27-2016
07:41 AM
@Jay SenSharma Yes, the expect package is already installed. My issue is that the script is not executed at 3 PM even though it is configured in crontab. As I said, when I run the command manually as ./ambari_ldap_sync_all.sh it works. Is there any alternative way to get the script executed automatically from crontab?
12-27-2016
07:22 AM
Hi Team, I am using an Ambari LDAP sync script, but I get the following error when I run the command below. One thing I noticed is that if I run the script manually as ./ambari_ldap_sync_all.sh, it executes fine. My Ambari LDAP sync script is shown below. The script is not getting executed from crontab or with the 'sh' command.
[root@host1(172.23.34.4)] # sh ambari_ldap_sync_all.sh
ambari_ldap_sync_all.sh: line 3: spawn: command not found
couldn't read file "Enter Ambari Admin login:": no such file or directory
ambari_ldap_sync_all.sh: line 7: send: command not found
couldn't read file "Enter Ambari Admin password:": no such file or directory
ambari_ldap_sync_all.sh: line 11: send: command not found
couldn't read file "eof": no such file or directory
[root@host1(172.23.34.4)] # cat ambari_ldap_sync_all.sh
#!/usr/bin/expect
spawn ambari-server sync-ldap --existing
expect "Enter Ambari Admin login:"
send "admin\r"
expect "Enter Ambari Admin password:"
send "admin\r"
expect eof
[root@host1(172.23.34.4)] # crontab -e
00 15 * * * /ambari_ldap_sync_all.sh
Can someone help me with how the expect script should be written, if one is required?
Labels:
- Apache Ambari
12-23-2016
11:41 AM
@Neeraj Sabharwal Hi Neeraj, I have used your Ambari LDAP sync script, but I get the following error when I run the command below. One thing I noticed is that if I run the script manually as ./ambari_ldap_sync_all.sh, it executes fine. My Ambari LDAP sync script is shown below. The script is not getting executed from crontab with the 'sh' command. Please help.
[root@host1(172.23.34.4)] # sh ambari_ldap_sync_all.sh
ambari_ldap_sync_all.sh: line 3: spawn: command not found
couldn't read file "Enter Ambari Admin login:": no such file or directory
ambari_ldap_sync_all.sh: line 7: send: command not found
couldn't read file "Enter Ambari Admin password:": no such file or directory
ambari_ldap_sync_all.sh: line 11: send: command not found
couldn't read file "eof": no such file or directory
[root@host1(172.23.34.4)] # cat ambari_ldap_sync_all.sh
#!/usr/bin/expect
spawn ambari-server sync-ldap --existing
expect "Enter Ambari Admin login:"
send "admin\r"
expect "Enter Ambari Admin password:"
send "admin\r"
expect eof
[root@host1(172.23.34.4)] # crontab -e
00 15 * * * /ambari_ldap_sync_all.sh