Member since: 03-21-2016
Posts: 38
Kudos Received: 5
Solutions: 4

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 17374 | 01-29-2017 09:04 AM
 | 1987 | 01-04-2017 08:19 AM
 | 1081 | 10-12-2016 05:36 AM
 | 2001 | 10-12-2016 05:26 AM
01-30-2017
05:05 PM
Rahul,

Are the logs making it to HDFS? It sounds like you might be combining the "spooling" directory with the "local audit archive directory". What properties did you use during the Ranger HDFS plugin installation? Are you doing a manual install or using Ambari? If manual, this reference might help: http://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.5.3/bk_command-line-installation/content/installing_ranger_plugins.html#installing_ranger_hdfs_plugin

I wasn't able to locate your "...filespool.archive.dir" property on my cluster. I'm not sure the property is required, and it may be responsible for keeping files "locally" that you've already posted to HDFS. If the files are making it to HDFS, I would try removing this setting.

What do you have set for the property below, and are the contents being flushed from that location on a regular basis?

xasecure.audit.destination.hdfs.batch.filespool.dir

Compression doesn't happen during this process. Once the files are on HDFS, you're free to do with them as you see fit. If compression is a part of that, write an MR job to do so. (WARNING: this could affect other systems that might want to use these files as-is.)

Cheers, David
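For reference, the spool-related settings in ranger-hdfs-audit.xml typically look something like the sketch below (the NameNode address and local paths are placeholders, not values from your environment):

xasecure.audit.destination.hdfs=true
xasecure.audit.destination.hdfs.dir=hdfs://NAMENODE_HOST:8020/ranger/audit
xasecure.audit.destination.hdfs.batch.filespool.dir=/var/log/hadoop/hdfs/audit/hdfs/spool

The spool directory should drain as events are flushed to the HDFS destination; files that keep accumulating there usually mean the HDFS destination isn't being written successfully.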
04-16-2019
03:54 PM
I made the following changes, but the hdfs-audit logs are still not rotating:

hdfs.audit.logger=INFO,console
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
#log4j.appender.DRFAAUDIT=org.apache.log4j.DailyRollingFileAppender
log4j.appender.DRFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n
log4j.appender.DRFAAUDIT.DatePattern=.yyyy-MM-dd
log4j.appender.DRFAAUDIT.MaxFileSize=100MB
log4j.appender.DRFAAUDIT.MaxBackupIndex=5
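In case it helps anyone comparing notes: with log4j 1.x, DatePattern is only honored by DailyRollingFileAppender, while MaxFileSize/MaxBackupIndex belong to RollingFileAppender, and audit events only reach DRFAAUDIT if hdfs.audit.logger resolves to INFO,DRFAAUDIT (HDP's hadoop-env normally passes -Dhdfs.audit.logger=INFO,DRFAAUDIT, which overrides the value in log4j.properties). A minimal size-based rotation sketch, not verified against this cluster, would be:

hdfs.audit.logger=INFO,DRFAAUDIT
log4j.logger.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=${hdfs.audit.logger}
log4j.additivity.org.apache.hadoop.hdfs.server.namenode.FSNamesystem.audit=false
log4j.appender.DRFAAUDIT=org.apache.log4j.RollingFileAppender
log4j.appender.DRFAAUDIT.File=${hadoop.log.dir}/hdfs-audit.log
log4j.appender.DRFAAUDIT.MaxFileSize=100MB
log4j.appender.DRFAAUDIT.MaxBackupIndex=5
log4j.appender.DRFAAUDIT.layout=org.apache.log4j.PatternLayout
log4j.appender.DRFAAUDIT.layout.ConversionPattern=%d{ISO8601} %p %c{2}: %m%n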
01-04-2017
08:19 AM
Finally I was able to resolve the issue.

1. I changed the script extension from ambari_ldap_sync_all.sh to ambari_ldap_sync_all.exp.
2. I also changed the script to call ambari-server by its absolute path, /usr/sbin/ambari-server, and added an exit statement at the end of the script:

#!/usr/bin/expect
spawn /usr/sbin/ambari-server sync-ldap --existing
expect "Enter Ambari Admin login:"
send "admin\r"
expect "Enter Ambari Admin password:"
send "admin\r"
expect eof
exit

3. Finally, I made the crontab entry:

0 23 * * * /usr/bin/expect /opt/ambari_ldap_sync_all.exp
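One follow-up note: since the .exp script contains the admin password in plain text, it is probably worth restricting who can read it, for example along these lines:

chown root:root /opt/ambari_ldap_sync_all.exp
chmod 700 /opt/ambari_ldap_sync_all.exp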
11-13-2016
01:07 PM
@Rahul Buragohain Some versions of the Oracle JDK might also be affected by this bug. Here is some additional background on the issue: when the jps command is executed as root, it tries to read process information from the /tmp/hsperfdata_<username>/<pid> files. Before reading a process file or directory, it checks whether that file or directory is secure. It opens the other user's directory and compares that directory's UID (which belongs to the other user) against the effective UID of the current process (jps running as root); this check fails and the command returns a failure. Also, what output (java path) do you see when you run the following command?

ps -ef | grep java
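If you want to see what jps is tripping over, listing the per-user performance-data directories (present when the JVMs run with the default -XX:+UsePerfData; the hdfs user below is just an example) should show the ownership mismatch:

ls -ld /tmp/hsperfdata_*
ls -l /tmp/hsperfdata_hdfs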
01-20-2017
08:42 PM
@jss @Rahul Buragohain I have the same issue with my HDP 2.4.2... Where exactly do I change these parameters? I see them in the hadoop-env template:

SHARED_HADOOP_NAMENODE_OPTS="-server -XX:ParallelGCThreads=8 -XX:+UseConcMarkSweepGC -XX:ErrorFile={{hdfs_log_dir_prefix}}/$USER/hs_err_pid%p.log -XX:NewSize={{namenode_opt_newsize}} -XX:MaxNewSize={{namenode_opt_maxnewsize}} -Xloggc:{{hdfs_log_dir_prefix}}/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -XX:CMSInitiatingOccupancyFraction=70 -XX:+UseCMSInitiatingOccupancyOnly -Xms{{namenode_heapsize}} -Xmx{{namenode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT"
export HADOOP_NAMENODE_OPTS="${SHARED_HADOOP_NAMENODE_OPTS} -XX:OnOutOfMemoryError=\"/usr/hdp/current/hadoop-hdfs-namenode/bin/kill-name-node\" -Dorg.mortbay.jetty.Request.maxFormContentSize=-1 ${HADOOP_NAMENODE_OPTS}"
export HADOOP_DATANODE_OPTS="-server -XX:ParallelGCThreads=4 -XX:+UseConcMarkSweepGC -XX:ErrorFile=/var/log/hadoop/$USER/hs_err_pid%p.log -XX:NewSize=200m -XX:MaxNewSize=200m -Xloggc:/var/log/hadoop/$USER/gc.log-`date +'%Y%m%d%H%M'` -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:+PrintGCDateStamps -Xms{{dtnode_heapsize}} -Xmx{{dtnode_heapsize}} -Dhadoop.security.logger=INFO,DRFAS -Dhdfs.audit.logger=INFO,DRFAAUDIT ${HADOOP_DATANODE_OPTS}"

If this is the right file, should I just add the mentioned parameters to HADOOP_DATANODE_OPTS? And do I need to restart the HDFS service? Thanks.
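From what I understand, the usual pattern (a sketch only; EXTRA_JVM_FLAG stands in for whatever parameter is being suggested, since I haven't confirmed it) would be to append the flag inside the existing export in the Ambari hadoop-env template and then restart HDFS so the new JVM options take effect:

export HADOOP_DATANODE_OPTS="... existing options ... EXTRA_JVM_FLAG ${HADOOP_DATANODE_OPTS}"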
10-12-2016
02:01 PM
Thank you @Rahul Buragohain for letting us know. Please mark the best answer so that others can see how this problem was fixed. Thanks.
10-12-2016
05:36 AM
1 Kudo
@Constantin Stanca Hi Constantin, the issue was that a hadoop folder had previously been created under /usr/hdp. There should be only two folders under /usr/hdp, named 2.4.2.0-258 and current, and no additional folders. After removing the hadoop folder from /usr/hdp, the issue was resolved. Thanks, Rahul
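For anyone hitting the same symptom later, a quick check on a node (the version directory name below is from this HDP 2.4.2 cluster; yours will match your own HDP version) is:

ls /usr/hdp
# expected: 2.4.2.0-258  current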
08-25-2016
02:50 PM
@Rahul Buragohain This has been incorporated into the next official release. Below are the Apache JIRA details: https://issues.apache.org/jira/browse/RANGER-803
04-21-2017
10:28 AM
The unexpected benefit of this is that nobody will ever forget the LDAP password again: not only will it be included in your favourite shell's history file, but anyone who can log in on that node will also be able to see those options by keeping an eye on ps. Isn't that neat? Don't do this, kids. Never write passwords on the command line.
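For the curious, the exposure is easy to demonstrate: any local user can read the full command line, options and all, of a running process, for example:

ps -eo user,args | grep -i ldap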