Member since: 06-06-2016
Posts: 185
Kudos Received: 12
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2700 | 07-20-2016 07:47 AM
 | 2257 | 07-12-2016 12:59 PM
08-16-2016 02:04 AM
@mqureshi Nice, I got the answer. Thank you so much!
08-05-2016 07:18 PM
@Jitendra Yadav Thank you. I will let you know the result after restarting the Ambari server.
08-05-2016 07:11 PM
I'm not aware of an existing script in HDP that does this for you. However, I did run across this: https://github.com/nmilford/clean-hadoop-tmp Note that that script is written in Ruby; you could follow its logic and rewrite it in Python, Perl, or Bash. A rough Bash sketch of that idea follows.
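As a rough illustration only (this is not the clean-hadoop-tmp script itself; the /tmp path, the AGE_DAYS name, and the 7-day threshold are assumptions to adjust), a minimal Bash sketch of the same directory-cleanup idea might look like this:
#!/bin/bash
# Sketch: remove HDFS /tmp directories not modified for more than AGE_DAYS days.
# AGE_DAYS and the /tmp path are assumptions; adjust before use.
AGE_DAYS=7
now=$(date +%s)

# Directory entries in the listing start with "d"; column 6 is the
# modification date and the last column is the path.
hadoop fs -ls /tmp/ | grep "^d" | while read -r line; do
    mod_date=$(echo "$line" | awk '{print $6}')
    dir_path=$(echo "$line" | awk '{print $NF}')
    age=$(( (now - $(date -d "$mod_date" +%s)) / 86400 ))
    if [ "$age" -gt "$AGE_DAYS" ]; then
        hdfs dfs -rm -r -f "$dir_path"
    fi
done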
08-02-2016 12:58 PM
@mqureshi Thank you so much, I really appreciate your response. The user access in DEV is cleared, but the issue is still there in PROD. It has the error log below:
16/08/02 01:09:12 INFO tez.TezSessionState: User of session id baacb86a-e54d-4fae-8c16-fff9834d4d8a is y919122
16/08/02 01:11:03 INFO tez.DagUtils: Jar dir is null/directory doesn't exist. Choosing HIVE_INSTALL_DIR - hdfs:/user/y919122/.hiveJars
16/08/02 01:12:52 ERROR thrift.ProcessFunction: Internal error processing query
java.lang.OutOfMemoryError: Java heap space
It says Java heap space, but I can run the same query in the CLI, so what should I do in Hue (Hive Beeswax)?
07-25-2016 11:29 AM
Thank you, Rahul. How can I disable this from Ranger? Can you explain in more detail?
07-28-2016 01:19 PM
Yes, you have to run the fsck command: https://community.hortonworks.com/articles/4427/fix-under-replicated-blocks-in-hdfs-manually.html See the sketch below.
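In essence (a hedged sketch of the approach described in the linked article; the target replication factor of 3 and the temp file path are assumptions), you list the files fsck reports as under replicated, then reset their replication factor so the NameNode re-replicates them:
# List files that fsck reports as under replicated.
hdfs fsck / | grep 'Under replicated' | awk -F':' '{print $1}' > /tmp/under_replicated_files

# Reset the replication factor (3 here is an assumption) for each file.
while read -r hdfsfile; do
    echo "Fixing $hdfsfile"
    hdfs dfs -setrep 3 "$hdfsfile"
done < /tmp/under_replicated_files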
07-18-2016 10:04 AM
@rguruvannagar Thank you, but my concern is: should I add this property during off-hours? Will restarting the services affect the currently running jobs?
07-12-2016 03:04 PM
1 Kudo
Looks like you have selected your own answer 🙂
07-08-2016 05:19 PM
@sankar rao You shouldn't wipe the entire /tmp directory; that would indeed affect your currently running jobs. There's no built-in way to do this, but you can set up a cron job that deletes files and directories older than x days. You'll find some examples around; here is a quick (dirty but efficient) shell script for cleaning up files only:
#!/bin/bash
# Delete HDFS files under /tmp that are older than the given number of days.
usage="Usage: dir_diff.sh [days]"

if [ -z "$1" ]; then
    echo "$usage"
    exit 1
fi

now=$(date +%s)

# File entries in the recursive listing start with "-"; column 6 is the
# modification date and the last column is the file path.
hadoop fs -ls -R /tmp/ | grep "^-" | while read -r f; do
    file_date=$(echo "$f" | awk '{print $6}')
    difference=$(( (now - $(date -d "$file_date" +%s)) / (24 * 60 * 60) ))
    if [ "$difference" -gt "$1" ]; then
        hdfs dfs -rm -f "$(echo "$f" | awk '{print $NF}')"
    fi
done
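To run it on a schedule, a crontab entry along these lines would work; the script path, the 7-day retention, and the log file location are assumptions:
# Hypothetical crontab entry: run the cleanup daily at 2 AM,
# deleting files older than 7 days.
0 2 * * * /opt/scripts/dir_diff.sh 7 >> /var/log/hdfs_tmp_cleanup.log 2>&1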
01-16-2018 02:13 PM
Additional details to complement Ben Leonhardi's response, including the formula for quorum calculation: http://bytecontinnum.com/zookeeper-always-configured-odd-number-nodes/
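In short (this is standard ZooKeeper quorum behavior, not something specific to the linked post): an ensemble of N nodes needs a quorum of
quorum = floor(N / 2) + 1
nodes to stay available. For example, N = 5 gives a quorum of 3 and tolerates 2 failed nodes, while N = 6 gives a quorum of 4 and still tolerates only 2 failures, which is why odd ensemble sizes are recommended.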