Member since
09-15-2015
294
Posts
764
Kudos Received
81
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1580 | 07-27-2017 04:12 PM
 | 4263 | 06-29-2017 10:50 PM
 | 2005 | 06-21-2017 06:29 PM
 | 2258 | 06-20-2017 06:22 PM
 | 2038 | 06-16-2017 06:46 PM
04-03-2017
10:58 PM
1 Kudo
@Simran Kaur - How are you trying to restart the NameNodes: using the command line or using Ambari? Also, as Romil asked earlier, can you confirm whether this is an HA or non-HA cluster? In an HA cluster you will have two NameNodes running as Active and Standby; they can switch states, so the Standby becomes Active and vice versa. In a non-HA environment there is only one Active NameNode and no switching of states. There is a Secondary NameNode, but that is mainly used for checkpointing purposes: http://blog.madhukaraphatak.com/secondary-namenode---what-it-really-do/
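As a quick check from the command line (a minimal sketch; the service IDs nn1/nn2 are assumptions and should match dfs.ha.namenodes.<nameservice> in your hdfs-site.xml):

```
# List the NameNode hosts configured for this cluster
hdfs getconf -namenodes

# In an HA cluster, ask each NameNode for its current role (Active or Standby)
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2
```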
04-01-2017
05:31 PM
8 Kudos
For your question "are Ambari Metrics and SmartSense necessary?": no, they are not mandatory, and you can ignore their start errors if you don't need them for now. Also, can you try restarting Hive and, as Jay mentioned, paste the errors you see when trying to restart it?
03-31-2017
11:59 PM
@Vinay Khandelwal - When you shut down the machine, the NameNode along with the ZKFC server will go down. The other NameNode will automatically fail over and become the Active NameNode; no restart is required here. The other question I have for you: how many DataNodes do you have in your 3-node cluster? Was there a DataNode running on the host you shut down as well?
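A quick way to see how many DataNodes are still alive after the shutdown (a simple check, nothing cluster-specific assumed):

```
# Lists configured capacity plus the live and dead DataNodes,
# so you can see whether a DataNode went down with the NameNode host
hdfs dfsadmin -report
```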
03-31-2017
11:47 PM
7 Kudos
@SBandaru - Below is an excellent article on HCC explaining distcp with snapshots: https://community.hortonworks.com/articles/71775/managing-hadoop-dr-with-distcp-and-snapshots.html

From the article:

Requirements:
- The source must support snapshots: hdfs dfsadmin -allowSnapshot <path>
- The target is "read-only".
- The target, after the initial baseline distcp sync, needs to support snapshots as well.

Process:
1. Identify the source and target 'parent' directories. Do not initially create the destination directory; allow the first distcp to do that. For example, if I want to sync source /data/a with /data/a_target, do NOT pre-create the 'a_target' directory.
2. Allow snapshots on the source directory: hdfs dfsadmin -allowSnapshot /data/a
3. Create a snapshot of /data/a: hdfs dfs -createSnapshot /data/a s1
4. Distcp the baseline copy from the atomic snapshot (note: /data/a_target does NOT exist prior to this command): hadoop distcp /data/a/.snapshot/s1 /data/a_target
5. Allow snapshots on the newly created target directory: hdfs dfsadmin -allowSnapshot /data/a_target
At this point /data/a_target should be considered "read-only". Do NOT make any changes to the content there.
6. Create a snapshot in /data/a_target matching the name of the snapshot used to build the baseline: hdfs dfs -createSnapshot /data/a_target s1
7. Add some content to the source directory /data/a. Make the changes, additions, deletes, etc. that need to be replicated to /data/a_target.
8. Take a new snapshot of /data/a: hdfs dfs -createSnapshot /data/a s2
9. Just for fun, check what's changed between the two snapshots: hdfs snapshotDiff /data/a s1 s2
10. Now migrate the changes to /data/a_target: hadoop distcp -diff s1 s2 -update /data/a /data/a_target
11. When that's completed, finish the cycle by creating a matching snapshot on /data/a_target: hdfs dfs -createSnapshot /data/a_target s2

That's it. You've completed the cycle. Rinse and repeat.
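To make the repeating part of the cycle concrete, here is a minimal sketch that consolidates steps 7-11 into one script. It assumes the paths and snapshot names from the article above and that the baseline sync (steps 1-6) has already been completed:

```
#!/bin/bash
# Incremental replication cycle for a snapshot-based distcp setup.
SRC=/data/a          # source directory (example path from the article)
TGT=/data/a_target   # read-only target directory
PREV=s1              # snapshot used for the previous sync
NEXT=s2              # new snapshot for this sync

# Snapshot the current state of the source
hdfs dfs -createSnapshot "$SRC" "$NEXT"

# Optional: review what changed between the two snapshots
hdfs snapshotDiff "$SRC" "$PREV" "$NEXT"

# Copy only the delta to the target
hadoop distcp -diff "$PREV" "$NEXT" -update "$SRC" "$TGT"

# Close the cycle with a matching snapshot on the target
hdfs dfs -createSnapshot "$TGT" "$NEXT"
```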
03-31-2017
03:01 AM
1 Kudo
@SBandaru - It is not able to find the snapshot of the directory: "Cannot find the snapshot of directory /tmp/sbandaru with name sbandaru". Can you please share how you created the snapshot, the location of the snapshot, and the command you issued for running distcp?
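In the meantime, a quick way to see which directories are snapshottable and which snapshots actually exist (a sketch; /tmp/sbandaru is taken from the error message above):

```
# Show every snapshottable directory visible to the current user
hdfs lsSnapshottableDir

# List the snapshots that actually exist under the directory from the error
hdfs dfs -ls /tmp/sbandaru/.snapshot
```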
03-30-2017
09:33 PM
3 Kudos
@SBandaru - Let's say s1 was the earlier snapshot. You will need to create the latest snapshot (say s2) on the source cluster like:
/usr/hdp/current/hadoop-hdfs-client/bin/hdfs dfs -createSnapshot /tmp/source s2
And then run distcp like below:
/usr/hdp/current/hadoop-client/bin/hadoop distcp -update -diff s1 s2 /tmp/source /tmp/target
Hope this helps.
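One additional check worth doing before the -diff run (a sketch using the same /tmp/source and /tmp/target paths as above): the -diff option needs snapshot s1 to still exist on both sides, s2 on the source, and the target must not have been modified since s1 was taken.

```
# List the snapshots present on each side (run the second command against the target cluster)
hdfs dfs -ls /tmp/source/.snapshot
hdfs dfs -ls /tmp/target/.snapshot
```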
03-30-2017
09:26 PM
11 Kudos
The post below has an example script that finds HDFS directories older than a given number of days (swap the final -ls for a delete if you actually want to remove them): https://community.hortonworks.com/questions/19204/do-we-have-any-script-which-we-can-use-to-clean-tm.html
#!/bin/bash
# Lists directories under /zone_encr2/ that are older than the given number of days.
usage="Usage: dir_diff.sh [days]"

if [ ! "$1" ]; then
  echo "$usage"
  exit 1
fi

now=$(date +%s)

# Column 6 of 'hadoop fs -ls' is the modification date; column 8 is the path.
hadoop fs -ls /zone_encr2/ | grep "^d" | while read f; do
  dir_date=$(echo "$f" | awk '{print $6}')
  difference=$(( ( now - $(date -d "$dir_date" +%s) ) / (24 * 60 * 60) ))
  if [ "$difference" -gt "$1" ]; then
    hadoop fs -ls $(echo "$f" | awk '{print $8}')
  fi
done
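A usage sketch, assuming the script above is saved as dir_diff.sh (the /zone_encr2/ parent path is hard-coded in the script; replacing the final -ls with -rm -r would actually delete the old directories):

```
chmod +x dir_diff.sh
./dir_diff.sh 30    # show directories under /zone_encr2/ older than 30 days
```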
03-30-2017
08:12 PM
9 Kudos
@khadeer mhmd - You should use the following properties of the RollingFileAppender to control the size and the number of retained old log files:
- maxFileSize: the size threshold above which the file is rolled. Default value is 10 MB.
- maxBackupIndex: the number of backup files to keep. Default value is 1.
More details in: https://community.hortonworks.com/questions/89171/hdfs-audit-log-file-size-issues.html#comment-89639
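For reference, a minimal sketch of how these two properties typically appear in a log4j.properties RollingFileAppender definition (the appender name RFA, the file path, and the values shown are assumptions; match them to the appender you are actually tuning):

```
# Roll the file once it reaches 256 MB and keep at most 20 rolled-over files
log4j.appender.RFA=org.apache.log4j.RollingFileAppender
log4j.appender.RFA.File=${hadoop.log.dir}/hadoop-hdfs-namenode.log
log4j.appender.RFA.MaxFileSize=256MB
log4j.appender.RFA.MaxBackupIndex=20
```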
03-30-2017
07:55 PM
1 Kudo
@Vinay Khandelwal - NameNode HA state is maintained by the ZKFC server running on the NameNode hosts. Can you please answer the questions below: When you say "shut down one of the two machines", do you mean you only shut down the NameNode or the entire machine? Are you shutting down the ZKFC servers as well? Also, are you trying to restart using Ambari or the command line, and which HDP version are you using? And if possible, can you please post the NameNode logs. Thanks
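While gathering that, a couple of quick checks on each NameNode host can help (a sketch; the log directory is an assumption and depends on your HDP layout):

```
# Confirm whether the NameNode and ZKFC (DFSZKFailoverController) processes are running
ps -ef | grep -E 'NameNode|DFSZKFailoverController' | grep -v grep

# Tail the NameNode log for startup or failover errors (adjust the log directory to your cluster)
tail -n 200 /var/log/hadoop/hdfs/hadoop-hdfs-namenode-$(hostname).log
```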
03-30-2017
06:31 AM
1 Kudo
Have you updated the JAVA_HOME path in the hadoop-env.sh file? Also, which component were you trying to restart, and what was the order of restart if there was more than one component?
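A quick way to verify (a sketch; /etc/hadoop/conf is the usual config directory on an HDP/Ambari cluster, adjust if yours differs):

```
# See which JDK hadoop-env.sh points at
grep 'JAVA_HOME' /etc/hadoop/conf/hadoop-env.sh

# Sanity-check that a JDK is reachable on this host
java -version
```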