Member since: 08-08-2015
Posts: 8
Kudos Received: 0
Solutions: 0
04-13-2018 11:36 PM
Have you cleaned up the files under dfs.datanode.data.dir that were not written by HDFS for blocks? If not, the non-DFS used won't change. A similar question has been answered here: https://community.hortonworks.com/questions/42122/hdfs-non-dfs-used.html.
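As a quick sanity check, you can compare what the NameNode reports with what is actually on disk. A minimal sketch (the /hadoop/hdfs/data path is just an illustration; substitute your own dfs.datanode.data.dir value):

    # Show DFS Used vs. Non DFS Used as reported by the NameNode
    hdfs dfsadmin -report

    # Compare with the actual disk usage of the DataNode data directory
    # (/hadoop/hdfs/data is a hypothetical path; use your configured dfs.datanode.data.dir)
    du -sh /hadoop/hdfs/data

If du shows substantially more than the DFS Used figure for that volume, the difference is the non-HDFS data you would need to clean up.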
03-26-2018 07:45 PM
Please accept the answer if it fixes your problem.
12-10-2015 10:20 PM
1 Kudo
Golden rule for MRv2: a Hadoop cluster should always have an odd number of data nodes (3, 5, 7, 9, etc.). Because of the distributed workload architecture, any failed job is automatically restarted on the surviving data nodes. Remember to set the mapreduce.jobtracker.restart.recover parameter to true in mapred-site.xml, and don't forget to set the number of attempts with the mapreduce.map.maxattempts parameter (its default lives in mapred-default.xml; override it in mapred-site.xml).
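A minimal sketch of how those two properties might look in mapred-site.xml (the value 4 for mapreduce.map.maxattempts is just an illustration; tune it for your cluster):

    <!-- mapred-site.xml: recover running jobs after a JobTracker restart -->
    <property>
      <name>mapreduce.jobtracker.restart.recover</name>
      <value>true</value>
    </property>

    <!-- Maximum attempts per map task before the task (and job) is failed -->
    <property>
      <name>mapreduce.map.maxattempts</name>
      <value>4</value>
    </property>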