05-11-2017 11:58 AM
What I did:
1. Increased the NameNode (NN) heap memory
2. Increased the overall disk capacity of the cluster
3. Increased dfs.blocksize from 64 MB to 128 MB
4. Increased the block count alert threshold.
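To see why step 3 helps, here is a rough sketch of the effect of doubling the block size on a hypothetical 1 TB of large files (assumption: files are big enough to fill whole blocks; small files still cost one block each, so this does not help a small-files problem):

```shell
# Hypothetical arithmetic: block count for 1 TB of data at each block size.
DATA_BYTES=$((1024 * 1024 * 1024 * 1024))        # 1 TB
BLOCKS_64MB=$((DATA_BYTES / (64 * 1024 * 1024)))
BLOCKS_128MB=$((DATA_BYTES / (128 * 1024 * 1024)))
echo "64MB  blocksize: $BLOCKS_64MB blocks"       # 16384
echo "128MB blocksize: $BLOCKS_128MB blocks"      # 8192
```

Halving the block count roughly halves the NameNode heap pressure for that data, which is why steps 1 and 3 work together.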
12-06-2018 01:44 PM
I recommend checking which application team is causing it, using: # hdfs dfs -count -v -h /project/*
If FILE_COUNT is more than 10M, that is a problem for a mid-sized cluster.
Please check the below link to reduce the block count.
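To rank directories by file count, you can pipe the `-count` output through `sort` (note: drop `-h` here, since human-readable values like "10 M" break numeric sorting; the `/project/*` paths and sample numbers below are hypothetical):

```shell
# Rank project dirs by FILE_COUNT, the 2nd column of `hdfs dfs -count`:
#   hdfs dfs -count /project/* | sort -k2,2 -n -r | head -10
# Simulated below with sample -count output (DIR_COUNT FILE_COUNT SIZE PATH):
printf '%s\n' \
  '12   500000     1099511627776  /project/etl' \
  '3    12000000   2199023255552  /project/clickstream' \
  '40   8000       10737418240    /project/reports' |
  sort -k2,2 -n -r | head -3
```

The directory with the largest FILE_COUNT (here the simulated /project/clickstream) sorts to the top, pointing at the team to talk to first.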
12-06-2018 01:50 PM
Sizing the DataNode heap is similar to the NameNode heap: roughly 1 GB of heap per 1 million blocks is recommended. Because a block can be as small as 1 byte or as large as 128 MB, the heap requirement is driven by the number of blocks rather than the amount of data.
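A quick sketch of that rule of thumb (the 15M block count is a made-up example; treat the 1 GB-per-1M-blocks figure as an estimate, not a guarantee):

```shell
# Rule of thumb from above: ~1 GB of heap per 1M blocks.
BLOCKS=15000000                               # hypothetical block count from NN/DN metrics
HEAP_GB=$(( (BLOCKS + 999999) / 1000000 ))    # round up to whole GB
echo "Suggested heap: ~${HEAP_GB} GB"         # ~15 GB for 15M blocks
```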