What I did:
1. Increased the NameNode (NN) heap memory
2. Increased the overall disk capacity of the cluster
3. Increased dfs.blocksize from 64 MB to 128 MB
4. Increased the block count alert threshold
(A config sketch for items 1 and 3 follows this list.)
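For reference, here is a minimal sketch of where items 1 and 3 live on a plain Hadoop install; the values are illustrative, and on a Cloudera Manager cluster you would set the equivalents through the CM UI instead.

In hdfs-site.xml, the block size for newly written files (raising it does not rewrite existing files; only files created after the change pick it up):

<property>
  <name>dfs.blocksize</name>
  <value>134217728</value>   <!-- 128 MB -->
</property>

In hadoop-env.sh, the NameNode heap (8 GB here is only an example value):

export HADOOP_NAMENODE_OPTS="-Xms8g -Xmx8g ${HADOOP_NAMENODE_OPTS}"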
If you have Cloudera Manager, you can easily find which job is putting the most stress on storage. Please take a peek at the link below.
I recommend checking which application team is causing it by running:
# hdfs dfs -count -v -h /project/*
If the FILE_COUNT is more than 10M, that is a problem for a mid-size cluster.
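For illustration, the -v flag prints a header row, so the output looks roughly like this (the numbers are made up):

   DIR_COUNT   FILE_COUNT       CONTENT_SIZE PATHNAME
       3.2 K       12.1 M             54.3 T /project/teamA
         512      812.4 K              9.7 T /project/teamB

FILE_COUNT is the column to watch; in this made-up output, /project/teamA would be the team to talk to.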
Please check the link below for ways to reduce the block count; one common approach, archiving small files, is sketched right after this.
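Assuming the usual culprit is lots of small files, one widely used fix is to pack them into a Hadoop Archive (HAR); the paths here are examples only:

# hadoop archive -archiveName teamA.har -p /project/teamA/logs /project/teamA/archived

This launches a MapReduce job that packs the small files into a few large part files. The originals still have to be deleted afterwards to reclaim their blocks, and the archived data stays readable through the har:// scheme:

# hdfs dfs -ls har:///project/teamA/archived/teamA.har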
Sizing the DataNode heap is similar to sizing the NameNode heap: the usual recommendation is 1 GB of heap per 1 million blocks. Since a block can be as small as 1 byte or as large as 128 MB, the heap requirement is the same either way; memory usage is driven by the number of blocks, not their size.
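To put that rule of thumb into numbers (the block count below is illustrative): a cluster carrying 10 million blocks would need roughly 10,000,000 blocks x 1 GB per 1,000,000 blocks = 10 GB of heap. You can read the current block count out of the fsck summary:

# hdfs fsck / | grep -i 'total blocks'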