I got the details below by running hadoop fsck /:
Total size: 41514639144544 B (Total open files size: 581 B)
Total dirs: 40524
Total files: 124348
Total symlinks: 0 (Files currently being written: 7)
Total blocks (validated): 340802 (avg. block size 121814540 B) (Total open file blocks (not validated): 7)
Minimally replicated blocks: 340802 (100.0 %)
I am using a 256 MB block size, so 340802 blocks * 256 MB = 83.2 TB, and * 3 (replicas) = 249.6 TB. But Cloudera Manager shows only 110 TB of disk used. How is this possible?
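For reference, here is a minimal Python sketch of my arithmetic, comparing the "every block is full" estimate against the total size that fsck reports (the replication factor of 3 is my assumption based on the default; the other numbers are copied from the output above):

block_size = 256 * 1024**2      # configured block size: 256 MiB
total_blocks = 340802           # "Total blocks (validated)" from fsck
total_size = 41514639144544     # "Total size" in bytes from fsck
replication = 3                 # assumed replication factor
tb = 1024**4                    # 1 TB as I am counting it (binary)

# If every block occupied a full 256 MB on disk:
print((total_blocks * block_size * replication) / tb)   # ~249.6 TB

# Using the actual bytes reported by fsck:
print((total_size * replication) / tb)                  # ~113.3 TB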
Does this mean that even though the block size is 256 MB, a small file doesn't use the whole block for itself?