Support Questions
Find answers, ask questions, and share your expertise

hadoop -count returning wrong result


New Contributor

Hi All,


As part of our Cloudera BDR backup & restore validation, we use the command below to verify that the backed-up and restored files are the same.


hdfs dfs -count /data


Before starting the replication schedule, the /data directory in the source cluster contained 6,982 directories and 10,887 files. Here is the result of the hdfs count command:

[user@example ~]$ hdfs dfs -count /data
6982 10887 11897305288 /data


[user@example~]$ hdfs dfs -ls -R /data | wc -l


We then ran the replication manually (via the distcp command line). Due to a space crunch on the remote cluster, the distcp job failed. Afterwards, we ran the command below to check the HDFS count:


[user@example tmp]$ hdfs dfs -count /data
6982 21756 11940958360 /data


[user@example tmp]$ hdfs dfs -ls -R /data | wc -l


There was a deviation from the pre-operation file count: the file count almost doubled. However, the

ls -R result gives the actual count (6,982 directories + 10,887 files).


Ideally, the hdfs dfs -count command should report 6,982 directories and 10,887 files.


What could be the reason for this inconsistent result? Suspecting some caching, we restarted the cluster, but despite that the counts mentioned above did not change.


Thanks in advance,



Re: hadoop -count returning wrong result

If you have the enterprise version, you could download the disk usage report and check which folder has the most files.

Re: hadoop -count returning wrong result

Master Collaborator

Hi, I think it is related to snapshots or hidden directories. Maybe distcp prepared a snapshot and, when it failed, left these temporary objects behind in HDFS.
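One way to check whether leftover snapshots are the cause (a sketch, assuming you have permission to run these commands; /data is the path from the question):

```shell
# List directories where snapshots are enabled
hdfs lsSnapshottableDir

# If /data is snapshottable, look for snapshots left behind by the failed distcp
hdfs dfs -ls /data/.snapshot

# On recent Hadoop versions, -x excludes snapshot contents from the count
hdfs dfs -count -v -x /data
```

If the -x count matches the ls -R totals, the extra files are inside snapshots rather than in the live tree.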

Re: hadoop -count returning wrong result


I encountered the same issue: hdfs dfs -count returns an incorrect file count. The directory has 76 files, but -count reports 77. The CONTENT_SIZE column does match the total size of the individual files in the directory.


I think it is a bug where the -count operation reports an incorrect file count.

Any comments from experts here?


$ hdfs dfs -count -v /PROJECTS/flume_data/dirname1/2018/11/27/12
           1           77              78855 /PROJECTS/flume_data/dirname1/2018/11/27/12

$ hdfs dfs -ls -R /PROJECTS/flume_data/dirname1/2018/11/27/12 | wc -l

$ hdfs dfs -du -s -x /PROJECTS/flume_data/dirname1/2018/11/27/12
78855  236565  /PROJECTS/flume_data/dirname1/2018/11/27/12

## Manually sum the individual file sizes in the directory
$ hdfs dfs -du -x /PROJECTS/flume_data/dirname1/2018/11/27/12 | awk '{sum += $1} END {print sum}'

$ hdfs dfs -du -x /PROJECTS/flume_data/dirname1/2018/11/27/12 | wc -l



Re: hadoop -count returning wrong result


Do you have HA configured for the NameNode, by any chance? Also, double-check the results in the NameNode UI.

Re: hadoop -count returning wrong result


The NameNode UI provides total files and directories. Is there a way to see the number of files per directory using the NameNode UI?



Re: hadoop -count returning wrong result

New Contributor

I had exactly the same issue, and it turned out that the count also includes snapshots. To check if that's the case, add the -x option to the count, e.g.:


hdfs dfs -count -v -h -x   /user/hive/warehouse/my_schema.db/*
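To see how snapshots skew the numbers, here is a minimal local-filesystem sketch (plain Python standing in for HDFS; the directory layout and ".snapshot" naming are illustrative). Without pruning the .snapshot subtree, the walk counts every snapshot copy, which mirrors the difference between -count and -count -x:

```python
import os
import tempfile

def count_tree(root, exclude_snapshots=False):
    """Count (directories, files) under root, roughly the way
    `hdfs dfs -count` does; skipping `.snapshot` subtrees mimics -x."""
    dirs = files = 0
    for dirpath, dirnames, filenames in os.walk(root):
        if exclude_snapshots and ".snapshot" in dirnames:
            dirnames.remove(".snapshot")  # prune the snapshot subtree
        dirs += len(dirnames)
        files += len(filenames)
    return dirs + 1, files  # -count includes the root directory itself

# Build a toy tree: a data dir with 2 files, plus a snapshot copy of it.
base = tempfile.mkdtemp()
os.makedirs(os.path.join(base, "sub"))
os.makedirs(os.path.join(base, ".snapshot", "s0", "sub"))
for d in ("", os.path.join(".snapshot", "s0")):
    for name in ("a.txt", "b.txt"):
        open(os.path.join(base, d, "sub", name), "w").close()

print(count_tree(base))                          # snapshot copies inflate the count
print(count_tree(base, exclude_snapshots=True))  # matches what `ls -R` shows live
```

This reproduces the symptom from the question: the unfiltered count roughly doubles after a snapshot exists, while the snapshot-excluding count agrees with the live listing.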
