Created 08-09-2018 07:16 AM
Hi All,
As part of our Cloudera BDR backup & restore validation, we use the command below to verify that the backed-up and restored files are the same:
hdfs dfs -count /data
before starting the replication schedule. The /data directory in the source cluster contains 6982 directories and 10,887 files. Please see the result of the hdfs count command:
[user@example ~]$ hdfs dfs -count /data
6982 10887 11897305288 /data
&
[user@example ~]$ hdfs dfs -ls -R /data | wc -l
17869
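For reference, once a replication run completes we compare the same summary on both sides; the namenode hostnames and ports below are placeholders, not our real cluster names:

$ hdfs dfs -count hdfs://source-nn.example.com:8020/data
$ hdfs dfs -count hdfs://dest-nn.example.com:8020/data

# or capture just the directory/file counts from each side and diff them
$ hdfs dfs -count hdfs://source-nn.example.com:8020/data | awk '{print $1, $2}' > /tmp/src.count
$ hdfs dfs -count hdfs://dest-nn.example.com:8020/data | awk '{print $1, $2}' > /tmp/dst.count
$ diff /tmp/src.count /tmp/dst.count && echo "dir/file counts match"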
We had run the replication manually (via the distcp command line). Due to a space crunch on the remote server, the distcp job failed. We then ran the commands below to check the HDFS count:
[user@example tmp]$ hdfs dfs -count /data
6982 21756 11940958360 /data
[user@example tmp]$ hdfs dfs -ls -R /data | wc -l
17869
There was a deviation in the file count compared to before the operation; the file count almost doubled. However,
the ls -R result still gives the actual count (6982 + 10,887).
Ideally the output of the hdfs dfs -count command should return 10,887 files and 6982 directories.
What could be the reason for this inconsistent result? We restarted the cluster suspecting some cache, but despite that the counts mentioned above stayed the same.
Thanks in advance,
Kathik
Created 08-23-2018 04:47 AM
Hi, I think it is related to snapshots or hidden directories. Maybe distcp was preparing a snapshot, and since the job failed, it left these temporary objects behind in HDFS.
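If that's the suspicion, something along these lines should show whether /data (the path from the original post; adjust to the directory in your schedule) is snapshottable and whether any snapshots were left behind:

$ hdfs lsSnapshottableDir
# the .snapshot listing only works if /data is snapshottable
$ hdfs dfs -ls /data/.snapshot
$ hdfs dfs -ls -R /data/.snapshot | wc -l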
Created 11-27-2018 08:22 PM
I encountered the same issue: hdfs dfs -count returns an incorrect file count. The directory has 76 files, but -count reports 77 files. The CONTENT_SIZE reported by -count matches the total sum of the individual file sizes in the directory.
I think it is a bug in the -count operation that makes it report an incorrect file count.
Any comments from experts here?
$ hdfs dfs -count -v /PROJECTS/flume_data/dirname1/2018/11/27/12
   DIR_COUNT   FILE_COUNT       CONTENT_SIZE PATHNAME
           1           77              78855 /PROJECTS/flume_data/dirname1/2018/11/27/12

$ hdfs dfs -ls -R /PROJECTS/flume_data/dirname1/2018/11/27/12 | wc -l
76

$ hdfs dfs -du -s -x /PROJECTS/flume_data/dirname1/2018/11/27/12
78855  236565  /PROJECTS/flume_data/dirname1/2018/11/27/12

## manual sum of the individual file sizes in the directory
$ hdfs dfs -du -x /PROJECTS/flume_data/dirname1/2018/11/27/12 | awk '{print $1}' | sed 's/$/+/g' | tr -d '\n' | sed 's/$/0\n/' | bc
78855

$ hdfs dfs -du -x /PROJECTS/flume_data/dirname1/2018/11/27/12 | wc -l
76
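One more cross-check that might help narrow it down: hdfs fsck prints a "Total files" summary for a path and has an -includeSnapshots option, so comparing the two runs should show whether the 77th file only exists inside a snapshot. A rough sketch reusing the path from my example above:

$ hdfs fsck /PROJECTS/flume_data/dirname1/2018/11/27/12 | grep -i "total files"
$ hdfs fsck /PROJECTS/flume_data/dirname1/2018/11/27/12 -includeSnapshots | grep -i "total files"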
Created on 12-06-2018 12:52 AM - edited 12-06-2018 12:57 AM
Do you have HA configured for the NameNode, by any chance? Also, double-check the results in the NameNode UI.
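Besides the UI, the same ContentSummary that hdfs dfs -count reports can be pulled for a single directory over WebHDFS, which is another way to cross-check the numbers; the hostname and port below are placeholders (the default HTTP port is typically 50070 on Hadoop 2 and 9870 on Hadoop 3):

$ curl -s "http://namenode.example.com:50070/webhdfs/v1/data?op=GETCONTENTSUMMARY"
# returns JSON like {"ContentSummary":{"directoryCount":...,"fileCount":...,"length":...,...}}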
Created 01-10-2019 04:52 PM
The NameNode UI will provide the total number of files and directories. Is there a way we can see the number of files per directory using the NameNode UI?
Created 10-21-2019 08:37 AM
I had exactly the same issue, and it turned out that the count also includes snapshots. To check if that's the case, one can add the -x option to the count, e.g.:
hdfs dfs -count -v -h -x /user/hive/warehouse/my_schema.db/*
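For the failed-distcp case earlier in this thread, comparing the count with and without -x and then listing the .snapshot directory should confirm whether leftover snapshots explain the doubled file count. A rough sketch reusing the /data path from the original post (the snapshot name in the last command is a placeholder; take the real name from the -ls output, and only delete it once you're sure it's no longer needed):

$ hdfs dfs -count -v /data       # default: includes snapshot contents
$ hdfs dfs -count -v -x /data    # excludes snapshot contents
$ hdfs dfs -ls /data/.snapshot   # works only if /data is snapshottable
$ hdfs dfs -deleteSnapshot /data <snapshot-name-from-ls-output>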