I was facing a "failed to replace bad datanode" error while appending new data to a file in HDFS. The suggested workaround was to set dfs.replication to less than 3, so I set it to 1 just to test it. But I still got the same error. I looked at the Hadoop web interface and, surprisingly, the replication factor was still 3. However, when I ran hdfs dfs -setrep 1 <file_name>, the replication was set to 1 and I could append to the file. Why is this happening? Can I not set the default replication factor? I also tried formatting the namenode, but that changed nothing.
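For reference, this is the property I changed in hdfs-site.xml (the exact file location depends on the distribution, so yours may differ):

    <property>
      <name>dfs.replication</name>
      <value>1</value>
    </property>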
@Saurab Dahal The replication factor is already set to 3 for the file you are trying to append data to. Replication is a per-file attribute fixed at file creation time, so even if the config value is changed, it only takes effect for new files. Please check the replication factor of a file created after the config value change.
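As a sketch of what to check (the file paths below are placeholders): a file created after the config change should pick up the new default, while a pre-existing file keeps the factor it was created with until you change it explicitly:

    # New file created after the change: inherits the updated default
    hdfs dfs -put local.txt /tmp/new_file.txt
    hdfs dfs -stat %r /tmp/new_file.txt        # should print 1

    # Pre-existing file: keeps the factor it was created with
    hdfs dfs -stat %r /tmp/old_file.txt        # still prints 3

    # Change it explicitly; after this the append should succeed
    hdfs dfs -setrep -w 1 /tmp/old_file.txt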