Created 06-29-2017 02:01 PM
I was facing a "failed to replace bad datanode" error while appending new data to a file, and the workaround was to set dfs.replication
to less than 3, so I set it to 1 just to test it. But I still got the
same error. I looked at the Hadoop web interface and, surprisingly, the
replication factor was still 3. However, when I ran hdfs dfs -setrep 1 <file_name>, the replication was set to 1 and I could append to the file. Why is this happening? Can I not set the default replication factor?
I tried formatting the namenode; still no change.
Here's my hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
</configuration>
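For reference, here is a rough way to sanity-check which dfs.replication value the client actually picks up, and what replication an existing file has (a sketch; it assumes the hadoop binaries are on the PATH and HADOOP_CONF_DIR points at the config directory containing this hdfs-site.xml):

```shell
# Ask the client which dfs.replication it resolves from its config
hdfs getconf -confKey dfs.replication

# Check the current replication factor of an existing file (%r)
hdfs dfs -stat %r /path/to/file

# Change replication on an already-existing file
hdfs dfs -setrep 1 /path/to/file
```

Note that dfs.replication is a client-side default applied at file-creation time, so changing it does not retroactively alter files that already exist; that would be consistent with existing files still showing replication 3 until -setrep is run.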
I tried to follow the steps from this question, but my replication factor is still 3. I am running Hadoop in a single-node cluster.