11-09-2017 01:23 AM
I am writing a Java application that needs to overwrite a specific HDFS block, which may or may not be corrupt.
It looks like org.apache.hadoop.hdfs.server.datanode.BlockSender might do the trick, but I think it needs to be called from a DataNode.
Is it best to mark the original block(s) as corrupt and create a new block with the same name on a different live DataNode?
Can anybody point me in the right direction?