I want to test the practical scenario where one of the DataNodes in HDFS becomes full. To do that, I want to reduce the DataNode's capacity so that I can fill it quickly.
Right now HDFS shows a capacity of 96 GB, so I'm thinking of making it 6 to 7 GB temporarily.
I guess you can set a quota limit using the command:
hadoop dfsadmin -setSpaceQuota <max_size> <directory>
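For example, a hedged sketch of how this could be applied (the directory /tmp/quota-test is hypothetical; note that the space quota counts replicated bytes, so with the default replication factor of 3 a 6 GB quota allows only roughly 2 GB of actual file data):

```shell
# Convert the desired cap to bytes (newer Hadoop releases also accept
# suffixes like 6g, but plain bytes work everywhere).
QUOTA_BYTES=$((6 * 1024 * 1024 * 1024))
echo "$QUOTA_BYTES"

# Apply the quota to a test directory (hypothetical path), and clear it
# again when the test is done:
# hadoop dfsadmin -setSpaceQuota $QUOTA_BYTES /tmp/quota-test
# hadoop dfsadmin -clrSpaceQuota /tmp/quota-test
```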
For more information, you can refer to:
Regards, Karthik Gopal
I thought there might be some property in HDFS that would do this. Anyway, I'll try this way.
This applies a quota to a directory in HDFS. It does not enforce anything at the layer of the DataNode's interaction with the local file system, so it cannot be used to simulate the scenario of a full DataNode.
Or just fill up the file system with some dummy non-HDFS data. HDFS monitors free space on the DataNode.
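A quick, hedged sketch of that approach (the filler path is hypothetical; on a real test you would write the file onto the volume that holds `dfs.datanode.data.dir`, and scale it up until the free space is gone):

```shell
# Create a dummy non-HDFS file to consume local disk on the DataNode's volume.
# /tmp/filler.bin is a hypothetical path; increase count to fill real space.
dd if=/dev/zero of=/tmp/filler.bin bs=1M count=10 status=none

# Confirm the file size (10 MiB in this small sketch).
SIZE=$(wc -c < /tmp/filler.bin)
echo "$SIZE"

# Clean up afterwards so the DataNode sees its free space again.
rm /tmp/filler.bin
```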
Hello @Viraj Vekaria. I have 2 ideas for you:
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>0</value>
  <description>Reserved space in bytes per volume. Always leave this much space free for non dfs use.</description>
</property>
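To use this property for the test, you could raise the reserved value instead of lowering it: on the roughly 96 GB volume mentioned in the question, reserving about 90 GB leaves only about 6 GB usable for HDFS blocks. The exact value below is an assumption based on that capacity, and the change typically requires a DataNode restart to take effect:

```xml
<!-- Sketch for hdfs-site.xml: reserve ~90 GB per volume (value is in bytes)
     so only ~6 GB remains available to HDFS on a ~96 GB disk. -->
<property>
  <name>dfs.datanode.du.reserved</name>
  <value>96636764160</value>
</property>
```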