Let's assume your HDFS has a single free block, with a block size of 128 MB. If you write a 1 MB file to that block, the block is no longer available for another write. So while your used space is under 1% in byte terms, you have 0 free blocks, and therefore 0 bytes available for a new write. I hope that makes the difference clear. This is what happens with large block sizes: files smaller than the block size can waste space. Instead of using df to show bytes used or available, look at blocks used and available, and multiply by the block size to get the real picture.
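Here is a quick back-of-the-envelope sketch in Python of that block-vs-byte accounting. The numbers (100 blocks of capacity, 100 files of 1 MB each) are hypothetical, and it assumes, per the model above, that each small file claims a whole block:

```python
# Sketch of block-vs-byte accounting, assuming each file smaller
# than the block size still claims a whole 128 MB block.

BLOCK_SIZE = 128 * 1024 * 1024   # 128 MB block size
TOTAL_BLOCKS = 100               # hypothetical capacity in blocks
TOTAL_BYTES = TOTAL_BLOCKS * BLOCK_SIZE

num_small_files = 100            # each file is 1 MB
file_size = 1 * 1024 * 1024

# What df-style byte accounting suggests: well under 1% used
bytes_used = num_small_files * file_size
print(f"bytes used: {bytes_used / TOTAL_BYTES:.2%} of capacity")

# What block accounting shows: one block consumed per small file
blocks_free = TOTAL_BLOCKS - num_small_files
print(f"blocks free: {blocks_free} "
      f"(writable bytes: {blocks_free * BLOCK_SIZE})")
```

Running it prints "bytes used: 0.78%" but "blocks free: 0", which is exactly the gap between the two views described above.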
I responded to a similar question last year. Let me find it.