When a client wants to write an HDFS file, it must obtain a lease, which is essentially a lock, to ensure the single-writer semantics. If a lease is not explicitly renewed or the client holding it dies, then it will expire. When this happens, HDFS will close the file and release the lease on behalf of the client.
The lease manager enforces a soft limit (1 minute) and a hard limit (1 hour) on lease expiration. If you simply wait out the limit, the lease will be released and the append will succeed.
That said, waiting is a workaround; the real question is how this situation came about in the first place. Did an earlier process crash while holding the lease? Do you have storage quotas enabled and are you writing to a directory that has hit its quota?
Hi, I am trying to do the same append operation using a Java client and hitting the same issue whenever the client restarts. Is there a way to release the existing lease before I call fileSystem.append()? I have also verified that the dfs.support.append property is set to 'true'.
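You can ask the NameNode to recover the lease explicitly rather than waiting for the soft limit. A rough sketch of how that might look with the HDFS client API, using `DistributedFileSystem.recoverLease(Path)` (available in Hadoop 2.x and later); the class name, path, and timeout here are illustrative placeholders:

```java
import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class LeaseRecoveryExample {

    // Ask the NameNode to recover the lease on a file, polling until it is
    // released (or the timeout expires), before attempting an append.
    static boolean recoverLease(FileSystem fs, Path path, long timeoutMs)
            throws IOException, InterruptedException {
        if (!(fs instanceof DistributedFileSystem)) {
            return true; // e.g. local filesystem: no lease to recover
        }
        DistributedFileSystem dfs = (DistributedFileSystem) fs;
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            // recoverLease() returns true once the file is closed and
            // the previous lease has been released.
            if (dfs.recoverLease(path)) {
                return true;
            }
            Thread.sleep(1000L); // recovery is asynchronous, so poll
        }
        return false;
    }
}
```

After `recoverLease` returns true, the subsequent `fs.append(path)` should obtain a fresh lease. Note that lease recovery triggers block recovery on the last block, so it can take a few seconds on a file that was mid-write when the previous client died.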