
appendToFile: Failed to APPEND_FILE/hdfs/location/abc.csv for DFSClient_NONMAPREDUCE_-2077731895_1 on 127.0.0.1 because this file lease is currently owned by DFSClient_NONMAPREDUCE_171_1

Contributor

I am getting this error while appending a local file to an existing HDFS file.

2 REPLIES


When a client wants to write to an HDFS file, it must obtain a lease, which is essentially a lock that ensures single-writer semantics. If the lease is not explicitly renewed, or the client holding it dies, it expires. When that happens, HDFS closes the file and releases the lease on behalf of the client.

The lease manager enforces a soft limit (1 minute) and a hard limit (1 hour) on lease expiration. If you simply wait, the lease will eventually be released and the append will work.

That said, waiting is only a workaround; the real question is how this situation arose in the first place. Did a previous process crash mid-write? Do you have storage quotas enabled and are you writing to a directory that has hit its quota?
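
If you would rather handle the wait in code than manually, a minimal retry sketch along the lines described above could look like the following Java snippet (the target path, retry count, and sleep interval are illustrative only, not taken from the post):

import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendWithRetry {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path target = new Path("/hdfs/location/abc.csv");   // illustrative path

        FSDataOutputStream out = null;
        // Retry for a few minutes: once the previous holder's lease has
        // expired (soft limit is about one minute), the append should succeed.
        for (int attempt = 0; attempt < 5 && out == null; attempt++) {
            try {
                out = fs.append(target);
            } catch (IOException leaseStillHeld) {
                Thread.sleep(60_000L);                       // wait and retry
            }
        }
        if (out == null) {
            throw new IOException("Could not obtain the lease on " + target);
        }
        out.write("appended line\n".getBytes(StandardCharsets.UTF_8));
        out.close();
        fs.close();
    }
}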

New Contributor

Hi, I am trying to do the same append operation from a Java client and I am facing the same issue whenever the client restarts. Is there a way to remove the existing lease before I call fileSystem.append()? I have also checked that the dfs.support.append property is set to 'true'.
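
For reference, one possible way to force the stale lease to be released before appending (a sketch only, assuming the FileSystem is an HDFS DistributedFileSystem on Hadoop 2.x or later; the path is illustrative) is to ask the NameNode for lease recovery via DistributedFileSystem.recoverLease() and poll until the file is reported closed:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class RecoverLeaseThenAppend {

    // Ask the NameNode to recover the lease on the file and poll until the
    // file is closed, so that a following append() does not fail with
    // "file lease is currently owned by" another DFSClient.
    static void recoverLease(FileSystem fs, Path path) throws Exception {
        if (!(fs instanceof DistributedFileSystem)) {
            return;                                   // only meaningful on HDFS
        }
        DistributedFileSystem dfs = (DistributedFileSystem) fs;
        boolean closed = dfs.recoverLease(path);      // true if already recovered
        while (!closed) {
            Thread.sleep(1000L);                      // recovery is asynchronous
            closed = dfs.isFileClosed(path);
        }
    }

    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        Path target = new Path("/hdfs/location/abc.csv");   // illustrative path
        recoverLease(fs, target);
        FSDataOutputStream out = fs.append(target);
        out.writeBytes("appended after lease recovery\n");
        out.close();
        fs.close();
    }
}

Note that dfs.support.append only controls whether append is allowed at all; it has no bearing on which client currently holds the lease.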