appendToFile: Failed to APPEND_FILE/hdfs/location/abc.csv for DFSClient_NONMAPREDUCE_-2077731895_1 on 127.0.0.1 because this file lease is currently owned by DFSClient_NONMAPREDUCE_171_1
Labels: Apache Hadoop
Created 09-24-2016 09:06 AM
I am getting this error while appending a local file to an HDFS file.
Created 09-26-2016 08:20 AM
When a client wants to write an HDFS file, it must obtain a lease, which is essentially a lock, to ensure single-writer semantics. If the lease is not explicitly renewed or the client holding it dies, it expires. When that happens, HDFS closes the file and releases the lease on the client's behalf.
The lease manager enforces a soft limit (1 minute) and a hard limit (1 hour) on the expiration time. If you simply wait, the lease will be released and the append will succeed.
That said, waiting is only a workaround; the real question is how this situation came about. Did an earlier process crash while holding the lease? Do you have storage quotas enabled and are you writing to a directory that has hit its quota?
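If you go the waiting route, the simplest client-side handling is to retry the append until the soft limit expires. Here is a minimal sketch in Java, assuming the Hadoop client libraries are on the classpath; the path is taken from the error message above, and the retry count and sleep interval are illustrative, not prescriptive:

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class AppendWithRetry {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // Path taken from the error message above; substitute your own.
        Path file = new Path("/hdfs/location/abc.csv");

        // The NameNode's soft lease limit is 1 minute, so retrying for a
        // little longer than that should outlast a stale lease.
        IOException last = null;
        for (int attempt = 0; attempt < 8; attempt++) {
            try (FSDataOutputStream out = fs.append(file)) {
                out.writeBytes("appended line\n");
                return; // append succeeded
            } catch (IOException e) {
                // For simplicity this retries on any IOException; real code
                // should inspect the cause to confirm it is a lease conflict.
                last = e;
                Thread.sleep(10_000L);
            }
        }
        throw last; // still failing after the soft limit elapsed
    }
}
```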
Created 12-19-2016 06:45 PM
Hi, I am trying to do the same append operation using a Java client and facing the same issue whenever there is a client restart. Is there a way to remove the existing lease before I call fileSystem.append()? I have also noticed that the dfs.support.append property is set to 'true'.
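In case it helps: one way to attempt this, assuming the underlying FileSystem is actually HDFS's DistributedFileSystem, is to ask the NameNode to recover the lease explicitly with recoverLease() and poll isFileClosed() until recovery completes, before calling append(). This is only a sketch of that approach, not a fix confirmed in this thread:

```java
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public class LeaseRecovery {
    /**
     * Try to force recovery of a stale lease on the given file so that a
     * subsequent append() can acquire it. HDFS-specific: this does nothing
     * for other FileSystem implementations.
     */
    static void forceLeaseRecovery(FileSystem fs, Path file) throws Exception {
        if (!(fs instanceof DistributedFileSystem)) {
            return; // leases only exist on HDFS
        }
        DistributedFileSystem dfs = (DistributedFileSystem) fs;
        // recoverLease() returns true if the file is already closed and the
        // lease released; otherwise recovery proceeds asynchronously.
        boolean recovered = dfs.recoverLease(file);
        while (!recovered) {
            Thread.sleep(1_000L); // give the NameNode time to finish recovery
            recovered = dfs.isFileClosed(file);
        }
        // Once this returns, fs.append(file) should be able to take the lease.
    }
}
```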