How to handle: Unable to close file because the last block does not have enough number of replicas
- Labels: Apache Hadoop
Created 11-03-2017 11:49 AM
I have a Java application that is appending recorded video into an HDFS file. Occasionally, after writing a batch of video frames, when I try to close the FSDataOutputStream I get the following error:
Unable to close file because the last block does not have enough number of replicas
In this case, I sleep for 100 ms and retry, and the close succeeds. However, the next time I try to open the file I get the following error:
Failed to APPEND_FILE /PSG/20171102.idx for DFSClient_NONMAPREDUCE_1265824578_479 on 192.168.3.224 because DFSClient_NONMAPREDUCE_1265824578_479 is already the current lease holder.
What is the proper way to handle a failed close attempt? Any ideas on how to handle such situations? Thanks, David
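For reference, my current workaround looks roughly like this. The class and method names and the backoff numbers are just my own sketch, not an official API, and it assumes a standard `FSDataOutputStream` obtained from an HDFS `FileSystem`:

```java
import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;

// Sketch: retry close() a bounded number of times so a transient
// "not enough replicas" state has time to clear before we give up.
public final class SafeClose {
    public static void closeWithRetry(FSDataOutputStream out, int maxAttempts)
            throws IOException, InterruptedException {
        IOException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                out.close();                   // completes the last block
                return;                        // success
            } catch (IOException e) {
                last = e;                      // e.g. "does not have enough number of replicas"
                Thread.sleep(100L * attempt);  // simple linear backoff before retrying
            }
        }
        throw last;                            // give up after maxAttempts
    }
}
```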
Created 11-27-2017 10:51 PM
"Sleep and retry" is good way to handle the "not have enough number of replicas" problem.
For the "already the current lease holder" problem, you may call DistributedFileSystem.recoverLease(Path) to force lease recovery.
Hope it helps.
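A minimal sketch of forcing lease recovery before re-opening the file for append. This assumes your FileSystem is actually a DistributedFileSystem; the polling loop, retry count, and sleep interval are illustrative choices, not an official recipe:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

public final class LeaseRecoveryExample {
    // Trigger lease recovery and poll until the file is closed (or we time out).
    public static boolean recoverLease(DistributedFileSystem dfs, Path file)
            throws Exception {
        // recoverLease() returns true when the file is already closed
        // and the lease has been released.
        boolean recovered = dfs.recoverLease(file);
        int attempts = 0;
        while (!recovered && attempts++ < 10) {
            Thread.sleep(1000);                  // give the NameNode time to finish recovery
            recovered = dfs.isFileClosed(file);  // poll instead of re-triggering recovery
        }
        return recovered;
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        if (fs instanceof DistributedFileSystem) {
            DistributedFileSystem dfs = (DistributedFileSystem) fs;
            Path p = new Path("/PSG/20171102.idx");  // path from the question
            System.out.println("lease recovered: " + recoverLease(dfs, p));
        }
    }
}
```

Once recoverLease reports success, the append should no longer fail with "already the current lease holder", since the stale lease held by the previous DFSClient instance has been released.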
Created 11-28-2017 12:25 AM
For running recoverLease from the CLI, see https://community.hortonworks.com/questions/146012/force-closing-a-hdfs-file-still-open-because-unco...