Support Questions


How to handle: Unable to close file because the last block does not have enough number of replicas

Contributor

I have a Java application that appends recorded video into an HDFS file. Occasionally, after writing a batch of video frames, I get the following error when I try to close the FSDataOutputStream:

Unable to close file because the last block does not have enough number of replicas

When this happens, I sleep for 100 ms and retry, and the close then succeeds (my retry loop looks roughly like the sketch at the end of this post). However, the next time I try to open the file for append I get the following error:

Failed to APPEND_FILE /PSG/20171102.idx for DFSClient_NONMAPREDUCE_1265824578_479 on 192.168.3.224 because DFSClient_NONMAPREDUCE_1265824578_479 is already the current lease holder.

What is the proper way to handle a failed close attempt, and how should I deal with the lease error that follows? Thanks, David
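For reference, here is a simplified sketch of what I am doing today (class and variable names are illustrative, not my actual code):

import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// Simplified sketch of my current append/close logic.
public class IndexAppender {

    public void appendBatch(FileSystem fs, byte[] frameBytes)
            throws IOException, InterruptedException {
        FSDataOutputStream out = fs.append(new Path("/PSG/20171102.idx"));
        out.write(frameBytes);          // write one batch of video frames

        while (true) {
            try {
                out.close();            // occasionally fails: "not enough number of replicas"
                return;
            } catch (IOException e) {
                Thread.sleep(100);      // back off 100 ms, then retry the close
            }
        }
    }
}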

1 ACCEPTED SOLUTION

Rising Star

"Sleep and retry" is good way to handle the "not have enough number of replicas" problem.

For the "already the current lease holder" problem, you may call DistributedFileSystem.recoverLease(Path) to force lease recovery.

Hope it helps.
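For example, something along these lines (a rough sketch only; the retry count, sleep interval, and method names are placeholders, not a definitive implementation):

import java.io.IOException;

import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hdfs.DistributedFileSystem;

// Sketch only: retry count and sleep interval are arbitrary placeholder values.
public class HdfsCloseHelper {

    private static final int MAX_CLOSE_RETRIES = 10;
    private static final long RETRY_SLEEP_MS = 100L;

    // Retry close() until the last block has enough replicas, or give up.
    public static void closeWithRetry(FSDataOutputStream out)
            throws IOException, InterruptedException {
        for (int attempt = 0; ; attempt++) {
            try {
                out.close();
                return;
            } catch (IOException e) {
                if (attempt >= MAX_CLOSE_RETRIES) {
                    throw e;                      // give up after a bounded number of attempts
                }
                Thread.sleep(RETRY_SLEEP_MS);     // wait for replication to catch up, then retry
            }
        }
    }

    // Force lease recovery so a later append() is not rejected with
    // "... is already the current lease holder".
    public static void recoverLeaseBeforeAppend(DistributedFileSystem dfs, Path file)
            throws IOException, InterruptedException {
        // recoverLease() returns true once the file is closed and the lease has been released.
        while (!dfs.recoverLease(file)) {
            Thread.sleep(RETRY_SLEEP_MS);
        }
    }
}

Call the lease-recovery step on the file's path before re-opening it with append() if a previous writer did not close it cleanly.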
