Support Questions

Find answers, ask questions, and share your expertise
Announcements
Celebrating as our community reaches 100,000 members! Thank you!

What happens when a client tries to read a file that is already open for writing in HDFS?

Explorer
 
2 REPLIES

Super Collaborator

@Malay Sharma

If you are writing a new file to HDFS and try to read it at the same time, your read operation will fail with a 'File does not exist' error until the write completes and the file is closed.

If you are writing to an existing file via the 'appendToFile' command and try to read it mid-write, the read will wait until the append completes and then return the updated version of the file.

In the case of 'tail', it will stream out the entire contents being appended rather than only the last few lines.
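The visibility rule above (a newly created file is invisible to readers until the writer closes it) can be sketched with a toy model. This is not the real HDFS client API; `ToyHdfs` and its methods are invented here purely to illustrate the described semantics:

```python
class ToyHdfs:
    """Toy model of the HDFS visibility rule described above:
    a newly created file is not readable until the writer closes it."""

    def __init__(self):
        self.committed = {}    # path -> contents visible to readers
        self.open_writes = {}  # path -> contents still being written

    def create(self, path):
        # Open a brand-new file for writing; nothing is visible yet.
        self.open_writes[path] = ""

    def write(self, path, data):
        self.open_writes[path] += data

    def close(self, path):
        # Only on close does the file become visible to readers.
        self.committed[path] = self.open_writes.pop(path)

    def read(self, path):
        if path not in self.committed:
            # Mirrors the 'File does not exist' error a reader sees
            # while a new file is still being written.
            raise FileNotFoundError(path)
        return self.committed[path]
```

For example, reading `/user/demo/data.txt` between `create` and `close` raises `FileNotFoundError`, while a read after `close` returns the full contents.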

Expert Contributor

When a client tries to read (e.g. a SELECT) a file that is already open for writing (e.g. an INSERT/DELETE):

1) The client requests a block of the file from the HDFS file system.

2) If that block is currently open for writing, the read waits until the write operation completes, because the block's start/end IDs can change during the write.

3) The client retries up to "dfs.client.failover.max.attempts" (configured in hdfs-site.xml), e.g. 10 attempts. If the write completes in the meantime, the read succeeds.

4) If the client still cannot read within the maximum number of attempts, the read request fails.
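The retry behaviour in steps 3 and 4 can be sketched as a simple loop. This is a simplified model, not the actual HDFS client code; `read_with_retries`, `attempt_read`, and the default of 10 attempts (modelled on the example value of "dfs.client.failover.max.attempts" above) are assumptions for illustration:

```python
import time

def read_with_retries(attempt_read, max_attempts=10, delay_s=0.0):
    """Retry a read while the file is still being written.

    attempt_read: callable that returns the data, or raises IOError
                  while the write is still in progress.
    max_attempts: give up after this many tries (modelled on the
                  dfs.client.failover.max.attempts example above).
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return attempt_read()
        except IOError:
            if attempt == max_attempts:
                # Step 4: the read request fails after max attempts.
                raise
            time.sleep(delay_s)  # wait before retrying (step 3)
```

A reader whose underlying write completes before the attempt limit succeeds; one whose write never completes sees the final IOError propagate.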