<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Force closing a HDFS file still open (because incorrectly copied) in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Force-closing-a-HDFS-file-still-open-because-uncorrectly/m-p/180647#M70638</link>
    <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;We have a file that we cannot back up with distcp due to a length mismatch. After running fsck on this file I can see it is still open, but according to the file owner it should not be. Searching the application logs shows this message:&lt;/P&gt;&lt;PRE&gt;2017-10-31 18:28:13.466 INFO  [pr-8243-exec-12][o.a.hadoop.hdfs.DFSClient] Unable to close file because dfsclient  was unable to contact the HDFS servers. clientRunning false hdfsTimeout -1
2017-10-31 18:28:13.468 ERROR [pr-8243-exec-12][r.c.a.LogExceptionHandler] error while processing request - uri: /xxx/yyy- query string: [zzzzzzzzzzzzz] - exception: java.io.IOException: Unable to close file because dfsclient  was unable to contact the HDFS servers. clientRunning false hdfsTimeout -1
com.aaaa.bbb.ccc.ddd.exception.AmethystRuntimeException: java.io.IOException: Unable to close file because dfsclient  was unable to contact the HDFS servers. clientRunning false hdfsTimeout -1

&lt;/PRE&gt;&lt;P&gt;We restarted the HDFS service at that time, so we think this is the main cause of the problem.&lt;/P&gt;&lt;P&gt;Copying the file with "hdfs dfs -cp" creates a new file that is correctly closed, so we can probably replace the unclosed file. Nevertheless, is there a simpler way to directly close a file that HDFS still considers open?&lt;/P&gt;&lt;P&gt;Thanks for your help&lt;/P&gt;</description>
    <pubDate>Thu, 02 Nov 2017 22:45:15 GMT</pubDate>
    <dc:creator>Micael</dc:creator>
    <dc:date>2017-11-02T22:45:15Z</dc:date>
    <item>
      <title>Force closing a HDFS file still open (because incorrectly copied)</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Force-closing-a-HDFS-file-still-open-because-uncorrectly/m-p/180647#M70638</link>
      <description>&lt;P&gt;Hello,&lt;/P&gt;&lt;P&gt;We have a file that we cannot back up with distcp due to a length mismatch. After running fsck on this file I can see it is still open, but according to the file owner it should not be. Searching the application logs shows this message:&lt;/P&gt;&lt;PRE&gt;2017-10-31 18:28:13.466 INFO  [pr-8243-exec-12][o.a.hadoop.hdfs.DFSClient] Unable to close file because dfsclient  was unable to contact the HDFS servers. clientRunning false hdfsTimeout -1
2017-10-31 18:28:13.468 ERROR [pr-8243-exec-12][r.c.a.LogExceptionHandler] error while processing request - uri: /xxx/yyy- query string: [zzzzzzzzzzzzz] - exception: java.io.IOException: Unable to close file because dfsclient  was unable to contact the HDFS servers. clientRunning false hdfsTimeout -1
com.aaaa.bbb.ccc.ddd.exception.AmethystRuntimeException: java.io.IOException: Unable to close file because dfsclient  was unable to contact the HDFS servers. clientRunning false hdfsTimeout -1

&lt;/PRE&gt;&lt;P&gt;We restarted the HDFS service at that time, so we think this is the main cause of the problem.&lt;/P&gt;&lt;P&gt;Copying the file with "hdfs dfs -cp" creates a new file that is correctly closed, so we can probably replace the unclosed file. Nevertheless, is there a simpler way to directly close a file that HDFS still considers open?&lt;/P&gt;&lt;P&gt;Thanks for your help&lt;/P&gt;</description>
      <pubDate>Thu, 02 Nov 2017 22:45:15 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Force-closing-a-HDFS-file-still-open-because-uncorrectly/m-p/180647#M70638</guid>
      <dc:creator>Micael</dc:creator>
      <dc:date>2017-11-02T22:45:15Z</dc:date>
    </item>
    <item>
      <title>Re: Force closing a HDFS file still open (because incorrectly copied)</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Force-closing-a-HDFS-file-still-open-because-uncorrectly/m-p/180648#M70639</link>
      <description>&lt;P&gt;The hdfs debug recoverLease command can be used to close the file. The complete syntax is as follows:&lt;/P&gt;&lt;PRE&gt;hdfs debug recoverLease -path &amp;lt;path-of-the-file&amp;gt; [-retries &amp;lt;retry-times&amp;gt;]&lt;/PRE&gt;</description>
      <pubDate>Fri, 03 Nov 2017 01:15:42 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Force-closing-a-HDFS-file-still-open-because-uncorrectly/m-p/180648#M70639</guid>
      <dc:creator>xyao</dc:creator>
      <dc:date>2017-11-03T01:15:42Z</dc:date>
    </item>
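The lease-recovery workflow from the answer above can be sketched as a shell session. This is a minimal sketch, assuming a running HDFS cluster; the path /data/app/part-00000 is a hypothetical example, while fsck, -openforwrite, and debug recoverLease are standard HDFS CLI options.

```shell
# Confirm the file is still open: an open file is reported
# as OPENFORWRITE by fsck.
hdfs fsck /data/app/part-00000 -files -blocks -openforwrite

# Ask the NameNode to recover the lease and close the file.
# A few retries allow time for block recovery to complete.
hdfs debug recoverLease -path /data/app/part-00000 -retries 5

# Re-run fsck to verify the file is now closed and healthy.
hdfs fsck /data/app/part-00000 -files
```

Unlike replacing the file with "hdfs dfs -cp", recoverLease closes the existing file in place, so its path, permissions, and block locations are unchanged.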
    <item>
      <title>Re: Force closing a HDFS file still open (because incorrectly copied)</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Force-closing-a-HDFS-file-still-open-because-uncorrectly/m-p/180649#M70640</link>
      <description>&lt;P&gt;Thank you &lt;A rel="user" href="https://community.cloudera.com/users/289/xyao.html" nodeid="289"&gt;@Xiaoyu Yao&lt;/A&gt;, it works!&lt;/P&gt;</description>
      <pubDate>Fri, 03 Nov 2017 15:37:40 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Force-closing-a-HDFS-file-still-open-because-uncorrectly/m-p/180649#M70640</guid>
      <dc:creator>Micael</dc:creator>
      <dc:date>2017-11-03T15:37:40Z</dc:date>
    </item>
  </channel>
</rss>

