<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question How NiFi handles huge 1 TB files? in Support Questions</title>
    <link>https://community.cloudera.com/t5/Support-Questions/How-Nifi-handles-huge-1-TB-files/m-p/413169#M253910</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I need to transfer 1 TB files using chunking in NiFi.&lt;/P&gt;&lt;P&gt;Each file has 20 items of metadata associated with it that must remain intact, so both the metadata and the data have to survive breaking the file into 1000 chunks (chunking) and re-assembling it at the destination (de-chunking). Also, is the metadata for the large file duplicated onto each of the 1000 chunks, or does each chunk carry only a subset of it?&lt;/P&gt;&lt;P&gt;Someone mentioned that NiFi passes the chunk data through JVM memory on its way to the content repository.&lt;/P&gt;&lt;P&gt;Can I confirm whether file chunks pass through JVM memory as they are written to the content repository, for a large file (or any file, for that matter)? I was fairly sure they don't; otherwise the JVM heap size (limited by the machine's RAM) would limit how much large-file data could be read in, and that would limit large-file transfer speed. Is that correct?&lt;/P&gt;&lt;P&gt;I'm trying to confirm my understanding of how NiFi handles these large files.&lt;/P&gt;&lt;P&gt;Any help appreciated.&lt;/P&gt;</description>
    <pubDate>Thu, 18 Dec 2025 03:28:12 GMT</pubDate>
    <dc:creator>zzzz77</dc:creator>
    <dc:date>2025-12-18T03:28:12Z</dc:date>
    <item>
      <title>How NiFi handles huge 1 TB files?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/How-Nifi-handles-huge-1-TB-files/m-p/413169#M253910</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I need to transfer 1 TB files using chunking in NiFi.&lt;/P&gt;&lt;P&gt;Each file has 20 items of metadata associated with it that must remain intact, so both the metadata and the data have to survive breaking the file into 1000 chunks (chunking) and re-assembling it at the destination (de-chunking). Also, is the metadata for the large file duplicated onto each of the 1000 chunks, or does each chunk carry only a subset of it?&lt;/P&gt;&lt;P&gt;Someone mentioned that NiFi passes the chunk data through JVM memory on its way to the content repository.&lt;/P&gt;&lt;P&gt;Can I confirm whether file chunks pass through JVM memory as they are written to the content repository, for a large file (or any file, for that matter)? I was fairly sure they don't; otherwise the JVM heap size (limited by the machine's RAM) would limit how much large-file data could be read in, and that would limit large-file transfer speed. Is that correct?&lt;/P&gt;&lt;P&gt;I'm trying to confirm my understanding of how NiFi handles these large files.&lt;/P&gt;&lt;P&gt;Any help appreciated.&lt;/P&gt;</description>
      <pubDate>Thu, 18 Dec 2025 03:28:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/How-Nifi-handles-huge-1-TB-files/m-p/413169#M253910</guid>
      <dc:creator>zzzz77</dc:creator>
      <dc:date>2025-12-18T03:28:12Z</dc:date>
    </item>
    <item>
      <title>Re: How NiFi handles huge 1 TB files?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/How-Nifi-handles-huge-1-TB-files/m-p/413448#M254073</link>
      <description>&lt;P&gt;Hello&amp;nbsp;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/136792"&gt;@zzzz77&lt;/a&gt;,&lt;/P&gt;&lt;P&gt;Glad to have you in the community.&lt;/P&gt;&lt;P&gt;What you are asking for can be done with a flow like this:&lt;BR /&gt;GetFile → &lt;A href="https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.23.2/org.apache.nifi.processors.standard.SplitContent/" target="_self"&gt;SplitContent&lt;/A&gt; → Transfer → &lt;A href="https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-standard-nar/1.23.2/org.apache.nifi.processors.standard.MergeContent/" target="_self"&gt;MergeContent&lt;/A&gt; → PutFile&lt;/P&gt;&lt;P&gt;SplitContent will split the file, and the attributes will be duplicated onto every split, because they are saved on the FlowFile, not in the content.&lt;BR /&gt;Additional fragment attributes will also be added to each split.&lt;/P&gt;&lt;P&gt;MergeContent will rebuild the content and restore the original attributes, so the metadata will not be lost.&lt;/P&gt;</description>
      <pubDate>Fri, 30 Jan 2026 20:48:35 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/How-Nifi-handles-huge-1-TB-files/m-p/413448#M254073</guid>
      <dc:creator>vafs</dc:creator>
      <dc:date>2026-01-30T20:48:35Z</dc:date>
    </item>
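    <!--
    A minimal Python sketch (not NiFi code) of the split/merge behavior described in the reply above: each fragment carries a full copy of the parent's attributes plus bookkeeping attributes, and a defragment step rebuilds the original content and attributes. The attribute names fragment.identifier / fragment.index / fragment.count mirror the ones NiFi's SplitContent writes; the dict-based "FlowFile" shape here is purely illustrative.

    ```python
    import uuid

    def split_content(content: bytes, attributes: dict, chunk_size: int):
        """Split content into chunks; each chunk carries a full copy of the
        parent's attributes plus fragment bookkeeping attributes."""
        frag_id = str(uuid.uuid4())
        chunks = [content[i:i + chunk_size]
                  for i in range(0, len(content), chunk_size)]
        flowfiles = []
        for index, chunk in enumerate(chunks):
            attrs = dict(attributes)  # parent metadata duplicated onto every chunk
            attrs.update({
                "fragment.identifier": frag_id,
                "fragment.index": index,
                "fragment.count": len(chunks),
            })
            flowfiles.append({"attributes": attrs, "content": chunk})
        return flowfiles

    def merge_content(flowfiles):
        """Defragment: order chunks by fragment.index, rebuild the content,
        and keep the original (non-fragment) attributes on the merged file."""
        ordered = sorted(flowfiles, key=lambda f: f["attributes"]["fragment.index"])
        merged = b"".join(f["content"] for f in ordered)
        attrs = {k: v for k, v in ordered[0]["attributes"].items()
                 if not k.startswith("fragment.")}
        return {"attributes": attrs, "content": merged}
    ```

    This illustrates why the metadata survives chunking: it lives on each FlowFile, not inside the content, so duplicating it per chunk costs only attribute storage, and the merge step can restore it unchanged.
    -->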
    <item>
      <title>Re: How NiFi handles huge 1 TB files?</title>
      <link>https://community.cloudera.com/t5/Support-Questions/How-Nifi-handles-huge-1-TB-files/m-p/413477#M254076</link>
      <description>&lt;P&gt;&lt;a href="https://community.cloudera.com/t5/user/viewprofilepage/user-id/136792"&gt;@zzzz77&lt;/a&gt;&lt;/P&gt;&lt;P&gt;FlowFile metadata/attributes are held in NiFi heap memory. For queued FlowFiles, there is a configurable swap threshold in nifi.properties that swaps batches of 10,000 FlowFiles' worth of metadata/attributes to disk when the threshold is met. This swapping is there to minimize excessive heap usage when queues grow large. FlowFile content is not held in heap memory; however, &lt;STRONG&gt;some&lt;/STRONG&gt; processors may need to read the content into heap memory to perform their function. If you look at an individual component's documentation, you will notice a "&lt;STRONG&gt;System Resource Considerations&lt;/STRONG&gt;" section. If heap memory usage is a concern for that processor, it will be documented there.&lt;BR /&gt;&lt;BR /&gt;SplitContent processor docs example:&lt;/P&gt;&lt;P&gt;&lt;span class="lia-inline-image-display-wrapper lia-image-align-inline" image-alt="MattWho_0-1770047305065.png" style="width: 698px;"&gt;&lt;img src="https://community.cloudera.com/t5/image/serverpage/image-id/46606i233C04704950F16F/image-dimensions/698x143?v=v2" width="698" height="143" role="button" title="MattWho_0-1770047305065.png" alt="MattWho_0-1770047305065.png" /&gt;&lt;/span&gt;&lt;/P&gt;&lt;P&gt;Processors like SplitContent hold all the FlowFile metadata/attributes (not content) for every split FlowFile being produced in heap memory until all the output FlowFiles have been produced and committed to the downstream connection. These FlowFiles cannot be swapped to disk until they are committed to the downstream connection. So if a SplitContent were to produce 50,000 split FlowFiles, the attributes for all 50,000 would be held in heap. After they are committed to the downstream connection, 40,000 of those would be swapped to disk based on the default swap threshold, so the heap impact would spike but not persist.&lt;/P&gt;&lt;P&gt;Since you have not shared the specifics of your dataflow (which processors you are using), I can't provide any specific feedback. Where are the chunking and de-chunking happening? It sounds like this may be happening at the source and at the destination, with NiFi just moving the chunks from source to destination. How are you sending the chunks to NiFi and transferring them to the destination?&lt;/P&gt;&lt;P&gt;Please help our community grow. If any of the suggestions/solutions provided helped you solve your issue or answer your question, please take a moment to log in and click "&lt;SPAN&gt;&lt;EM&gt;&lt;STRONG&gt;&lt;FONT color="#FF0000"&gt;Accept as Solution&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/EM&gt;" on &lt;STRONG&gt;one or more&lt;/STRONG&gt; of them that helped.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Thank you,&lt;BR /&gt;Matt&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 02 Feb 2026 15:57:19 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Support-Questions/How-Nifi-handles-huge-1-TB-files/m-p/413477#M254076</guid>
      <dc:creator>MattWho</dc:creator>
      <dc:date>2026-02-02T15:57:19Z</dc:date>
    </item>
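    <!--
    A minimal Python sketch of the streaming point made in the reply above: content that is copied in small fixed-size buffers keeps memory usage bounded by the buffer size, not the file size. This is a conceptual illustration of why content does not need to fit in heap, not a reproduction of NiFi's content-repository internals; BUFFER_SIZE and stream_copy are illustrative names.

    ```python
    import io

    BUFFER_SIZE = 8192  # bytes resident in memory at any one moment

    def stream_copy(src, dst, buffer_size: int = BUFFER_SIZE) -> int:
        """Copy src to dst one buffer at a time; return total bytes copied.
        Memory use stays O(buffer_size) regardless of the stream's length."""
        total = 0
        while True:
            chunk = src.read(buffer_size)
            if not chunk:
                break
            dst.write(chunk)
            total += len(chunk)
        return total
    ```

    With this pattern a 1 TB transfer holds at most one 8 KB buffer of content in memory at a time, which matches the intuition in the original question that JVM heap size does not cap the size of file NiFi can move.
    -->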
  </channel>
</rss>

