<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: when QueryDatabaseTable queries a huge table, it easily causes the NiFi processor to break down in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/when-QueryDatabaseTable-query-the-hug-table-cause-the-nifi/m-p/135913#M39679</link>
    <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;The recommended approach is to use GenerateTableFetch and feed its output to the ExecuteSQL processor. GenerateTableFetch generates flow files containing the SQL queries to execute, which also lets you balance the load if you are running a NiFi cluster. In that processor you can set the Partition Size property to limit the number of rows fetched per query.&lt;/P&gt;&lt;P&gt;The error you are seeing suggests something different, though. Perhaps the JDBC driver is not fully implemented and does not support the properties of this processor. Still, it does sound to me like a memory issue overall. Give GenerateTableFetch a try.&lt;/P&gt;</description>
    <pubDate>Mon, 05 Sep 2016 13:33:46 GMT</pubDate>
    <dc:creator>pvillard</dc:creator>
    <dc:date>2016-09-05T13:33:46Z</dc:date>
    <item>
      <title>when QueryDatabaseTable queries a huge table, it easily causes the NiFi processor to break down</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/when-QueryDatabaseTable-query-the-hug-table-cause-the-nifi/m-p/135912#M39678</link>
      <description>&lt;P&gt;I asked a question on another page:&lt;/P&gt;&lt;P&gt;&lt;A href="https://community.hortonworks.com/questions/53941/nifi-querydatabasetables-properties-fetch-size-is.html#answer-form" target="_blank"&gt;https://community.hortonworks.com/questions/53941/nifi-querydatabasetables-properties-fetch-size-is.html#answer-form&lt;/A&gt;&lt;/P&gt;&lt;P&gt;But I am still puzzled.&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;In NiFi 1.x, 'Max Rows Per Flow File' can control the maximum number of rows in a flow file.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;But it seems that QueryDatabaseTable still reads all the rows into memory and only then splits the records into flow files. If the table is very large, NiFi easily breaks down when QueryDatabaseTable starts. (The database is MySQL.)&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;There is no row limit in the generated SELECT query, such as SELECT * FROM table LIMIT 0,100; -- MySQL, or SELECT * FROM table WHERE ROWNUM &amp;lt; 100; -- Oracle.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;The bulletin advised me to modify '&lt;/STRONG&gt;max_allowed_packet'&lt;STRONG&gt; in my.cnf, but it was useless.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Some of the errors:&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Unable to execute SQL select query SELECT * FROM employee due to java.sql.SQLException: Unknown character set index for field '25700' received from server.: java.sql.SQLException: Unknown character set index for field '25700' received from server.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;QueryDatabaseTable is unusable against a database with a huge table.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;Thanks for your reply,&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;David.&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;@&lt;A href="https://community.hortonworks.com/users/363/bbende.html"&gt;Bryan Bende&lt;/A&gt; @&lt;A href="https://community.hortonworks.com/users/641/mburgess.html"&gt;Matt Burgess&lt;/A&gt;&lt;/STRONG&gt;&lt;/P&gt;</description>
      <pubDate>Mon, 05 Sep 2016 11:01:00 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/when-QueryDatabaseTable-query-the-hug-table-cause-the-nifi/m-p/135912#M39678</guid>
      <dc:creator>zhangweigang20</dc:creator>
      <dc:date>2016-09-05T11:01:00Z</dc:date>
    </item>
    <item>
      <title>Re: when QueryDatabaseTable queries a huge table, it easily causes the NiFi processor to break down</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/when-QueryDatabaseTable-query-the-hug-table-cause-the-nifi/m-p/135913#M39679</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;The recommended approach is to use GenerateTableFetch and feed its output to the ExecuteSQL processor. GenerateTableFetch generates flow files containing the SQL queries to execute, which also lets you balance the load if you are running a NiFi cluster. In that processor you can set the Partition Size property to limit the number of rows fetched per query.&lt;/P&gt;&lt;P&gt;The error you are seeing suggests something different, though. Perhaps the JDBC driver is not fully implemented and does not support the properties of this processor. Still, it does sound to me like a memory issue overall. Give GenerateTableFetch a try.&lt;/P&gt;</description>
      <pubDate>Mon, 05 Sep 2016 13:33:46 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/when-QueryDatabaseTable-query-the-hug-table-cause-the-nifi/m-p/135913#M39679</guid>
      <dc:creator>pvillard</dc:creator>
      <dc:date>2016-09-05T13:33:46Z</dc:date>
    </item>
    <item>
      <title>Re: when QueryDatabaseTable queries a huge table, it easily causes the NiFi processor to break down</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/when-QueryDatabaseTable-query-the-hug-table-cause-the-nifi/m-p/135914#M39680</link>
      <description>&lt;P&gt;Thanks for your advice. I used GenerateTableFetch + ExecuteSQL to achieve my goal.&lt;/P&gt;</description>
      <pubDate>Tue, 06 Sep 2016 08:18:19 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/when-QueryDatabaseTable-query-the-hug-table-cause-the-nifi/m-p/135914#M39680</guid>
      <dc:creator>zhangweigang20</dc:creator>
      <dc:date>2016-09-06T08:18:19Z</dc:date>
    </item>
  </channel>
</rss>

