<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: SQOOP HANA to HIVE ORC in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/SQOOP-HANA-to-HIVE-ORC/m-p/97853#M11351</link>
    <description>&lt;P&gt;Yes, that solved the problem. Thanks!&lt;/P&gt;</description>
    <pubDate>Sat, 05 Dec 2015 00:40:22 GMT</pubDate>
    <dc:creator>vjain</dc:creator>
    <dc:date>2015-12-05T00:40:22Z</dc:date>
    <item>
      <title>SQOOP HANA to HIVE ORC</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/SQOOP-HANA-to-HIVE-ORC/m-p/97851#M11349</link>
      <description>&lt;P&gt;I am attempting to use Sqoop to import a HANA table of size 180 TB (compressed; 800 TB on disk) into a Hive table. When I pass LIMIT in the query argument, the number of rows I get is four times the amount passed as LIMIT, so a LIMIT of 250 fetched 1000 rows, and they are not duplicates.&lt;/P&gt;&lt;P&gt;Another issue I am facing is with fetch-size: when I pass the fetch size, the process errors out with the message "Search Limit exceeded".&lt;/P&gt;</description>
      <pubDate>Fri, 04 Dec 2015 04:58:29 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/SQOOP-HANA-to-HIVE-ORC/m-p/97851#M11349</guid>
      <dc:creator>vjain</dc:creator>
      <dc:date>2015-12-04T04:58:29Z</dc:date>
    </item>
    <item>
      <title>Re: SQOOP HANA to HIVE ORC</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/SQOOP-HANA-to-HIVE-ORC/m-p/97852#M11350</link>
      <description>&lt;A rel="user" href="https://community.cloudera.com/users/420/vjain.html" nodeid="420"&gt;@Vedant Jain&lt;/A&gt;&lt;P&gt;sqoop uses 4 mappers by  default. Try running with option -m 1 or any other number to see if it makes the difference. &lt;/P&gt;&lt;P&gt;Copying following line from &lt;A target="_blank" href="http://stackoverflow.com/questions/18586961/sqoop-with-sql-server-retrieving-more-records"&gt;this&lt;/A&gt; as it does make sense.  &lt;/P&gt;&lt;P&gt;Using the "top x" or "limit x" clauses do not make much sense with Sqoop as it can return different values on each query execution (there is no "order by"). Also in addition the clause will very likely confuse split generation, ending with not that easily deterministic outputs. Having said that I would recommend you to use only 1 mapper (-m 1 or --num-mappers 1) in case that you need to import predefined number of rows &lt;/P&gt;</description>
      <pubDate>Fri, 04 Dec 2015 09:03:23 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/SQOOP-HANA-to-HIVE-ORC/m-p/97852#M11350</guid>
      <dc:creator>nsabharwal</dc:creator>
      <dc:date>2015-12-04T09:03:23Z</dc:date>
    </item>
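    <!-- The accepted answer above recommends a single-mapper import so a LIMIT clause is
         applied once rather than once per mapper. A minimal sketch of such a sqoop
         invocation follows; the JDBC URL, host, port, schema, table, credentials, and
         target paths are placeholders, not details from this thread. -->
    <!--
    # Hedged sketch: import a fixed number of rows from HANA into Hive with one mapper.
    # With (hypothetical) placeholders for connection details; $CONDITIONS is required
    # by Sqoop in free-form queries and is expanded per split (trivial when -m 1).
    sqoop import \
      --driver com.sap.db.jdbc.Driver \
      --connect "jdbc:sap://hana-host:30015/?currentschema=MYSCHEMA" \
      --username myuser \
      --password-file /user/me/hana.pwd \
      --query 'SELECT * FROM MYTABLE WHERE $CONDITIONS LIMIT 250' \
      --num-mappers 1 \
      --target-dir /user/me/mytable_sample \
      --hive-import --hive-table mytable_sample
    # With --num-mappers 1 no --split-by column is needed, and the 250-row LIMIT is
    # executed exactly once, avoiding the 4x row counts reported in the question.
    -->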
    <item>
      <title>Re: SQOOP HANA to HIVE ORC</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/SQOOP-HANA-to-HIVE-ORC/m-p/97853#M11351</link>
      <description>&lt;P&gt;Yes, that solved the problem. Thanks!&lt;/P&gt;</description>
      <pubDate>Sat, 05 Dec 2015 00:40:22 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/SQOOP-HANA-to-HIVE-ORC/m-p/97853#M11351</guid>
      <dc:creator>vjain</dc:creator>
      <dc:date>2015-12-05T00:40:22Z</dc:date>
    </item>
  </channel>
</rss>

