<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Revisited: Import to Hive or HDFS? in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Revisited-Import-to-Hive-or-HDFS/m-p/117263#M34084</link>
    <description>&lt;P&gt;I don't see a reason for the first load to be a text or uncompressed Avro file. Using HCatalog, Sqoop can import directly into a Hive table stored as ORC, which would save you a lot of space because of the compression.&lt;/P&gt;&lt;P&gt;Once the initial data import is in Hive as ORC, you can then still continue and transform this data as necessary. If the reason for writing as text is access from Pig and MR, an HCatalog table can also be accessed from Pig/MR.&lt;/P&gt;</description>
    <pubDate>Thu, 07 Jul 2016 19:20:14 GMT</pubDate>
    <dc:creator>ravi1</dc:creator>
    <dc:date>2016-07-07T19:20:14Z</dc:date>
    <item>
      <title>Revisited: Import to Hive or HDFS?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Revisited-Import-to-Hive-or-HDFS/m-p/117262#M34083</link>
      <description>&lt;P&gt;HDP 2.4, installed using Ambari 2.2.2.0.&lt;/P&gt;&lt;P&gt;On my &lt;A target="_blank" href="https://community.hortonworks.com/questions/31863/import-to-hdfs-or-hive.html"&gt;previous question&lt;/A&gt;, I received comprehensive feedback from the community, based on which I decided to import the data from the RDBMS to HDFS (as text/Avro) and then create Hive external tables over it.&lt;/P&gt;&lt;P&gt;Then I realized that I had missed or misinterpreted a few things:&lt;/P&gt;&lt;OL&gt;
&lt;LI&gt;The ideas behind importing to HDFS first are:
&lt;OL&gt;
&lt;LI&gt;When the files are stored on HDFS, Hive as well as other tools (Pig, MR) and external/third-party tools can access them and process them in their own ways&lt;/LI&gt;&lt;LI&gt;Sqoop cannot directly create EXTERNAL tables; moreover, we need to load the data onto the cluster first and only PARTITION the tables after some period (&lt;STRONG&gt;when the DB developers are available to provide the business knowledge&lt;/STRONG&gt;)&lt;/LI&gt;&lt;/OL&gt;&lt;/LI&gt;&lt;LI&gt;A 1 TB RDBMS imported as text/Avro files onto HDFS will occupy approx. 3 TB there (given a replication factor of 3)&lt;/LI&gt;&lt;LI&gt;Creating a Hive EXTERNAL table does NOT consume much additional HDFS space; the 'raw/plain' EXTERNAL tables I created merely point to the imported files&lt;/LI&gt;&lt;LI&gt;&lt;STRONG&gt;&lt;U&gt;NOW the confusion begins&lt;/U&gt;&lt;/STRONG&gt;: I need to create EXTERNAL PARTITIONED tables from these 'raw/plain' tables. The final EXTERNAL PARTITIONED tables will again occupy space, and because of point 1.1 we CANNOT delete the original imported files. This duplication of data will lead to even more consumption of HDFS space&lt;/LI&gt;&lt;/OL&gt;&lt;P&gt;Are my fears justified? If yes, how shall I proceed? If not, what am I missing (say, &lt;STRONG&gt;HCatalog usage&lt;/STRONG&gt;)?&lt;/P&gt;</description>
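The HDFS-first flow described in the question can be sketched with Sqoop and Beeline. This is a minimal sketch, not a tested recipe: the connection string, credentials, database/table/column names (sales.orders), the HDFS paths, and the HiveServer2 host are all hypothetical placeholders.

```shell
# Step 1 (hypothetical source): import an RDBMS table to HDFS as Avro files.
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username etl -P \
  --table orders \
  --as-avrodatafile \
  --target-dir /data/raw/sales/orders \
  --num-mappers 4

# Step 2: overlay a 'raw/plain' EXTERNAL table on the imported files.
# Dropping this table later leaves the files on HDFS untouched.
beeline -u jdbc:hive2://hivehost:10000 -e "
CREATE EXTERNAL TABLE raw_orders (order_id BIGINT, amount DOUBLE, order_date STRING)
STORED AS AVRO
LOCATION '/data/raw/sales/orders';"
```

As point 3 of the question notes, the EXTERNAL table itself adds almost no HDFS usage; only the imported files (times the replication factor) count against capacity.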
      <pubDate>Thu, 07 Jul 2016 16:20:22 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Revisited-Import-to-Hive-or-HDFS/m-p/117262#M34083</guid>
      <dc:creator>kaliyugantagoni</dc:creator>
      <dc:date>2016-07-07T16:20:22Z</dc:date>
    </item>
    <item>
      <title>Re: Revisited: Import to Hive or HDFS?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Revisited-Import-to-Hive-or-HDFS/m-p/117263#M34084</link>
      <description>&lt;P&gt;I don't see a reason for the first load to be a text or uncompressed Avro file. Using HCatalog, Sqoop can import directly into a Hive table stored as ORC, which would save you a lot of space because of the compression.&lt;/P&gt;&lt;P&gt;Once the initial data import is in Hive as ORC, you can then still continue and transform this data as necessary. If the reason for writing as text is access from Pig and MR, an HCatalog table can also be accessed from Pig/MR.&lt;/P&gt;</description>
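The direct import suggested here corresponds to Sqoop's HCatalog options (--hcatalog-table, --create-hcatalog-table, --hcatalog-storage-stanza). A minimal sketch, with the connection details and the table names as hypothetical placeholders:

```shell
# Import straight into a Hive/HCatalog table stored as ORC,
# skipping the intermediate text/Avro copy on HDFS.
sqoop import \
  --connect jdbc:mysql://dbhost/sales \
  --username etl -P \
  --table orders \
  --hcatalog-database default \
  --hcatalog-table orders_orc \
  --create-hcatalog-table \
  --hcatalog-storage-stanza "stored as orcfile" \
  --num-mappers 4
```

The storage stanza is appended to the CREATE TABLE statement that Sqoop generates, so the same mechanism works for other file formats Hive supports.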
      <pubDate>Thu, 07 Jul 2016 19:20:14 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Revisited-Import-to-Hive-or-HDFS/m-p/117263#M34084</guid>
      <dc:creator>ravi1</dc:creator>
      <dc:date>2016-07-07T19:20:14Z</dc:date>
    </item>
    <item>
      <title>Re: Revisited: Import to Hive or HDFS?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Revisited-Import-to-Hive-or-HDFS/m-p/117264#M34085</link>
      <description>&lt;P&gt;Can you check whether I have understood correctly:&lt;/P&gt;&lt;UL&gt;
&lt;LI&gt;Sqoop import (&lt;STRONG&gt;with&lt;/STRONG&gt; HCatalog integration) to Hive&lt;/LI&gt;&lt;LI&gt;Use HCatalog in case someone needs to access and process the data in Pig, MR etc. I came across the following paragraph in an O'Reilly book (and the same tone is reflected in several posts on the Internet)&lt;/LI&gt;&lt;/UL&gt;&lt;BLOCKQUOTE&gt;A drawback of ORC as of this writing is that it was designed specifically for Hive, and so is not a general-purpose storage format that can be used with non-Hive MapReduce interfaces such as Pig or Java, or other query engines such as Impala. Work is under way to address these shortcomings, though&lt;/BLOCKQUOTE&gt;&lt;P&gt;There will be several RDBMS schemas that will be &lt;STRONG&gt;&lt;U&gt;imported onto HDFS and LATER partitioned&lt;/U&gt;&lt;/STRONG&gt;, etc., and processed. In this context, can you elaborate on 'Once the initial data import is in Hive as ORC, you can then still continue and transform this data as necessary.'?&lt;/P&gt;&lt;P&gt;I have the following questions:&lt;/P&gt;&lt;UL&gt;
&lt;LI&gt;Suppose the Sqoop import to Hive is done &lt;STRONG&gt;WITHOUT partitions&lt;/STRONG&gt; (no --hive-partition-key), i.e. all tables are Hive '&lt;STRONG&gt;managed&lt;/STRONG&gt;' tables, and, say, this uses 800 GB of HDFS space compared to 1 TB in the source RDBMS. Won't more space be occupied when I then create PARTITIONED tables?&lt;/LI&gt;&lt;LI&gt;Will it be possible for some third-party, non-Java tool to read the data by relying on HCatalog?&lt;/LI&gt;&lt;/UL&gt;</description>
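On the Pig/MR point: the HCatalog integration being discussed is HCatLoader for Pig (and HCatInputFormat for MapReduce jobs). A hedged sketch, assuming the hypothetical ORC table default.orders_orc from earlier exists and has an order_year column:

```shell
# -useHCatalog puts the HCatalog jars on Pig's classpath.
# HCatLoader reads the table's schema and storage format from the
# metastore, so this script does not care that the files are ORC.
pig -useHCatalog -e "
orders = LOAD 'default.orders_orc' USING org.apache.hive.hcatalog.pig.HCatLoader();
recent = FILTER orders BY order_year == 2016;
DUMP recent;"
```

For third-party non-Java tools, the usual routes are HiveServer2 over JDBC/ODBC or HCatalog's WebHCat REST interface, rather than reading the ORC files directly.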
      <pubDate>Thu, 07 Jul 2016 19:50:40 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Revisited-Import-to-Hive-or-HDFS/m-p/117264#M34085</guid>
      <dc:creator>kaliyugantagoni</dc:creator>
      <dc:date>2016-07-07T19:50:40Z</dc:date>
    </item>
    <item>
      <title>Re: Revisited: Import to Hive or HDFS?</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Revisited-Import-to-Hive-or-HDFS/m-p/117265#M34086</link>
      <description>&lt;P&gt;I am not sure what they mean by ORC not being a general-purpose format. In any case, you are still going through HCatalog here (there are HCatalog APIs for both MR and Pig).&lt;/P&gt;&lt;P&gt;When I said you can transform this data as necessary, I mean things like creating new partitions, buckets, sort orders, and Bloom filter indexes, and even redesigning tables for better access patterns.&lt;/P&gt;&lt;P&gt;Any data transform will duplicate data if you also want to keep the raw copy.&lt;/P&gt;</description>
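One way to picture "transform this data as necessary" is a dynamic-partition INSERT from the raw table into a partitioned ORC table. A sketch only, with hypothetical table and column names carried over from the earlier examples:

```shell
beeline -u jdbc:hive2://hivehost:10000 -e "
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

CREATE TABLE orders_part (order_id BIGINT, amount DOUBLE)
PARTITIONED BY (order_year INT)
STORED AS ORC;

-- The partition column must come last in the SELECT list;
-- Hive routes each row to its partition dynamically.
INSERT OVERWRITE TABLE orders_part PARTITION (order_year)
SELECT order_id, amount, year(order_date) FROM raw_orders;"
```

As the answer says, the partitioned copy duplicates data for as long as the raw table is kept; ORC's compression usually keeps that overhead well below the size of the original text/Avro files.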
      <pubDate>Thu, 07 Jul 2016 23:24:17 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Revisited-Import-to-Hive-or-HDFS/m-p/117265#M34086</guid>
      <dc:creator>ravi1</dc:creator>
      <dc:date>2016-07-07T23:24:17Z</dc:date>
    </item>
  </channel>
</rss>

