<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Hive partition and bucketing in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-partition-and-bucketing/m-p/106002#M46427</link>
    <description>&lt;P&gt;This should be easy enough for you to test:&lt;/P&gt;&lt;P&gt;1. Insert values 1 to 40 for column user_id into table user_info_bucketed&lt;/P&gt;&lt;P&gt;2. Insert 400 more rows, with user_id values 41 to 440&lt;/P&gt;&lt;P&gt;3. Each bucket should then hold roughly 17 or 18 rows (440 rows spread across 25 buckets)&lt;/P&gt;&lt;P&gt;4. You can then run checks such as:&lt;/P&gt;&lt;PRE&gt;SELECT user_id,INPUT__FILE__NAME FROM user_info_bucketed WHERE user_id = 5;
SELECT user_id,INPUT__FILE__NAME FROM user_info_bucketed WHERE user_id = 50;
SELECT user_id,INPUT__FILE__NAME FROM user_info_bucketed WHERE user_id = 101;
SELECT user_id,INPUT__FILE__NAME FROM user_info_bucketed WHERE user_id = 160;
&lt;/PRE&gt;&lt;P&gt;Or you can inspect the bucket files at the table's physical location on HDFS and count the lines in each.&lt;/P&gt;</description>
    <pubDate>Wed, 16 Nov 2016 15:23:12 GMT</pubDate>
    <dc:creator>srai1</dc:creator>
    <dc:date>2016-11-16T15:23:12Z</dc:date>
    <item>
      <title>Hive partition and bucketing</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-partition-and-bucketing/m-p/106000#M46425</link>
      <description>&lt;P&gt;Hi geeks,&lt;/P&gt;&lt;P&gt;I am creating a Hive table using the command below:&lt;/P&gt;&lt;P&gt;CREATE TABLE user_info_bucketed(user_id BIGINT, firstname STRING, lastname STRING) PARTITIONED BY(timestamp STRING) CLUSTERED BY(user_id) INTO 25 BUCKETS;&lt;/P&gt;&lt;P&gt;On a daily basis I collect records from MySQL, push them to HDFS, and create a partition (using the ADD PARTITION command).&lt;/P&gt;&lt;P&gt;1. After adding a partition this way, will 25 bucket files be created?&lt;/P&gt;&lt;P&gt;2. What happens when there are more than 25 unique user_ids (say 40)? How will they be distributed across the 25 buckets?&lt;/P&gt;</description>
      <pubDate>Wed, 16 Nov 2016 15:07:45 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-partition-and-bucketing/m-p/106000#M46425</guid>
      <dc:creator>gobi_subramani</dc:creator>
      <dc:date>2016-11-16T15:07:45Z</dc:date>
    </item>
    <item>
      <title>Re: Hive partition and bucketing</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-partition-and-bucketing/m-p/106001#M46426</link>
      <description>&lt;P&gt;&lt;A rel="user" href="https://community.cloudera.com/users/12833/gobisubramani.html" nodeid="12833"&gt;@Gobi Subramani&lt;/A&gt;
&lt;/P&gt;&lt;P&gt;The bucket number is determined by the expression hash_function(bucketing_column) mod num_buckets. For an integer column such as user_id, the hash is the value itself, so with 25 buckets the bucket is simply user_id mod 25. For example, user_id 26 lands in the bucket for remainder 1 (26 mod 25 = 1), the same bucket as user_id 1, 51, 76, and so on. So 40 unique user_ids will be spread across the 25 buckets by their remainders, with some buckets receiving two ids.&lt;/P&gt;</description>
      <pubDate>Wed, 16 Nov 2016 15:21:05 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-partition-and-bucketing/m-p/106001#M46426</guid>
      <dc:creator>rajkumar_singh</dc:creator>
      <dc:date>2016-11-16T15:21:05Z</dc:date>
    </item>
    <item>
      <title>Re: Hive partition and bucketing</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-partition-and-bucketing/m-p/106002#M46427</link>
      <description>&lt;P&gt;This should be easy enough for you to test:&lt;/P&gt;&lt;P&gt;1. Insert values 1 to 40 for column user_id into table user_info_bucketed&lt;/P&gt;&lt;P&gt;2. Insert 400 more rows, with user_id values 41 to 440&lt;/P&gt;&lt;P&gt;3. Each bucket should then hold roughly 17 or 18 rows (440 rows spread across 25 buckets)&lt;/P&gt;&lt;P&gt;4. You can then run checks such as:&lt;/P&gt;&lt;PRE&gt;SELECT user_id,INPUT__FILE__NAME FROM user_info_bucketed WHERE user_id = 5;
SELECT user_id,INPUT__FILE__NAME FROM user_info_bucketed WHERE user_id = 50;
SELECT user_id,INPUT__FILE__NAME FROM user_info_bucketed WHERE user_id = 101;
SELECT user_id,INPUT__FILE__NAME FROM user_info_bucketed WHERE user_id = 160;
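-- Editor's sketch, not part of the original reply: the GROUP BY below
-- counts rows per bucket file via the virtual INPUT__FILE__NAME column,
-- and pmod(user_id, 25) shows each row's expected bucket number,
-- assuming the default integer hash (the value itself).
SELECT INPUT__FILE__NAME, count(*) AS rows_in_bucket
FROM user_info_bucketed
GROUP BY INPUT__FILE__NAME;
SELECT user_id, pmod(user_id, 25) AS expected_bucket
FROM user_info_bucketed
WHERE user_id IN (5, 30, 55);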
&lt;/PRE&gt;&lt;P&gt;Or you can inspect the bucket files at the table's physical location on HDFS and count the lines in each.&lt;/P&gt;</description>
      <pubDate>Wed, 16 Nov 2016 15:23:12 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Hive-partition-and-bucketing/m-p/106002#M46427</guid>
      <dc:creator>srai1</dc:creator>
      <dc:date>2016-11-16T15:23:12Z</dc:date>
    </item>
  </channel>
</rss>

