Member since: 03-23-2015
Posts: 1288
Kudos Received: 114
Solutions: 98

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 4347 | 06-11-2020 02:45 PM |
|  | 5950 | 05-01-2020 12:23 AM |
|  | 3776 | 04-21-2020 03:38 PM |
|  | 4046 | 04-14-2020 12:26 AM |
|  | 3027 | 02-27-2020 05:51 PM |
05-07-2019
09:15 PM
1 Kudo
I don't see a super easy way, but the steps below might help:
1. If this table is not external, make it external first: ALTER TABLE <table> SET TBLPROPERTIES('EXTERNAL'='TRUE')
2. Drop the table.
3. Create a new table and define the same partition column.
4. Manually move the files from the other 4 partition directories evenly into the 6 partition directories that you want to keep.
5. Remove the now-empty directories under the table.
6. Run MSCK REPAIR TABLE to recover the 6 remaining partitions (a sketch follows below).
Hope the above helps.
Eric
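For reference, a minimal Hive SQL sketch of steps 1, 2, 3, and 6, assuming a hypothetical table mydb.events with placeholder columns (id, payload) and partition column (dt); adjust the names and schema to match the real table:

-- Step 1: make the managed table external so DROP TABLE keeps the files on HDFS
ALTER TABLE mydb.events SET TBLPROPERTIES ('EXTERNAL'='TRUE');

-- Step 2: drop the table definition (the data directories stay in place)
DROP TABLE mydb.events;

-- Step 3: recreate the table with the same schema and partition column
CREATE EXTERNAL TABLE mydb.events (id BIGINT, payload STRING)
PARTITIONED BY (dt STRING)
LOCATION '/user/hive/warehouse/mydb.db/events';

-- Step 6: after moving the files and removing the empty directories,
-- recover the 6 remaining partitions from the directory layout
MSCK REPAIR TABLE mydb.events;
SHOW PARTITIONS mydb.events;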
04-12-2019
08:08 PM
Hi,
On top of what Gomath has suggested, I also have another idea:

1. Assume that your current data on HDFS is laid out as:
/user/hive/warehouse/dbname/tablename/columnA=1
/user/hive/warehouse/dbname/tablename/columnA=2
/user/hive/warehouse/dbname/tablename/columnA=3
/user/hive/warehouse/dbname/tablename/columnA=4
...
and you want it to become:
/user/hive/warehouse/dbname/tablename/columnA=1/columnB=1
/user/hive/warehouse/dbname/tablename/columnA=2/columnB=1
/user/hive/warehouse/dbname/tablename/columnA=3/columnB=1
/user/hive/warehouse/dbname/tablename/columnA=4/columnB=1
...

2. Create a new table that has the same columns, but with the extra partition column columnB, assuming it will be created under HDFS in /user/hive/warehouse/dbname/new_tablename (see the sketch after this list).

3. The new table needs the following top-level partition directory structure:
/user/hive/warehouse/dbname/new_tablename/columnA=1
/user/hive/warehouse/dbname/new_tablename/columnA=2
/user/hive/warehouse/dbname/new_tablename/columnA=3
/user/hive/warehouse/dbname/new_tablename/columnA=4
...

4. Create those top-level partition directories for the new table:
hadoop fs -mkdir /user/hive/warehouse/dbname/new_tablename/columnA=1
hadoop fs -mkdir /user/hive/warehouse/dbname/new_tablename/columnA=2
hadoop fs -mkdir /user/hive/warehouse/dbname/new_tablename/columnA=3
hadoop fs -mkdir /user/hive/warehouse/dbname/new_tablename/columnA=4

5. Then MOVE the data from the old table into the new table's partitions:
hadoop fs -mv /user/hive/warehouse/dbname/tablename/columnA=1 /user/hive/warehouse/dbname/new_tablename/columnA=1/columnB=1
hadoop fs -mv /user/hive/warehouse/dbname/tablename/columnA=2 /user/hive/warehouse/dbname/new_tablename/columnA=2/columnB=1
hadoop fs -mv /user/hive/warehouse/dbname/tablename/columnA=3 /user/hive/warehouse/dbname/new_tablename/columnA=3/columnB=1
hadoop fs -mv /user/hive/warehouse/dbname/tablename/columnA=4 /user/hive/warehouse/dbname/new_tablename/columnA=4/columnB=1
...
The value to use for columnB is up to you, since it is a new partition.

6. Go back to Hive and run the command below, which will recover all the missing partitions:
MSCK REPAIR TABLE dbname.new_tablename;

7. Run SELECT * FROM dbname.new_tablename to confirm that the data is correct.

This way, you do not need to duplicate any data on HDFS. Hope the above helps.
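For step 2, a minimal Hive DDL sketch, assuming hypothetical data columns (id, payload) and INT partition columns; the real column list and types should match the existing table:

-- Hypothetical new table that adds columnB as a second partition column
CREATE EXTERNAL TABLE dbname.new_tablename (
  id BIGINT,
  payload STRING
)
PARTITIONED BY (columnA INT, columnB INT)
LOCATION '/user/hive/warehouse/dbname/new_tablename';

-- After the hadoop fs -mkdir and -mv steps, recover the partitions (step 6)
MSCK REPAIR TABLE dbname.new_tablename;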
04-12-2019
07:32 PM
1 Kudo
Hi, I would suggest using INT rather than STRING. First, searching on an INT column is faster, and second, as you said, you can do numeric comparisons, which behave differently from STRING comparisons (see the sketch below).
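A short Hive SQL sketch of the difference, using a hypothetical table t with a column id:

-- If id is INT, this matches rows numerically, e.g. a row with id = 10 matches (10 > 9)
SELECT * FROM t WHERE id > 9;

-- If id is STRING, the comparison is lexicographic: '10' sorts before '9',
-- so a row with id = '10' would NOT match this predicate
SELECT * FROM t WHERE id > '9';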
04-01-2019
12:36 AM
Note that on CDH 5.9.1 there is also a hardcoded size limit in Hive, so you also need to upgrade to a later CDH version. Hive on CDH 5.13.3 no longer checks for the hardcoded 4000-character size limit.
03-31-2019
06:50 PM
Thank you for testing. Have you ever submitted a workflow? The workflow still doesn't work. Look at this:

First, I saved an HQL document that contained the test parameter value '76710': http://demo.gethue.com/hue/editor?editor=292133 (screenshot: HQL document)

Second, I clicked the "Submit" button, set the parameter value to '92801', and passed it (screenshot: parameter). But the zipcode in the HQL results is still '76710'. The results are empty if I save the HQL document with no parameter value. I have tested many cases.

I think the problem is the HQL itself, because it does not contain the variable. Please look at the attached capture of the Oozie log (screenshot: HQL): Oozie ran the HQL query without the "$zip" variable. This function is very important for me. Please help me find a solution.
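For context, a minimal sketch of the kind of parameterized HQL being described, with a hypothetical table and column name (customers, zipcode); the actual saved query is not shown in the post:

-- Hue should substitute the ${zip} variable at submission time;
-- the problem reported here is that the query reaches Oozie without it
SELECT *
FROM customers
WHERE zipcode = '${zip}';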
03-31-2019
12:42 AM
1 Kudo
Hi, I assume that you are working on a managed table rather than an external table? This could be caused by a lack of permissions for the user who ran the DROP command to remove the underlying HDFS path. Check the HMS server log to see if you can find any error messages.
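A quick Hive SQL sketch for confirming the table type and its HDFS location, using a placeholder table name:

-- Look for "Table Type: MANAGED_TABLE" (vs. EXTERNAL_TABLE) and the "Location:" line;
-- for a managed table, the user running DROP TABLE needs permission to delete that HDFS path
DESCRIBE FORMATTED dbname.tablename;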
03-25-2019
08:06 AM
Hi, So how do you scale the Hive Metastore when you need to support more than 80 active connections? According to Cloudera's documentation, they only mention supporting up to 80 connections, which requires 16-24 GB of heap. Does going beyond 80 connections require continuing to grow the heap memory? How big can it get, since 24 GB is already quite large, isn't it? Attaching Cloudera's memory setup recommendations: https://www.cloudera.com/documentation/enterprise/5-15-x/topics/cdh_ig_hiveserver2_configure.html Best, Eyal
03-16-2019
07:37 PM
It should be OK because the MIT Kerberos client is running fine.
[cloudera@~]$ telnet quickstart.cloudera 88
Trying 10.10.10.190...
Connected to quickstart.cloudera.
Escape character is '^]'.
^]
03-07-2019
02:54 PM
Hi, Thank you for your response. I am running it for one account number:

select distinct
  msg_bet_id,
  atchd_doc_ref_id,
  idv_cd as Resident_Country_Code,
  idv_tin.tin as TIN,
  idv_tin.tin_issg_cntry_cd as TIN_Country_Code
from tt17.crs_account_report a1
  lateral view outer explode(hldr_idv_rsdnt_cntry_cds) exp1 as idv_cd
  lateral view outer explode(hldr_idv_tins) exp2 as idv_tin
where accountnumber = 123
02-09-2019
04:47 AM
Hi @Tim Armstrong, While IMPALA-1618 is still open and unresolved, I can confirm that this "workaround" is safe and efficient (I have been using it at large scale for more than 9 months), so it is the only solution I have found to solve, or rather get around, this big problem. I hope the main problem will be fixed ASAP. Thanks for the remark.