Member since: 05-02-2017 · Posts: 360 · Kudos Received: 65 · Solutions: 22
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 13349 | 02-20-2018 12:33 PM |
| | 1506 | 02-19-2018 05:12 AM |
| | 1862 | 12-28-2017 06:13 AM |
| | 7140 | 09-28-2017 09:25 AM |
| | 12180 | 09-25-2017 11:19 AM |
09-12-2017
08:47 AM
Hi @RANKESH NATH A Hive table with the ACID property enabled will work well for your use case. With MERGE you will be able to perform these inserts/updates easily; in typical data warehousing, SCD Type 2 tables are implemented easily with the MERGE statement in Hive. As for Spark: it handles complex computation with much ease, but when it comes to updating existing records it is better to go with Hive than Spark. Hope it helps!
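To make the MERGE approach concrete, here is a minimal sketch of an SCD Type 2 style upsert against an ACID table. All table and column names (dim_customer, stg_customer, is_current, etc.) are illustrative assumptions, not from the original question:

```sql
-- Sketch only: assumes Hive 2.2+ with ACID enabled and a transactional
-- target table. Names here are made up for illustration.
MERGE INTO dim_customer AS d
USING stg_customer AS s
ON d.customer_id = s.customer_id AND d.is_current = true
-- Close out the current version when an attribute changed
WHEN MATCHED AND d.address <> s.address THEN
  UPDATE SET is_current = false, end_date = current_date()
-- Brand-new keys get inserted as the current version
WHEN NOT MATCHED THEN
  INSERT VALUES (s.customer_id, s.address, current_date(), NULL, true);
```

Note that a single MERGE updates or inserts each target row at most once, so a full SCD Type 2 flow typically needs a second pass (or a union in the staging source) to insert the new version of rows whose old version was just closed out.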
08-25-2017
05:50 AM
Hi @Vivekanandan Gunasekaran The log states that a file already exists in the location where you are trying to create the Hive table. You may need to delete that file and re-create the table, or create the new Hive table pointing to a different location. Hope it helps!
08-18-2017
01:54 PM
1 Kudo
@Saurab Dahal Yes, it's achievable, but a few tweaks have to be made. The partitioned table should be created with an additional field ("month") alongside sale_date: create the Hive table with month as the partition column. When inserting into the table, extract the month from sale_date and pass it to the insert statement (with dynamic partitioning, the partition column must come last in the SELECT): INSERT INTO TABLE table_name PARTITION (month) SELECT col1, col2, sale_date, MONTH(sale_date) FROM source_table; The above command should work. Make sure the below properties are enabled: set hive.exec.dynamic.partition=true; set hive.exec.dynamic.partition.mode=nonstrict; Hope it helps!
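Putting the pieces above together, a minimal end-to-end sketch might look like this (table and column names are illustrative assumptions):

```sql
-- Sketch only: a simple sales schema invented for illustration.
CREATE TABLE sales_partitioned (
  col1 STRING,
  col2 STRING,
  sale_date DATE
)
PARTITIONED BY (month INT);

-- Required for dynamic partition inserts
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;

-- The dynamic partition column (month) must be the last column selected
INSERT INTO TABLE sales_partitioned PARTITION (month)
SELECT col1, col2, sale_date, MONTH(sale_date)
FROM source_table;
```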
08-17-2017
02:19 PM
@palgaba You mean without a major issue?
08-17-2017
02:17 PM
@pavan p If it answers your question, please choose it as the best answer!
08-17-2017
01:44 PM
@Mallik Sunkara Were you able to query the table before altering the column name? I believe you are facing this issue only after renaming the column. Is my understanding right?
08-16-2017
12:56 PM
Thanks @Shawn Weeks
08-16-2017
07:57 AM
@Greg Keys Thanks. I have actually already implemented it the way you mentioned, but that approach is easier for files with a small number of columns. I wanted to build a job that stays easy to maintain even when we have a large number of columns. In such a case I came across multi-delimiter and regex handling, and wanted to know how it can be implemented.
08-14-2017
01:46 PM
@Luis Ruiz Are you trying to insert from one table to another and facing this issue? If so, when inserting from one table to the other you may need to convert the columns into a MAP, since you have used MAP in the second table.
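As a sketch of that conversion, Hive's built-in map() function can build a map column from scalar columns during the insert. Table and column names below are illustrative assumptions, since the original question's schema isn't shown:

```sql
-- Sketch only: assumes the target table's second column is MAP<STRING,STRING>.
-- target_table, source_table, and all columns are invented for illustration.
INSERT INTO TABLE target_table
SELECT col1,
       map('key1', col2, 'key2', col3)  -- build a map from scalar columns
FROM source_table;
```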
08-14-2017
09:52 AM
1 Kudo
I have a file with data like the following:

provider_specialty,provider_specialty_desc
1,General,Practice|third column
2,General Surgery|third column
3,Allergy/Immunology|third column
4,Otolaryngology|third column
5,Anesthesiology|third column
6,Cardiology|third column
7,Dermatology|third column
8,Family Practice|third column
9,Interventional Pain Management|third column

It has 3 columns: the first and second columns are separated by ','; the second and third are separated by '|'. A few values in the second column also contain the delimiter as part of the value. Is it possible to handle this in Hive using a multi-delimiter approach? Could anyone help with the logic to create the table for the data provided above?
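One possible sketch uses Hive's RegexSerDe, which can tolerate a delimiter appearing inside a field by anchoring on the first ',' and the last '|'. The table name and file location below are assumptions for illustration:

```sql
-- Sketch only: RegexSerDe splits each line into capture groups.
-- The regex takes everything up to the FIRST ',' as column 1, everything
-- after the LAST '|' as column 3, and the middle as column 2, so commas
-- inside the second column survive.
CREATE EXTERNAL TABLE provider_specialty (
  id STRING,
  specialty_desc STRING,
  third_col STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
  "input.regex" = "^([^,]+),(.*)\\|([^|]*)$"
)
STORED AS TEXTFILE
LOCATION '/path/to/data';  -- illustrative path
```

With this pattern, a line like "1,General,Practice|third column" parses as id = "1", specialty_desc = "General,Practice", third_col = "third column".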
Labels:
- Apache Hadoop
- Apache Hive