I am loading a CSV file into a Hive ORC table via a DataFrame temporary table. After loading, the data in the Hive table still contains the double quotes.
Input file:
"Arpit","Jain",123
"Qwee","ffhh",5778
How can I remove these double quotes (introduced by the CSV format) at the time of inserting into the Hive table?
I am loading a CSV file into an ORC Hive table using a DataFrame temporary table, but in the Hive table the values are loaded with double quotes.
How can I remove the double quotes?
Input CSV file in HDFS:
"Arpit","Jain",1234,"India"
"ABC","abcd",7657,"India"
Created 11-15-2016 06:49 PM
You will need to use OpenCSVSerde: https://cwiki.apache.org/confluence/display/Hive/CSV+Serde
Just add this to your CREATE TABLE DDL (and use the appropriate delimiter as the separator character):
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde' WITH SERDEPROPERTIES ( "separatorChar" = ",", "quoteChar" = "\"" )
One limitation is that it stores all fields as strings. See the link above and this one: https://community.hortonworks.com/questions/56611/hive-ignoring-data-type-declarations-in-create-tab...
There are workarounds, such as loading with OpenCSVSerde into a temporary table and then loading that (CREATE TABLE ... AS SELECT ...) into an ORC table.
Alternatively, you could use Pig to strip the double quotes first and then load the cleaned data.
If this is what you were looking for, let me know by accepting the answer; otherwise, let me know of any gaps.
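The temp-table workaround described above could be sketched roughly as follows (table names, column names, and the HDFS path are hypothetical; OpenCSVSerde reads every column as STRING, so the CTAS step is where the intended types are restored):

```sql
-- Staging table: OpenCSVSerde strips the quotes, but types every column as STRING.
CREATE EXTERNAL TABLE staging_csv (
  first_name STRING,
  last_name  STRING,
  id         STRING
)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES ("separatorChar" = ",", "quoteChar" = "\"")
LOCATION '/path/to/csv/dir';

-- Final ORC table: CTAS with an explicit cast back to the intended type.
CREATE TABLE final_orc STORED AS ORC AS
SELECT first_name, last_name, CAST(id AS INT) AS id
FROM staging_csv;
```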
Created 11-16-2016 10:30 PM
When you CREATE TABLE ... AS SELECT ... into the ORC table, don't forget to CAST to the proper data types to match your target table. Some of the fields may be converted implicitly; others will not.
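For example (column names hypothetical), the casts in the CTAS might look like this:

```sql
CREATE TABLE target_orc STORED AS ORC AS
SELECT
  name,                                     -- STRING to STRING: no cast needed
  CAST(id AS INT)               AS id,      -- explicit cast from the serde's STRING
  CAST(amount AS DECIMAL(10,2)) AS amount   -- will not convert implicitly
FROM csv_staging;
```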
Created on 12-02-2018 03:09 PM - last edited on 12-21-2021 11:16 AM by ask_bill_brooks
This doesn't work for me; the full script is below:
CREATE TABLE sr.sr2013 (
creation_date STRING,
status STRING,
first_3_chars_of_postal_code STRING,
intersection_street_1 STRING,
intersection_street_2 STRING,
ward STRING,
service_request_type STRING,
division STRING,
section STRING )
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
WITH SERDEPROPERTIES (
'colelction.delim'='\u0002',
'mapkey.delim'='\u0003',
'serialization.format'=',',
'field.delim'=',',
'skip.header.line.count'='1',
'quoteChar'= "\"") ;
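ROW FORMAT DELIMITED uses LazySimpleSerDe, which ignores the quoteChar property, so the quotes survive. A corrected sketch of the same DDL using OpenCSVSerde instead (note that skip.header.line.count belongs in TBLPROPERTIES, not SERDEPROPERTIES):

```sql
CREATE TABLE sr.sr2013 (
  creation_date STRING,
  status STRING,
  first_3_chars_of_postal_code STRING,
  intersection_street_1 STRING,
  intersection_street_2 STRING,
  ward STRING,
  service_request_type STRING,
  division STRING,
  section STRING)
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
  'separatorChar' = ',',
  'quoteChar'     = '"')
TBLPROPERTIES ('skip.header.line.count' = '1');
```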
Created 12-02-2018 03:40 PM
Impala rejected the change to:
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
Created 12-21-2021 08:26 AM