
Creating an Impala External Table from fixed-width CSV

Seaport (Expert Contributor)

Here is the code.

create external table testtable1
(code string, codesystem string)
ROW FORMAT SERDE 'org.apache.hadoop.hive.contrib.serde2.RegexSerDe'
WITH SERDEPROPERTIES (
    "input.regex" = "(.{27})(.{50})"
)
LOCATION '/data/raw/testtable1';

The error message is:

ParseException: Syntax error in line 3:undefined: ROW FORMAT SERDE 'org.apache.hadoop.hiv... ^ Encountered: IDENTIFIER Expected: DELIMITED CAUSED BY: Exception: Syntax error

It looks like Impala's CREATE TABLE only accepts ROW FORMAT DELIMITED.

Then how can I create a Hive table with a fixed-width layout? Should I just create it outside Impala, through Hive, and then do the other data operations on the table via Impala?

Thanks.

1 ACCEPTED SOLUTION

mszurap

Hi @Seaport, RegexSerDe is in the contrib package, which is not officially supported; you can use it in some parts of the platform, but the different components may not give you full support for it.

I would recommend preprocessing the data files into a commonly consumable format (such as CSV) before ingesting them into the cluster.

Alternatively, you can ingest the data into a table that has only a single string column, and then do the processing/validation/formatting/transformation of the data while inserting it into a proper final table with the columns you need. During the insert you can use regex- or substring-type functions/UDFs to extract the fields you need from the fixed-width records in the single-column table.
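For illustration, here is a minimal sketch of that staging approach in Impala SQL. The staging table name, the '\001' delimiter trick, and the Parquet final table are assumptions for the example; the 27- and 50-character field widths are taken from the regex in the question.

-- Staging table: each fixed-width record lands in a single string column.
-- '\001' is assumed never to occur in the data, so lines are not split.
CREATE EXTERNAL TABLE testtable1_raw (line STRING)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\001'
LOCATION '/data/raw/testtable1';

-- Final table with the real columns (Parquet chosen here as an example format).
CREATE TABLE testtable1 (code STRING, codesystem STRING) STORED AS PARQUET;

-- Extract each field by position during the insert.
INSERT INTO testtable1
SELECT trim(substr(line, 1, 27))  AS code,
       trim(substr(line, 28, 50)) AS codesystem
FROM testtable1_raw;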

I hope this helps,

Best regards, Miklos


2 REPLIES

Seaport (Expert Contributor)

@mszurap Thanks for the response. I actually took the second option you mentioned, ingesting the data into a table with only a single string column, but I was not sure whether it was the right approach, so I appreciate the confirmation.

Regards,