Champion
Posts: 600
Registered: 05-16-2016

Re: SQOOP IMPORT --map-column-hive ignored

@kerjo,

I was thinking of a workaround: type casting on the Hive side. I understand that your --map-column-hive option is being ignored. Correct me if I am wrong.
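
Something like this is what I had in mind (just a sketch; the table and column names, mem_import and logged_time, are placeholders for whatever your import produced):

-- Assumption: the Parquet import left the timestamp column behind as a string.
-- Cast it to a timestamp at query time on the Hive side:
SELECT CAST(logged_time AS timestamp) AS logged_time
FROM mem_import;

-- Or hide the cast behind a view so downstream queries see a timestamp:
CREATE VIEW mem_import_v AS
SELECT CAST(logged_time AS timestamp) AS logged_time
FROM mem_import;

Note that CAST(string AS timestamp) only succeeds when the string is already in Hive's yyyy-MM-dd HH:mm:ss format; otherwise it returns NULL.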
Posts: 618
Kudos: 71
Solutions: 36
Registered: 04-06-2015

Re: SQOOP IMPORT --map-column-hive ignored


@kerjo and @manpreet2, I would like to clarify something. Are you both facing the same issue, or something similar?

 

If the issues are only similar, it may be better for @manpreet2 to post the second issue in a new thread to avoid confusion. If they are the same issue, please disregard my question and carry on. :)

Cy Jervis, Community Manager - I'm not an expert but will supply relevant content from time to time. :)


New Contributor
Posts: 2
Registered: 11-07-2016

Re: SQOOP IMPORT --map-column-hive ignored


@kerjo, I'm seeing the same issue.

 

  • CDH 5.8.2 (with Sqoop 1.4.6)
  • MariaDB 5.5, AWS Redshift, and others

Given the following sqoop invocation:

 

sqoop import                                   \
  --map-column-hive LoggedTime=timestamp       \
  --connect jdbc:mysql://madb-t:3306/Sardata   \
  --username reader                            \
  --password 'xx'                              \
  --hive-table 'memory'                        \
  --table 'Memory'                             \
  --as-parquetfile                             \
  --target-dir '/user/sardata/mem'             \
  --hive-overwrite                             \
  --hive-import

 

I find that the --map-column-hive option is totally ignored. (It doesn't matter if I set it to LoggedTime=string, LoggedTime=blah, or PurpleMonkeyDishwasher=sdfjsaldkjf. It is ignored.)

 

I noticed tonight that if I avoid Parquet and instead use --as-textfile, the --map-column-hive option works correctly. I do not know the reason for this behavior at the moment, and I still need to get the files into Parquet format for Impala to report against. Just wanted to share my observation.
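
In case it helps anyone, the two-step route this points to looks roughly like the following (a sketch only; memory is the Hive table from my command above, and memory_parquet is a hypothetical name for the copy):

-- Step 1: import with --as-textfile plus --map-column-hive, which works.
-- Step 2: copy the text-backed table into a Parquet table for Impala:
CREATE TABLE memory_parquet STORED AS PARQUET AS
SELECT * FROM memory;

-- Step 3: in impala-shell, make Impala pick up the new table:
-- INVALIDATE METADATA memory_parquet;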

--
Cloudera CDH 5.8.2 / CentOS 7
Explorer
Posts: 20
Registered: 11-26-2015

Re: SQOOP IMPORT --map-column-hive ignored

Hi @eriks,

I am busy this week but I think I can try to build a fix next week.
Johann
New Contributor
Posts: 2
Registered: 04-12-2017

Re: SQOOP IMPORT --map-column-hive ignored

Hi @kerjo,

Did you fix the --map-column-hive ignored issue?
Could you please share your ideas? It would be very helpful.

Regards,
Ganeshbabu R
Explorer
Posts: 6
Registered: 10-26-2017

Re: SQOOP IMPORT --map-column-hive ignored


Hi @kerjo,

 

I was facing the same issue with --map-column-java and --map-column-hive while importing a date/timestamp column from Oracle into a Hive Parquet table using Sqoop. My Hive column is of type string.

Version CDH 5.12.1, Parquet 1.5.0.

 

A partial workaround I discovered is to convert the Date column to a string using a free-form query in Sqoop.

 

Input column in Oracle (type DATE):

 

'16.11.2017 09:44:11'

 

--query

 

select t.trans_id, to_char(t.trans_date_sysdate) trans_date_char from test_table_1 t where $CONDITIONS

Result in Hive looks like this:

16-NOV-17

Unfortunately, this is not enough: I am still not able to import the HH:mm:ss (time) part.

Explorer
Posts: 20
Registered: 11-26-2015

Re: SQOOP IMPORT --map-column-hive ignored

Hello,

 

I haven't done anything on this topic yet. My idea is to fix the Sqoop code, but I need one or two days to do it...

Explorer
Posts: 6
Registered: 10-26-2017

Re: SQOOP IMPORT --map-column-hive ignored

EDIT: I have successfully imported the Date column from Oracle into Hive by modifying my query:

 

select t.trans_id, to_char(t.trans_date_sysdate, 'YYYYMMDD_HH24MISS') trans_date_char from test_table_1 t where $CONDITIONS
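
If you also need a real Hive timestamp rather than the formatted string, something along these lines should work on top of the imported column (a sketch; my_hive_table stands in for wherever trans_date_char landed):

-- Parse the 'YYYYMMDD_HH24MISS' string (e.g. 20171116_094411) back into a
-- Hive timestamp using the matching Java-style pattern:
SELECT trans_id,
       CAST(from_unixtime(unix_timestamp(trans_date_char, 'yyyyMMdd_HHmmss')) AS timestamp) AS trans_date
FROM my_hive_table;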

 
