10-14-2016 05:31 AM - edited 10-14-2016 05:33 AM
If the issues are only similar, it may be better for @manpreet2 to post the second issue in a new thread to avoid confusion. If they are the same issue, please disregard my question and carry on. :)
11-07-2016 08:39 PM - edited 11-07-2016 08:41 PM
@kerjo, I'm seeing the same issue.
Given the following sqoop invocation:
sqoop import \
--map-column-hive LoggedTime=timestamp \
--connect jdbc:mysql://madb-t:3306/Sardata \
--username reader \
--password 'xx' \
--hive-table 'memory' \
--table 'Memory' \
--as-parquetfile \
--target-dir '/user/sardata/mem' \
--hive-overwrite \
--hive-import
I find that the --map-column-hive option is totally ignored. (It doesn't matter if I set it to LoggedTime=string, LoggedTime=blah, or PurpleMonkeyDishwasher=sdfjsaldkjf. It is ignored.)
I noticed tonight that if I avoid Parquet and instead use --as-textfile, the --map-column-hive option works correctly. I don't know the reason for this behavior at the moment, and I still need to get the files into Parquet format for Impala to report against, but I wanted to share the observation.
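In case it helps anyone hit by the same thing, here is a sketch of a two-step workaround based on that observation (I have not verified it end to end; the staging table name memory_staging and its target dir are hypothetical): import as text, where --map-column-hive works, then copy into a Parquet table with a CTAS in Hive so Impala can query it.

sqoop import \
--map-column-hive LoggedTime=timestamp \
--connect jdbc:mysql://madb-t:3306/Sardata \
--username reader \
--password 'xx' \
--hive-table 'memory_staging' \
--table 'Memory' \
--as-textfile \
--target-dir '/user/sardata/mem_staging' \
--hive-overwrite \
--hive-import

Then in Hive:

-- Copy the staging table into a Parquet table for Impala
CREATE TABLE memory STORED AS PARQUET AS SELECT * FROM memory_staging;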
11-28-2017 06:13 AM - edited 11-28-2017 06:23 AM
I was facing the same issue with --map-column-java and --map-column-hive while importing a date/timestamp column from an Oracle RDBMS into a Hive Parquet table using Sqoop. My Hive column is of type string.
Version CDH 5.12.1, Parquet 1.5.0.
A partial workaround I discovered is to convert the Date column to a string using a free-form query in Sqoop.
Input column in Oracle (type DATE):
select t.trans_id, to_char(t.trans_date_sysdate) trans_date_char from test_table_1 t where $CONDITIONS
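For completeness, here is roughly how that free-form query plugs into the Sqoop import (the Oracle connect string, target dir, and Hive table name below are placeholders; with --query, Sqoop requires an explicit --target-dir, and --split-by is needed when running more than one mapper):

sqoop import \
--connect jdbc:oracle:thin:@//oradb-host:1521/ORCL \
--username reader \
--password 'xx' \
--query 'select t.trans_id, to_char(t.trans_date_sysdate) trans_date_char from test_table_1 t where $CONDITIONS' \
--split-by t.trans_id \
--target-dir '/user/sardata/test_table_1' \
--as-parquetfile \
--hive-table 'test_table_1' \
--hive-overwrite \
--hive-import

Note the single quotes around the query, which keep $CONDITIONS from being expanded by the shell.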
The result in Hive is the date as a string in the session's default format (to_char without a format mask falls back to NLS_DATE_FORMAT, which typically omits the time of day).
Unfortunately, this is not enough; I am still not able to import the HH:mm:ss part.
11-30-2017 04:45 AM
EDIT: I have successfully imported the Date column from Oracle into Hive by modifying my query:
select t.trans_id, to_char(t.trans_date_sysdate, 'YYYYMMDD_HH24MISS') trans_date_char from test_table_1 t where $CONDITIONS
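Since the column still lands in Hive as a string, it can be turned back into a real timestamp at query time, or in a CTAS into the final Parquet table. A sketch, assuming the Hive table kept the name test_table_1; note that Hive's unix_timestamp uses Java-style patterns, so the Oracle mask 'YYYYMMDD_HH24MISS' corresponds to 'yyyyMMdd_HHmmss' on the Hive side:

-- Parse the imported string and cast it to a Hive timestamp
SELECT trans_id,
       CAST(from_unixtime(unix_timestamp(trans_date_char, 'yyyyMMdd_HHmmss')) AS timestamp) AS trans_date
FROM test_table_1;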
09-06-2018 07:37 PM
I'm also facing the same issue. It would be great if the Cloudera expert team could pitch in, confirm whether this is a bug or not, and shed some light on other possible workarounds.
These issues are show-stoppers; easy access to resolutions would be a win-win for everybody.