
SQOOP IMPORT --map-column-hive ignored

Re: SQOOP IMPORT --map-column-hive ignored


I was thinking of a workaround: type casting on the Hive side. I understand that your --map-column-hive option is being ignored; correct me if I am wrong.

Re: SQOOP IMPORT --map-column-hive ignored

Community Manager

@kerjo and @manpreet2 I would like to clarify something. Are you both facing the same issue or something similar?


If the issues are only similar, it may be better for @manpreet2 to post the second issue in a new thread to avoid confusion. If they are the same issue, please disregard my question and carry on. :)

Cy Jervis, Community Manager


Re: SQOOP IMPORT --map-column-hive ignored

New Contributor

@kerjo, I'm seeing the same issue.


  • CDH 5.8.2 (w/ sqoop 1.4.6)
  • MariaDB 5.5, or AWS Redshift, or..

Given the following sqoop invocation:


sqoop                                         \
import                                        \
--map-column-hive LoggedTime=timestamp        \
--connect jdbc:mysql://madb-t:3306/Sardata    \
--username reader                             \
--password 'xx'                               \
--hive-table 'memory'                         \
--table 'Memory'                              \
--as-parquetfile                              \
--target-dir '/user/sardata/mem'              \
--hive-overwrite


I find that the --map-column-hive option is totally ignored. (It doesn't matter if I set it to LoggedTime=string, LoggedTime=blah, or PurpleMonkeyDishwasher=sdfjsaldkjf. It is ignored.)


I noticed tonight that if I avoid Parquet and instead use --as-textfile, the --map-column-hive option works correctly. I do not know the reason for this behavior at the moment, and I still need to get the files into Parquet format for Impala to report against. Just wanted to share my observation.

Cloudera CDH 5.8.2 / CentOS 7

Re: SQOOP IMPORT --map-column-hive ignored

Hi @eriks,

I am busy this week, but I think I can try to build a fix next week.

Re: SQOOP IMPORT --map-column-hive ignored

New Contributor
Hi @kerjo,

Did you fix the --map-column-hive issue?
Could you please share your ideas? It would be very helpful.

Ganeshbabu R

Re: SQOOP IMPORT --map-column-hive ignored


Hi kerjo,


I was facing the same issue with --map-column-java and --map-column-hive while importing a date/timestamp column from Oracle RDBMS into a Hive Parquet table using Sqoop. My Hive column is of type string.

Version CDH 5.12.1, Parquet 1.5.0.


A partial workaround I discovered is to convert the Date column to a string using a free-form query in Sqoop.


Input column in Oracle (type Date):


'16.11.2017 09:44:11'




The free-form query used in the Sqoop import:

select t.trans_id, to_char(t.trans_date_sysdate) trans_date_char from test_table_1 t where $CONDITIONS

The result in Hive contains only the date portion of the value.

Unfortunately, this is not enough: I am still not able to import the HH:mm:SS part.
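One possible explanation for the lost time portion (my assumption, not confirmed in the thread): to_char without an explicit format mask falls back to the session's NLS_DATE_FORMAT, which by default is usually a date-only mask with no time-of-day component. The same effect can be sketched in Python, with strftime codes standing in for Oracle's format masks:

```python
from datetime import datetime

# The Oracle Date value quoted above, parsed at full precision.
value = datetime.strptime("16.11.2017 09:44:11", "%d.%m.%Y %H:%M:%S")

# Stand-in for a date-only default mask such as Oracle's 'DD-MON-RR':
date_only = value.strftime("%d-%b-%y")       # time-of-day is dropped
# Stand-in for an explicit mask like 'DD.MM.YYYY HH24:MI:SS':
full = value.strftime("%d.%m.%Y %H:%M:%S")   # full timestamp preserved

print(date_only)
print(full)
```

If this is the cause, supplying an explicit format mask to to_char (as done later in this thread) is the fix.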




Re: SQOOP IMPORT --map-column-hive ignored




I haven't done anything on this topic yet. My idea is to fix the Sqoop code, but I need one or two days to do it...

Re: SQOOP IMPORT --map-column-hive ignored


EDIT: I have successfully imported the Date column from Oracle into Hive by modifying my query:


select t.trans_id, to_char(t.trans_date_sysdate,'YYYYMMDD_HH24MiSS') trans_date_char from test_table_1 t where $CONDITIONS
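As a sanity check on what that mask produces, here is a quick Python equivalent (Python used only for illustration; the strftime codes are my mapping of Oracle's YYYY/MM/DD/HH24/MI/SS mask elements):

```python
from datetime import datetime

# Sample Oracle Date value from earlier in the thread.
value = datetime.strptime("16.11.2017 09:44:11", "%d.%m.%Y %H:%M:%S")

# strftime equivalent of Oracle's 'YYYYMMDD_HH24MiSS' mask:
# YYYY -> %Y, MM -> %m, DD -> %d, HH24 -> %H, MI -> %M, SS -> %S
as_string = value.strftime("%Y%m%d_%H%M%S")
print(as_string)  # -> 20171116_094411
```

A string in this zero-padded shape also sorts chronologically, which keeps it usable for reporting until the column can be imported as a real timestamp.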


Re: SQOOP IMPORT --map-column-hive ignored


I'm also facing the same issue. It would be great if the Cloudera expert team could pitch in, confirm whether this is a bug, and shed some light on other possible workarounds.


These issues are show-stoppers; easy access to resolutions would be a win-win for everybody.


Re: SQOOP IMPORT --map-column-hive ignored

New Contributor

To add a little detail here, I have a column selection in my source query from Oracle like:



If I leave the `T` out of the ISO 8601 format (I have a space above), it comes through as a string into my Parquet file. However, if I change this column to the following (notice the "T" in the date/time format):



then even with 

--map-column-java ACTIVITY_TSTZ_CHAR=String

in my sqoop command, I get a timestamp error: 


Error: SQLException in nextKeyValue
	at org.apache.sqoop.mapreduce.db.DBRecordReader.nextKeyValue(
	at org.apache.hadoop.mapred.MapTask$NewTrackingRecordReader.nextKeyValue(
	at org.apache.hadoop.mapreduce.task.MapContextImpl.nextKeyValue(
	at org.apache.hadoop.mapred.MapTask.runNewMapper(
	at org.apache.hadoop.mapred.YarnChild$
	at Method)
	at org.apache.hadoop.mapred.YarnChild.main(
Caused by: java.sql.SQLDataException: ORA-01821: date format not recognized
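A likely cause (my reading, not confirmed in the thread): in an Oracle to_char format mask, a literal character such as T must be wrapped in double quotes, e.g. 'YYYY-MM-DD"T"HH24:MI:SS'; an unquoted T is not a recognized mask element, which matches ORA-01821. The same strict-parsing behavior can be illustrated with Python's datetime:

```python
from datetime import datetime

iso_with_t = "2017-11-16T09:44:11"

# Format expects a space separator; the literal 'T' in the data
# is not recognized, so parsing fails (analogous to ORA-01821).
try:
    datetime.strptime(iso_with_t, "%Y-%m-%d %H:%M:%S")
    print("parsed")
except ValueError:
    print("format not recognized")

# Declaring the 'T' explicitly in the format parses cleanly.
parsed = datetime.strptime(iso_with_t, "%Y-%m-%dT%H:%M:%S")
print(parsed.isoformat())  # -> 2017-11-16T09:44:11
```

If the Oracle behavior matches, double-quoting the T inside the to_char mask should avoid the error while keeping the ISO 8601 shape.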


