10-14-2016 05:31 AM - edited 10-14-2016 05:33 AM
If the issues are only similar, it may be better for @manpreet2 to post the second issue in a new thread to avoid confusion. If they are the same issue, please disregard my question and carry on. :)
Cy Jervis, Community Moderator - I'm not an expert but will supply relevant content from time to time. :)
11-07-2016 08:39 PM - edited 11-07-2016 08:41 PM
@kerjo, I'm seeing the same issue.
Given the following sqoop invocation:
sqoop import \
  --map-column-hive LoggedTime=timestamp \
  --connect jdbc:mysql://madb-t:3306/Sardata \
  --username reader \
  --password 'xx' \
  --hive-table 'memory' \
  --table 'Memory' \
  --as-parquetfile \
  --target-dir '/user/sardata/mem' \
  --hive-overwrite \
  --hive-import
I find that the --map-column-hive option is ignored entirely. (It doesn't matter whether I set it to LoggedTime=string, LoggedTime=blah, or even PurpleMonkeyDishwasher=sdfjsaldkjf; the option simply has no effect.)
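For anyone trying to reproduce this, one quick way to see that the mapping was dropped is to look at the schema Hive ended up with. A minimal check, assuming the hive-table name 'memory' from the command above:

-- Run in the Hive shell (hive or beeline).
-- If --map-column-hive were honored, LoggedTime would show as timestamp;
-- with the Parquet import above, the mapping is not applied.
DESCRIBE memory;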
I noticed tonight that if I avoid Parquet and use --as-textfile instead, the --map-column-hive option works correctly. I don't know the reason for this behavior yet, and I still need to get the files into Parquet format for Impala to report against. Just wanted to share my observation.
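In case it helps anyone else stuck on this, here is the workaround I'm considering based on that observation. Just a sketch, not tested end to end; the staging table name memory_text and staging directory /user/sardata/mem_text are my own placeholders. The idea is to import as text, where the column mapping is honored, then have Hive rewrite the data as Parquet so Impala can use it.

# Step 1: text-format import; --map-column-hive works here per the observation above.
sqoop import \
  --map-column-hive LoggedTime=timestamp \
  --connect jdbc:mysql://madb-t:3306/Sardata \
  --username reader \
  --password 'xx' \
  --table 'Memory' \
  --hive-table 'memory_text' \
  --as-textfile \
  --target-dir '/user/sardata/mem_text' \
  --hive-overwrite \
  --hive-import

# Step 2: copy the staging table into a Parquet-backed table for Impala.
hive -e "DROP TABLE IF EXISTS memory;
         CREATE TABLE memory STORED AS PARQUET AS SELECT * FROM memory_text;"

(Impala would then need an INVALIDATE METADATA memory; before it sees the new table.)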