11-07-2016 08:39 PM
@kerjo, I'm seeing the same issue on CDH 5.8.2 (with Sqoop 1.4.6), whether the source is MariaDB 5.5, AWS Redshift, or others. Given the following Sqoop invocation:

sqoop \
import \
--map-column-hive LoggedTime=timestamp \
--connect jdbc:mysql://madb-t:3306/Sardata \
--username reader \
--password 'xx' \
--hive-table 'memory' \
--table 'Memory' \
--as-parquetfile \
--target-dir '/user/sardata/mem' \
--hive-overwrite \
--hive-import

I find that the --map-column-hive option is completely ignored. (It makes no difference whether I set it to LoggedTime=string, LoggedTime=blah, or even PurpleMonkeyDishwasher=sdfjsaldkjf; it is ignored either way.) I noticed tonight that if I avoid Parquet and use --as-textfile instead, --map-column-hive works correctly. I don't know the reason for this behavior yet, and I still need the files in Parquet format for Impala to report against. Just wanted to share my observation.
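For what it's worth, the workaround I'm planning to try is an untested sketch: import as a text file so --map-column-hive is honored, then have Hive rewrite the result as a Parquet table for Impala. The table names memory_text and memory_parquet and the target directory below are placeholders I made up, not anything from the original job:

sqoop \
import \
--map-column-hive LoggedTime=timestamp \
--connect jdbc:mysql://madb-t:3306/Sardata \
--username reader \
--password 'xx' \
--hive-table 'memory_text' \
--table 'Memory' \
--as-textfile \
--target-dir '/user/sardata/mem_text' \
--hive-overwrite \
--hive-import

# Rewrite the text-backed Hive table as Parquet so Impala can report against it
hive -e "CREATE TABLE memory_parquet STORED AS PARQUET AS SELECT * FROM memory_text;"

The obvious downside is a second copy of the data during the conversion step, but it would at least get the column types right in the Parquet table.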