
Unable to import (using Sqoop) data into a Hive partitioned table stored in Parquet format


Is anyone familiar with importing data into a Hive partitioned table stored in Parquet format?

 

Any recommendations or corrections for the Sqoop import below would be appreciated.

 

Sqoop Command:

 

sqoop import --connect "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=XXXX)(PORT=1521))(CONNECT_DATA=(SERVER=dedicated)(SERVICE_NAME=XXXXX)))" --username XXXX --password XXXX --query "SELECT QUERY" -m 1 --hive-import --hive-database XXXX --hive-table XXXXX --target-dir /XXXX/XXXX/ --hive-partition-key part_key --hive-partition-value 'part_key_val' --as-parquetfile
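One thing worth checking (an assumption, not a verified fix): the original command has no space between the `--hive-database` value and `--hive-table`, and the Kite-based Parquet writer that Sqoop uses for `--hive-import --as-parquetfile` is known to be fragile with `--hive-partition-key`. A commonly suggested workaround is to skip `--hive-import` entirely, land the Parquet files in a partition-style HDFS directory, and then attach that directory as a partition from Hive. Below is a sketch of that approach; all XXXX values, `SELECT QUERY`, `part_key`, and `part_key_val` are placeholders carried over from the original post. Also note that `--query` imports require the literal `$CONDITIONS` token in the WHERE clause.

```
# Sketch only: import straight to HDFS as Parquet, bypassing --hive-import.
# Connection details, credentials, and the query are placeholders.
sqoop import \
  --connect "jdbc:oracle:thin:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=XXXX)(PORT=1521))(CONNECT_DATA=(SERVER=dedicated)(SERVICE_NAME=XXXXX)))" \
  --username XXXX --password XXXX \
  --query "SELECT QUERY WHERE \$CONDITIONS" \
  -m 1 \
  --as-parquetfile \
  --target-dir /XXXX/XXXX/part_key=part_key_val

# Then, in Hive, register the directory as a partition of the
# (pre-created) partitioned table:
#   ALTER TABLE xxxx.xxxxx ADD IF NOT EXISTS PARTITION (part_key='part_key_val')
#     LOCATION '/XXXX/XXXX/part_key=part_key_val';
```

This sidesteps Kite's partitioned-dataset writer (the `PartitionedDatasetWriter` in the stack trace), which is what raises the "Cannot construct key, missing provided value" error.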

 

Error: java.lang.IllegalArgumentException: Cannot construct key, missing provided value: patition_key: part_key
at org.kitesdk.shaded.com.google.common.base.Preconditions.checkArgument(Preconditions.java:115)
at org.kitesdk.data.spi.EntityAccessor.partitionValue(EntityAccessor.java:128)
at org.kitesdk.data.spi.EntityAccessor.keyFor(EntityAccessor.java:111)
at org.kitesdk.data.spi.filesystem.PartitionedDatasetWriter.write(PartitionedDatasetWriter.java:158)
at org.kitesdk.data.mapreduce.DatasetKeyOutputFormat$DatasetRecordWriter.write(DatasetKeyOutputFormat.java:325)
at org.kitesdk.data.mapreduce.DatasetKeyOutputFormat$DatasetRecordWriter.write(DatasetKeyOutputFormat.java:304)
at org.apache.hadoop.mapred.MapTask$NewDirectOutputCollector.write(MapTask.java:664)
at org.apache.hadoop.mapreduce.task.TaskInputOutputContextImpl.write(TaskInputOutputContextImpl.java:89)
at org.apache.hadoop.mapreduce.lib.map.WrappedMapper$Context.write(WrappedMapper.java:112)
at org.apache.sqoop.mapreduce.ParquetImportMapper.map(ParquetImportMapper.java:70)
at org.apache.sqoop.mapreduce.ParquetImportMapper.map(ParquetImportMapper.java:39)
at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
at org.apache.sqoop.mapreduce.AutoProgressMapper.run(AutoProgressMapper.java:64)
at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:793)
at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:422)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
