@Rush
Hive's INSERT OVERWRITE first copies data to a temporary HDFS directory and then moves it into the Hive table.
Since Sqoop breaks the export process into multiple splits (one per mapper), a failed export job can leave partial data committed to the destination table.
Solution:
Specify a staging table via the --staging-table option. It acts as an auxiliary table used to stage the exported data; the staged data is then moved to the destination table in a single transaction, so the destination never sees a partial result.
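A sketch of such an export, assuming a MySQL target and hypothetical tables `employees` and `employees_stage` (the staging table must have the same structure as the destination and should be empty before the run):

```shell
# Export HDFS data into the staging table first; only if the export
# succeeds does Sqoop move the rows to the final table in a single
# transaction. Connect string, table names, and paths are illustrative.
sqoop export \
  --connect jdbc:mysql://db.example.com/corp \
  --username dbuser \
  --table employees \
  --staging-table employees_stage \
  --clear-staging-table \
  --export-dir /user/hive/warehouse/employees \
  -m 4
```

`--clear-staging-table` deletes any rows left behind by a previous failed run before the export begins.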
For more details on Sqoop staging tables, see: https://sqoop.apache.org/docs/1.4.0-incubating/SqoopUserGuide.html