Created on 03-23-2018 02:52 AM - edited 09-16-2022 06:01 AM
I'm importing a 5.6 GB table and getting this error:
Error: java.io.IOException: mysqldump terminated with status 2
    at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:485)
    at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:49)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)
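(For context: with --direct, the MySQLDumpMapper shown in the trace shells out to the mysqldump binary in each map task, so "terminated with status 2" is mysqldump's own exit code. As a rough check, one can run mysqldump by hand with the same connection details to surface the underlying error message; the host, port, and credentials below are placeholders:

mysqldump --host=x.x.x.x --port=xxxx --user=root -p kpcustomers customers > /dev/null
)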
Created 03-23-2018 03:36 AM
Can you please share your sqoop command? Are you using --direct by any chance?
Created 03-23-2018 04:08 AM
import -Dorg.apache.sqoop.splitter.allow_text_splitter=true --connect jdbc:mysql://x.x.x.x:xxxx/kpcustomers --username root --password ******* --table customers --fields-terminated-by | -m 5 --hive-import --hive-overwrite --hive-table testing.customers --direct --verbose
Created 03-23-2018 04:22 AM
Are you using "split-by" column while not setting number of mappers to 1?
Created 03-23-2018 04:34 AM
Also, can you please share the "actual" MR job logs that you can see when you are running your job with multiple mappers?
Created 03-23-2018 04:46 AM
No, I don't use split-by when I set mappers to 1, or in any other case.
Created 03-23-2018 04:54 AM
Here are the stderr and stdout.
Created 03-23-2018 04:26 AM
Yes, I am using --direct.
Created 03-23-2018 04:57 AM
The import succeeds if I set the mappers to 1. I also notice that when I'm not using a single mapper, the YARN memory gets fully consumed.
Created 03-23-2018 06:18 AM
So the issue is with the primary key ("PK") column used to distribute the data across multiple mappers. It has always been recommended to use an integral column as the "split-by" column, but your import is trying to split on the column "CustID", which is a string.
Have a look at how your splits are calculated during the import.
8020 [main] WARN org.apache.sqoop.mapreduce.db.TextSplitter - You are strongly encouraged to choose an integral split column.
8025 [main] DEBUG org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat - Creating input split with lower bound '`CustID` >= '1'' and upper bound '`CustID` < '3?????''
8025 [main] DEBUG org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat - Creating input split with lower bound '`CustID` >= '3?????'' and upper bound '`CustID` < '5?????''
8025 [main] DEBUG org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat - Creating input split with lower bound '`CustID` >= '5?????'' and upper bound '`CustID` < '7*?????''
8025 [main] DEBUG org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat - Creating input split with lower bound '`CustID` >= '7*?????'' and upper bound '`CustID` <= '999999''
8068 [main] INFO org.apache.hadoop.mapreduce.JobSubmitter - number of splits:4
The "?" indicates some foreign characters probably not parsed properly and hence resulted in the failure of your tasks.
However, when you have only a single mapper, no such split calculation is needed on the CustID column; the data is simply streamed (effectively "copied and pasted") to HDFS, and your job finishes OK.
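For illustration, here is roughly how the command could be adjusted; this is only a sketch. The column cust_num used with --split-by is hypothetical and stands in for whatever genuinely integral column exists in your table; the connection details and masked password are carried over from the command you posted, and the field delimiter is quoted here so the shell does not interpret it as a pipe:

sqoop import \
  --connect jdbc:mysql://x.x.x.x:xxxx/kpcustomers \
  --username root --password ******* \
  --table customers \
  --split-by cust_num \
  -m 4 \
  --fields-terminated-by '|' \
  --hive-import --hive-overwrite --hive-table testing.customers \
  --direct --verbose

With an integral split column the -Dorg.apache.sqoop.splitter.allow_text_splitter=true property is no longer needed. If no integral column is available, keeping -m 1 (as you already confirmed) avoids the split calculation on CustID entirely, at the cost of a single-threaded import.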