Why do I get an error during Sqoop import every time I don't set the number of mappers to 1?

Contributor

I'm importing a 5.6 GB table, and this is the error:

Error: java.io.IOException: mysqldump terminated with status 2
    at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:485)
    at org.apache.sqoop.mapreduce.MySQLDumpMapper.map(MySQLDumpMapper.java:49)
    at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:146)
    at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:787)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
    at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:170)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1866)
    at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:164)

1 ACCEPTED SOLUTION


So the issue is with the "PK" column used to distribute the data when multiple mappers are involved. It has always been recommended to use an "integral" (integer) column as the "split-by" column, but your import is trying to split on the column "CustID", which is a String.

Have a look at how your splits are calculated during the import.

8020 [main] WARN  org.apache.sqoop.mapreduce.db.TextSplitter  - You are strongly encouraged to choose an integral split column.
8025 [main] DEBUG org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat  - Creating input split with lower bound '`CustID` >= '1'' and upper bound '`CustID` < '3?????''
8025 [main] DEBUG org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat  - Creating input split with lower bound '`CustID` >= '1'' and upper bound '`CustID` < '3?????''
8025 [main] DEBUG org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat  - Creating input split with lower bound '`CustID` >= '3?????'' and upper bound '`CustID` < '5?????''
8025 [main] DEBUG org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat  - Creating input split with lower bound '`CustID` >= '3?????'' and upper bound '`CustID` < '5?????''
8025 [main] DEBUG org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat  - Creating input split with lower bound '`CustID` >= '5?????'' and upper bound '`CustID` < '7*?????''
8025 [main] DEBUG org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat  - Creating input split with lower bound '`CustID` >= '5?????'' and upper bound '`CustID` < '7*?????''
8025 [main] DEBUG org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat  - Creating input split with lower bound '`CustID` >= '7*?????'' and upper bound '`CustID` <= '999999''
8025 [main] DEBUG org.apache.sqoop.mapreduce.db.DataDrivenDBInputFormat  - Creating input split with lower bound '`CustID` >= '7*?????'' and upper bound '`CustID` <= '999999''
8068 [main] INFO  org.apache.hadoop.mapreduce.JobSubmitter  - number of splits:4

The "?" indicates some foreign characters probably not parsed properly and hence resulted in the failure of your tasks.

However, when you have only a single mapper, no such parsing of the CustID column is needed; the data is simply "copied and pasted" to HDFS in one chunk and your job ends OK.
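For reference, a minimal sketch of what the import could look like with an integral split column; "customer_id" below is a hypothetical column name, so substitute whatever integer primary key your table actually has. If no such column exists, keeping -m 1 and dropping --split-by is the simplest safe option, at the cost of a single-threaded import.

# sketch only: customer_id is a hypothetical integer PK, replace it with a real integral column;
# with an integral --split-by column the allow_text_splitter property is no longer needed
sqoop import \
  --connect jdbc:mysql://x.x.x.x:xxxx/kpcustomers \
  --username root \
  --password '*******' \
  --table customers \
  --fields-terminated-by '|' \
  --split-by customer_id \
  -m 5 \
  --hive-import \
  --hive-overwrite \
  --hive-table testing.customers \
  --direct \
  --verbose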


10 REPLIES

@Christian Lunesa

Can you please share your sqoop command? Are you using --direct by any chance?

Contributor

import -Dorg.apache.sqoop.splitter.allow_text_splitter=true --connect jdbc:mysql://x.x.x.x:xxxx/kpcustomers --username root --password ******* --table customers --fields-terminated-by '|' -m 5 --hive-import --hive-overwrite --hive-table testing.customers --direct --verbose


Are you using a "split-by" column when you are not setting the number of mappers to 1?


Also, can you please share the "actual" MR job logs that you can see when you are running your job with multiple mappers?

Contributor

No, I don't use split-by, whether I set the mappers to 1 or not.

Contributor

Here are the stderr and stdout:

stdout.txt & stderr.txt

Contributor

Yes, I am using --direct.

Contributor

The import succeeds if I set the mappers to 1. I also noticed that when I'm not using a single mapper, the YARN memory gets fully consumed.
