Sqoop job fails with YARN container error 143

I have installed HDP 2.4 on a three-node cluster, with Sqoop 1.4.6.2.4.3.0-227.

I am trying to execute a Sqoop command to import data from MySQL (Ver 14.14) to HDFS.

When I run an import command, the MR job fails with two errors.

 sqoop import --connect jdbc:mysql://hdp25-node2.wulme4ci31tu3lwdofvykqwgkh.bx.internal.cloudapp.net/employees --username anup --password-file /user/anup/.password --table employees --target-dir /data/landing/employees --delete-target-dir

1) It throws the exception com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
        at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(DBInputFormat.java:167)
        at org.apache.hadoop.util.ReflectionUtils.setConf(ReflectionUtils.java:76)
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:136)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:749)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:341)
        at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:168)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1724)
        at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:162)
Caused by: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

2) It gives the error below:

Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143

The job ends in error with the message:

Job failed as tasks failed. failedMaps:1 failedReduces:0


17/08/21 05:59:15 INFO mapreduce.Job: Counters: 31
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=469884
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=347
                HDFS: Number of bytes written=9184962
                HDFS: Number of read operations=12
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=6
        Job Counters
                Failed map tasks=6
                Launched map tasks=9
                Other local map tasks=9
                Total time spent by all maps in occupied slots (ms)=38059
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=38059
                Total vcore-seconds taken by all map tasks=38059
                Total megabyte-seconds taken by all map tasks=58458624
        Map-Reduce Framework
                Map input records=200024
                Map output records=200024
                Input split bytes=347
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=263
                CPU time spent (ms)=12740
                Physical memory (bytes) snapshot=807444480
                Virtual memory (bytes) snapshot=9736806400
                Total committed heap usage (bytes)=524812288
        File Input Format Counters
                Bytes Read=0
        File Output Format Counters
                Bytes Written=9184962
17/08/21 05:59:15 INFO mapreduce.ImportJobBase: Transferred 8.7595 MB in 32.0575 seconds (279.7996 KB/sec)
17/08/21 05:59:15 INFO mapreduce.ImportJobBase: Retrieved 200024 records.
17/08/21 05:59:15 ERROR tool.ImportTool: Error during import: Import job failed!

I am trying this for two MySQL tables. For the first table the data gets transferred despite the above errors, but for the other no data is transferred at all.

I am also using Hive on the same HDP cluster, and the MR jobs for Hive run fine without any errors.

Please suggest a solution.

1 ACCEPTED SOLUTION

Master Mentor

@Anup Shirolkar

Please try using the "%" wildcard for the host when granting privileges.

Example:

grant all on *.* to 'anup'@'%.cloudapp.net';
OR
grant all on *.* to 'anup'@'%';
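
After granting, you can confirm the entry took effect from an admin session. A quick sketch (SHOW GRANTS and the mysql.user table are standard MySQL; the user name here matches the one from your Sqoop command):

-- List the host patterns currently registered for this user:
SELECT user, host FROM mysql.user WHERE user = 'anup';

-- Verify the privileges attached to the new entry:
SHOW GRANTS FOR 'anup'@'%';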


Please see the doc for more details: https://dev.mysql.com/doc/refman/5.7/en/grant.html

The _ and % wildcards are permitted when specifying database names in GRANT statements that grant privileges at the database level.
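
For example (hypothetical database names, just to illustrate a database-level grant with a wildcard):

-- Grants on every database whose name starts with 'employ'
-- ('employees', 'employees_test', ...):
grant all on `employ%`.* to 'anup'@'%';

-- To match a literal underscore in a database name, escape it:
grant select on `test\_db`.* to 'anup'@'%';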


10 REPLIES

 grant all on *.* to 'anup'@'%' identified by 'passwd';

This works fine. All exceptions from the Sqoop job have vanished 🙂
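
One note for anyone landing here on a newer MySQL: the GRANT ... IDENTIFIED BY form is deprecated in MySQL 5.7 and removed in 8.0. On those versions the equivalent (assuming the same user name and password as above) would be:

-- Create the account first, then grant privileges separately:
create user 'anup'@'%' identified by 'passwd';
grant all on *.* to 'anup'@'%';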

Thank you @Jay SenSharma for working through this with me. Thanks also to @Venkata Sudheer Kumar M