
Sqoop job fails with YARN container error 143


I have installed HDP 2.4 on a three-node cluster, with Sqoop included.

I am trying to execute a Sqoop command to import data from MySQL (Ver 14.14) into HDFS.

When I run the import command, the MR job ends with two errors.

 sqoop import --connect jdbc:mysql:// --username anup --password-file /user/anup/.password --table employees --target-dir /data/landing/employees --delete-target-dir

1) It throws this exception: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException:

The last packet sent successfully to the server was 0 milliseconds ago. The driver has not received any packets from the server.
        at org.apache.sqoop.mapreduce.db.DBInputFormat.setConf(
        at org.apache.hadoop.util.ReflectionUtils.setConf(
        at org.apache.hadoop.util.ReflectionUtils.newInstance(
        at org.apache.hadoop.mapred.MapTask.runNewMapper(
        at org.apache.hadoop.mapred.YarnChild$
        at Method)
        at org.apache.hadoop.mapred.YarnChild.main(
Caused by: java.lang.RuntimeException: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: Communications link failure

2) It gives the following error:

Container killed by the ApplicationMaster.
Container killed on request. Exit code is 143
Container exited with a non-zero exit code 143
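(Exit code 143 is not a separate problem: by the usual Unix convention, an exit status above 128 means the process was killed by a signal, here 143 − 128 = 15, i.e. SIGTERM sent by the ApplicationMaster after the task failed. The JDBC exception above is the real root cause. A quick sketch of the decoding:)

```shell
# Decode a YARN container exit code above 128 into the signal that killed it.
exit_code=143
signal=$((exit_code - 128))          # 143 - 128 = 15
echo "killed by signal $signal ($(kill -l $signal))"   # signal 15 is SIGTERM
```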

The job ends in error with the message:

Job failed as tasks failed. failedMaps:1 failedReduces:0

17/08/21 05:59:15 INFO mapreduce.Job: Counters: 31
        File System Counters
                FILE: Number of bytes read=0
                FILE: Number of bytes written=469884
                FILE: Number of read operations=0
                FILE: Number of large read operations=0
                FILE: Number of write operations=0
                HDFS: Number of bytes read=347
                HDFS: Number of bytes written=9184962
                HDFS: Number of read operations=12
                HDFS: Number of large read operations=0
                HDFS: Number of write operations=6
        Job Counters
                Failed map tasks=6
                Launched map tasks=9
                Other local map tasks=9
                Total time spent by all maps in occupied slots (ms)=38059
                Total time spent by all reduces in occupied slots (ms)=0
                Total time spent by all map tasks (ms)=38059
                Total vcore-seconds taken by all map tasks=38059
                Total megabyte-seconds taken by all map tasks=58458624
        Map-Reduce Framework
                Map input records=200024
                Map output records=200024
                Input split bytes=347
                Spilled Records=0
                Failed Shuffles=0
                Merged Map outputs=0
                GC time elapsed (ms)=263
                CPU time spent (ms)=12740
                Physical memory (bytes) snapshot=807444480
                Virtual memory (bytes) snapshot=9736806400
                Total committed heap usage (bytes)=524812288
        File Input Format Counters
                Bytes Read=0
        File Output Format Counters
                Bytes Written=9184962
17/08/21 05:59:15 INFO mapreduce.ImportJobBase: Transferred 8.7595 MB in 32.0575 seconds (279.7996 KB/sec)
17/08/21 05:59:15 INFO mapreduce.ImportJobBase: Retrieved 200024 records.
17/08/21 05:59:15 ERROR tool.ImportTool: Error during import: Import job failed!

I am trying this for two MySQL tables. For the first table the data gets transferred despite the above errors, but for the other no data is transferred.

I am also using Hive on the same HDP cluster, and the MR jobs for Hive run fine without any errors.

Please suggest a solution.


Master Mentor

This problem has been solved!



The map tasks run on all three cluster nodes, so the MySQL user must be allowed to connect from every node, not only from the host where Sqoop is launched; tasks on the other nodes hit the Communications link failure. Granting access from any host resolves it:

 grant all on *.* to 'anup'@'%' identified by 'passwd';
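Note that 'anup'@'%' opens the account to connections from any host. A tighter variant of the same fix, using hypothetical hostnames (replace them with the actual FQDNs of the three cluster nodes), grants access only from the nodes that run the map tasks:

```sql
-- Hypothetical hostnames: substitute the real worker-node FQDNs.
-- (MySQL 5.x syntax, matching the GRANT ... IDENTIFIED BY form used above.)
GRANT ALL PRIVILEGES ON *.* TO 'anup'@'node1.example.com' IDENTIFIED BY 'passwd';
GRANT ALL PRIVILEGES ON *.* TO 'anup'@'node2.example.com' IDENTIFIED BY 'passwd';
GRANT ALL PRIVILEGES ON *.* TO 'anup'@'node3.example.com' IDENTIFIED BY 'passwd';
FLUSH PRIVILEGES;
```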

This works fine. All the exceptions from the Sqoop job are gone 🙂

Thank you @Jay SenSharma for working through this with me. Thanks also to @Venkata Sudheer Kumar M