<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: Sqoop2 Import from HDFS to MySQL database error in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop2-Import-from-HDFS-to-MySQL-database-error/m-p/22845#M4172</link>
    <description>&lt;P&gt;So I figured it out myself: I first had to specify the table column names in the Sqoop job, like this:&lt;/P&gt;&lt;P&gt;filename,calling_number,calling_IMSI,called_number,calling_first_cell_id,calling_last_cell_id,starttime,endtime,duration,cause_for_termination,call_type&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Then, in the data, I had to wrap every string in single quotes ('), like this:&lt;/P&gt;&lt;P&gt;'WWWWWWW','EEEEEEEE','FFFFFFF','DDDDDDDDD','VVVVVVVVV','CCCCCCCCCCC','XXXXXXXXXX','YYYYYYYYYY','AAAAAAAAAAAA','RRRRRRRRRRR','SSSSSSS'&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Presumably this lets the exporter recognize the value as a string rather than a number.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
    <pubDate>Thu, 18 Dec 2014 16:09:10 GMT</pubDate>
    <dc:creator>surovecv</dc:creator>
    <dc:date>2014-12-18T16:09:10Z</dc:date>
    <item>
      <title>Sqoop2 Import from HDFS to MySQL database error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop2-Import-from-HDFS-to-MySQL-database-error/m-p/22836#M4171</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;I am getting the following error when trying to import .csv files from HDFS into a MySQL database using a Sqoop2 job. I am using Cloudera Manager 5.1.2.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;The error is:&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;Container exited with a non-zero exit code 143 at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145) at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615) at java.lang.Thread.run(Thread.java:745) Caused by: org.apache.sqoop.common.SqoopException: MAPRED_EXEC_0018:Error occurs during loader run at org.apache.sqoop.job.mr.SqoopOutputFormatLoadExecutor$OutputFormatDataReader.readContent(SqoopOutputFormatLoadExecutor.java:175) at org.apache.sqoop.job.mr.SqoopOutputFormatLoadExecutor$OutputFormatDataReader.readArrayRecord(SqoopOutputFormatLoadExecutor.java:145) at org.apache.sqoop.connector.jdbc.GenericJdbcExportLoader.load(GenericJdbcExportLoader.java:48) at org.apache.sqoop.connector.jdbc.GenericJdbcExportLoader.load(GenericJdbcExportLoader.java:25) at org.apache.sqoop.job.mr.SqoopOutputFormatLoadExecutor$ConsumerThread.run(SqoopOutputFormatLoadExecutor.java:228) ... 5 more Caused &lt;FONT color="#FF0000"&gt;by: java.lang.NumberFormatException: For input string: "WWWWWWW" at&lt;/FONT&gt; java.lang.NumberFormatException.forInputString(NumberFormatException.java:65) at java.lang.Long.parseLong(Long.java:441) at java.lang.Long.parseLong(Long.java:483) at org.apache.sqoop.job.io.Data.parseField(Data.java:449) at org.apache.sqoop.job.io.Data.parse(Data.java:374) at org.apache.sqoop.job.io.Data.getContent(Data.java:88) at org.apache.sqoop.job.mr.SqoopOutputFormatLoadExecutor$OutputFormatDataReader.readContent(SqoopOutputFormatLoadExecutor.java:170) ... 
9 more&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;My MySQL database is created like this:&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;CREATE TABLE moc (&lt;BR /&gt;&amp;nbsp;&amp;nbsp; id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,&lt;BR /&gt;&amp;nbsp;&amp;nbsp; filename VARCHAR(100) NOT NULL,&lt;BR /&gt;&amp;nbsp;&amp;nbsp; calling_number VARCHAR(100) NOT NULL,&lt;BR /&gt;&amp;nbsp;&amp;nbsp; calling_IMSI VARCHAR(100),&lt;BR /&gt;&amp;nbsp;&amp;nbsp; called_number VARCHAR(100) NOT NULL,&lt;BR /&gt;&amp;nbsp;&amp;nbsp; calling_first_cell_id VARCHAR(100),&lt;BR /&gt;&amp;nbsp;&amp;nbsp; calling_last_cell_id VARCHAR(100),&lt;BR /&gt;&amp;nbsp;&amp;nbsp; starttime VARCHAR(100),&lt;BR /&gt;&amp;nbsp;&amp;nbsp; endtime VARCHAR(100),&lt;BR /&gt;&amp;nbsp;&amp;nbsp; duration VARCHAR(100),&lt;BR /&gt;&amp;nbsp;&amp;nbsp; cause_for_termination VARCHAR(100),&lt;BR /&gt;&amp;nbsp;&amp;nbsp; call_type VARCHAR(100)&lt;BR /&gt;);&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="3"&gt;format of the .csv data is like this:&lt;/FONT&gt;&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;FONT color="#FF0000"&gt;WWWWWWW&lt;/FONT&gt;,RRGDSFSG,EEEEEEEE,FFFFFFF,DDDDDDDDD,VVVVVVVVV,CCCCCCCCCCC,XXXXXXXXXX,YYYYYYYYYY,AAAAAAAAAAAA,RRRRRRRRRRR&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;STRONG&gt;&lt;FONT size="3"&gt;I am using Hue&lt;/FONT&gt;&lt;/STRONG&gt;, where I set up my MySQL database through configuration settings in Hue like this (in "Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini")&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;[librdbms]&lt;BR /&gt;&amp;nbsp; [[databases]]&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; [[[mysql]]]&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; nice_name="MySQL Facebook DB"&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; name=facebook&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; engine=mysql&lt;BR 
/&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; host=localhost&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; port=3306&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; user=root&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; password=root&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;When I set up the new Sqoop2 job, I set these fields:&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;Job type: Export&lt;/P&gt;&lt;P&gt;Connection:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; JDBC Driver Class: com.mysql.jdbc.Driver&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp; JDBC Connection String: jdbc:mysql://localhost/facebook&lt;/P&gt;&lt;P&gt;Then I fill in the table name ("moc"), choose the directory where the .csv files are located, and run the job.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;EM&gt;When I change the fields in the .csv files from string values, like above, to integer values, the import succeeds. I don't know why this happens when all the fields in my table are VARCHARs.&lt;/EM&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please help! 
&lt;span class="lia-unicode-emoji" title=":slightly_smiling_face:"&gt;🙂&lt;/span&gt; Thank you in advance!&lt;/P&gt;&lt;P&gt;Best regards,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Václav Surovec&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;Syslog:&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp; 2014-12-18 14:07:47,240 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1418391341181_0102_000001&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:47,585 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;&amp;nbsp; Ignoring.&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:47,606 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;&amp;nbsp; Ignoring.&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:47,727 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:47,727 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@67ea0e66)&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:47,764 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max attempts: 2 for application: 102. 
Attempt num: 1 is last retry: false&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:47,772 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:47,933 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;&amp;nbsp; Ignoring.&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:47,948 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;&amp;nbsp; Ignoring.&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:48,670 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config null&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:48,748 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.sqoop.job.mr.SqoopNullOutputFormat$DestroyerOutputCommitter&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:48,779 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:48,781 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:48,783 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:48,784 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher&lt;BR /&gt;&amp;nbsp; 2014-12-18 
14:07:48,785 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:48,787 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:48,788 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:48,789 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:48,906 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,260 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,335 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,335 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics system started&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,347 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for job_1418391341181_0102 to jobTokenSecretManager&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,673 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing job_1418391341181_0102 because: not enabled;&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,698 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job job_1418391341181_0102 = 0. Number of splits = 9&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,698 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for job job_1418391341181_0102 = 0&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,698 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1418391341181_0102Job Transitioned from NEW to INITED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,700 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching normal, non-uberized, multi-container job job_1418391341181_0102.&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,745 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,756 INFO [Socket Reader #1 for port 35125] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 35125&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,784 INFO [main] org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,784 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,784 INFO [IPC Server listener on 35125] org.apache.hadoop.ipc.Server: IPC Server listener on 35125: starting&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,786 INFO [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated MRClientService at mob1l0r0k.appdb.ngIBMD.prod.bide.de.tmo/10.99.230.58:35125&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,884 INFO [main] org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via 
org.mortbay.log.Slf4jLog&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,890 INFO [main] org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.mapreduce is not defined&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,908 INFO [main] org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,917 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context mapreduce&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,917 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,923 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /mapreduce/*&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,923 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,940 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to port 46398&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,940 INFO [main] org.mortbay.log: jetty-6.1.26&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:49,975 INFO [main] org.mortbay.log: Extract jar:file:/pkg/moip/mo10755/work/mzpl/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop-yarn/hadoop-yarn-common-2.3.0-cdh5.1.2.jar!/webapps/mapreduce to /tmp/Jetty_0_0_0_0_46398_mapreduce____tsfox5/webapp&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:50,329 INFO [main] org.mortbay.log: Started SelectChannelConnector@0.0.0.0:46398&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:50,330 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app /mapreduce started at 46398&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:50,776 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:50,783 INFO [main] 
org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:50,783 INFO [Socket Reader #1 for port 63660] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 63660&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:50,790 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:50,790 INFO [IPC Server listener on 63660] org.apache.hadoop.ipc.Server: IPC Server listener on 63660: starting&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:50,813 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:50,813 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:50,814 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:50,899 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;&amp;nbsp; Ignoring.&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:50,906 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;&amp;nbsp; Ignoring.&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:50,910 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at mob1l0r0k.appdb.ngIBMD.prod.bide.de.tmo/10.99.230.58:8030&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:50,989 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: maxContainerCapability: 8192&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:50,989 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: root.sqoop2&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:50,994 INFO [main] 
org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper limit on the thread pool size is 500&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:50,997 INFO [main] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: yarn.client.max-nodemanagers-proxies : 500&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,005 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1418391341181_0102Job Transitioned from INITED to SETUP&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,058 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_SETUP&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,069 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1418391341181_0102Job Transitioned from SETUP to RUNNING&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,097 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1418391341181_0102_m_000000 Task Transitioned from NEW to SCHEDULED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,098 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1418391341181_0102_m_000001 Task Transitioned from NEW to SCHEDULED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,098 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1418391341181_0102_m_000002 Task Transitioned from NEW to SCHEDULED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,098 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1418391341181_0102_m_000003 Task Transitioned from NEW to SCHEDULED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,099 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1418391341181_0102_m_000004 Task Transitioned from NEW to SCHEDULED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,099 INFO [AsyncDispatcher event handler] 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1418391341181_0102_m_000005 Task Transitioned from NEW to SCHEDULED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,099 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1418391341181_0102_m_000006 Task Transitioned from NEW to SCHEDULED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,100 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1418391341181_0102_m_000007 Task Transitioned from NEW to SCHEDULED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,100 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1418391341181_0102_m_000008 Task Transitioned from NEW to SCHEDULED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,102 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1418391341181_0102_m_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,102 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1418391341181_0102_m_000001_0 TaskAttempt Transitioned from NEW to UNASSIGNED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,102 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1418391341181_0102_m_000002_0 TaskAttempt Transitioned from NEW to UNASSIGNED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,102 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1418391341181_0102_m_000003_0 TaskAttempt Transitioned from NEW to UNASSIGNED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,102 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1418391341181_0102_m_000004_0 TaskAttempt Transitioned from NEW to UNASSIGNED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,103 INFO [AsyncDispatcher event handler] 
org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1418391341181_0102_m_000005_0 TaskAttempt Transitioned from NEW to UNASSIGNED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,103 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1418391341181_0102_m_000006_0 TaskAttempt Transitioned from NEW to UNASSIGNED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,103 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1418391341181_0102_m_000007_0 TaskAttempt Transitioned from NEW to UNASSIGNED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,103 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1418391341181_0102_m_000008_0 TaskAttempt Transitioned from NEW to UNASSIGNED&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,105 INFO [Thread-51] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: mapResourceReqt:1024&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,151 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1418391341181_0102, File: hdfs://mob1l0r0k.appdb.ngIBMD.prod.bide.de.tmo:8020/mapred/sqoop2/.staging/job_1418391341181_0102/job_1418391341181_0102_1.jhist&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:51,993 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:9 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0&lt;BR /&gt;&amp;nbsp; 2014-12-18 14:07:52,040 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1418391341181_0102: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=&amp;lt;memory:0, vCores:0&amp;gt; 
knownNMs=1&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;FONT size="3"&gt;&lt;STRONG&gt;stderr:&lt;/STRONG&gt;&lt;/FONT&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;Dec 18, 2014 2:07:52 PM com.google.inject.servlet.InternalServletModule$BackwardsCompatibleServletContextProvider get&lt;BR /&gt;&amp;nbsp; WARNING: You are attempting to use a deprecated API (specifically, attempting to @Inject ServletContext inside an eagerly created singleton. While we allow this for backwards compatibility, be warned that this MAY have unexpected behavior if you have more than one injector (with ServletModule) running in the same JVM. Please consult the Guice documentation at &lt;A target="_blank" href="http://code.google.com/p/google-guice/wiki/Servlets"&gt;http://code.google.com/p/google-guice/wiki/Servlets&lt;/A&gt; for more information.&lt;BR /&gt;&amp;nbsp; Dec 18, 2014 2:07:52 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register&lt;BR /&gt;&amp;nbsp; INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver as a provider class&lt;BR /&gt;&amp;nbsp; Dec 18, 2014 2:07:52 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register&lt;BR /&gt;&amp;nbsp; INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class&lt;BR /&gt;&amp;nbsp; Dec 18, 2014 2:07:52 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register&lt;BR /&gt;&amp;nbsp; INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices as a root resource class&lt;BR /&gt;&amp;nbsp; Dec 18, 2014 2:07:52 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate&lt;BR /&gt;&amp;nbsp; INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'&lt;BR /&gt;&amp;nbsp; Dec 18, 2014 2:07:52 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider&lt;BR /&gt;&amp;nbsp; INFO: Binding 
org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"&lt;BR /&gt;&amp;nbsp; Dec 18, 2014 2:07:52 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider&lt;BR /&gt;&amp;nbsp; INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"&lt;BR /&gt;&amp;nbsp; Dec 18, 2014 2:07:53 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider&lt;BR /&gt;&amp;nbsp; INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices to GuiceManagedComponentProvider with the scope "PerRequest"&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 2014-12-18 14:08:02,465 [main] INFO&amp;nbsp; org.apache.sqoop.job.mr.SqoopOutputFormatLoadExecutor&amp;nbsp; - SqoopOutputFormatLoadExecutor::SqoopRecordWriter is closed&lt;BR /&gt;2014-12-18 14:08:07,768 [main] INFO&amp;nbsp; org.apache.sqoop.job.etl.HdfsExportExtractor&amp;nbsp; - Start position: 107&lt;BR /&gt;2014-12-18 14:08:07,768 [main] INFO&amp;nbsp; org.apache.sqoop.job.etl.HdfsExportExtractor&amp;nbsp; - Extracting ended on position: 107&lt;BR /&gt;2014-12-18 14:08:07,768 [main] INFO&amp;nbsp; org.apache.sqoop.job.mr.SqoopMapper&amp;nbsp; - Extractor has finished&lt;BR /&gt;2014-12-18 14:08:07,770 [main] INFO&amp;nbsp; org.apache.sqoop.job.mr.SqoopMapper&amp;nbsp; - Stopping progress service&lt;BR /&gt;2014-12-18 14:08:07,775 [main] INFO&amp;nbsp; org.apache.sqoop.job.mr.SqoopOutputFormatLoadExecutor&amp;nbsp; - SqoopOutputFormatLoadExecutor::SqoopRecordWriter is about to be closed&lt;BR /&gt;2014-12-18 14:08:07,989 [OutputFormatLoader-consumer] INFO&amp;nbsp; org.apache.sqoop.job.mr.SqoopOutputFormatLoadExecutor&amp;nbsp; - Loader has finished&lt;BR /&gt;2014-12-18 14:08:07,989 [main] INFO&amp;nbsp; 
org.apache.sqoop.job.mr.SqoopOutputFormatLoadExecutor&amp;nbsp; - SqoopOutputFormatLoadExecutor::SqoopRecordWriter is closed&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Tue, 21 Apr 2026 13:59:41 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop2-Import-from-HDFS-to-MySQL-database-error/m-p/22836#M4171</guid>
      <dc:creator>surovecv</dc:creator>
      <dc:date>2026-04-21T13:59:41Z</dc:date>
    </item>
    <item>
      <title>Re: Sqoop2 Import from HDFS to MySQL database error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop2-Import-from-HDFS-to-MySQL-database-error/m-p/22845#M4172</link>
      <description>&lt;P&gt;So I figured it out myself: I first had to specify the table column names in the Sqoop job, like this:&lt;/P&gt;&lt;P&gt;filename,calling_number,calling_IMSI,called_number,calling_first_cell_id,calling_last_cell_id,starttime,endtime,duration,cause_for_termination,call_type&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Then, in the data, I had to wrap every string in single quotes ('), like this:&lt;/P&gt;&lt;P&gt;'WWWWWWW','EEEEEEEE','FFFFFFF','DDDDDDDDD','VVVVVVVVV','CCCCCCCCCCC','XXXXXXXXXX','YYYYYYYYYY','AAAAAAAAAAAA','RRRRRRRRRRR','SSSSSSS'&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Presumably this lets the exporter recognize the value as a string rather than a number.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 18 Dec 2014 16:09:10 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/Sqoop2-Import-from-HDFS-to-MySQL-database-error/m-p/22845#M4172</guid>
      <dc:creator>surovecv</dc:creator>
      <dc:date>2014-12-18T16:09:10Z</dc:date>
    </item>
  </channel>
</rss>