Sqoop2 Import from HDFS to MySQL database error

Hi,

I am getting the error below when I try to export .csv files from HDFS into a MySQL database using a Sqoop2 job. I am using Cloudera Manager 5.1.2.

 

The error is:

Container exited with a non-zero exit code 143
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.sqoop.common.SqoopException: MAPRED_EXEC_0018:Error occurs during loader run
    at org.apache.sqoop.job.mr.SqoopOutputFormatLoadExecutor$OutputFormatDataReader.readContent(SqoopOutputFormatLoadExecutor.java:175)
    at org.apache.sqoop.job.mr.SqoopOutputFormatLoadExecutor$OutputFormatDataReader.readArrayRecord(SqoopOutputFormatLoadExecutor.java:145)
    at org.apache.sqoop.connector.jdbc.GenericJdbcExportLoader.load(GenericJdbcExportLoader.java:48)
    at org.apache.sqoop.connector.jdbc.GenericJdbcExportLoader.load(GenericJdbcExportLoader.java:25)
    at org.apache.sqoop.job.mr.SqoopOutputFormatLoadExecutor$ConsumerThread.run(SqoopOutputFormatLoadExecutor.java:228)
    ... 5 more
Caused by: java.lang.NumberFormatException: For input string: "WWWWWWW"
    at java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
    at java.lang.Long.parseLong(Long.java:441)
    at java.lang.Long.parseLong(Long.java:483)
    at org.apache.sqoop.job.io.Data.parseField(Data.java:449)
    at org.apache.sqoop.job.io.Data.parse(Data.java:374)
    at org.apache.sqoop.job.io.Data.getContent(Data.java:88)
    at org.apache.sqoop.job.mr.SqoopOutputFormatLoadExecutor$OutputFormatDataReader.readContent(SqoopOutputFormatLoadExecutor.java:170)
    ... 9 more
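From the bottom of the trace, org.apache.sqoop.job.io.Data.parseField is calling Long.parseLong on my field "WWWWWWW". If I understand it right, something like this sketch is what is happening (this is not Sqoop's actual code, just my guess at the behavior: unquoted fields get tried as numbers, quoted fields are kept as strings):

public class ParseSketch {
    // Sketch of the behavior implied by the stack trace
    // (org.apache.sqoop.job.io.Data.parseField -> Long.parseLong):
    // quoted fields are kept as strings, unquoted fields are tried as numbers.
    static Object parseField(String field) {
        if (field.startsWith("'") && field.endsWith("'")) {
            return field.substring(1, field.length() - 1); // string value
        }
        return Long.parseLong(field); // numeric value -- throws on "WWWWWWW"
    }

    public static void main(String[] args) {
        System.out.println(parseField("'WWWWWWW'")); // WWWWWWW
        System.out.println(parseField("12345"));     // 12345
        try {
            parseField("WWWWWWW");
        } catch (NumberFormatException e) {
            // Same failure as in the job log:
            System.out.println(e); // java.lang.NumberFormatException: For input string: "WWWWWWW"
        }
    }
}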

 

My MySQL table is created like this:

CREATE TABLE moc (
   id INT NOT NULL AUTO_INCREMENT PRIMARY KEY,
   filename VARCHAR(100) NOT NULL,
   calling_number VARCHAR(100) NOT NULL,
   calling_IMSI VARCHAR(100),
   called_number VARCHAR(100) NOT NULL,
   calling_first_cell_id VARCHAR(100),
   calling_last_cell_id VARCHAR(100),
   starttime VARCHAR(100),
   endtime VARCHAR(100),
   duration VARCHAR(100),
   cause_for_termination VARCHAR(100),
   call_type VARCHAR(100)
);

 

The format of the .csv data is like this:

WWWWWWW,RRGDSFSG,EEEEEEEE,FFFFFFF,DDDDDDDDD,VVVVVVVVV,CCCCCCCCCCC,XXXXXXXXXX,YYYYYYYYYY,AAAAAAAAAAAA,RRRRRRRRRRR

 

I am using Hue, where I registered my MySQL database through the configuration settings (in "Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini"):

 

[librdbms]
  [[databases]]
    [[[mysql]]]
      nice_name="MySQL Facebook DB"
      name=facebook
      engine=mysql
      host=localhost
      port=3306
      user=root
      password=root

 

When I set up the new Sqoop2 job, I fill in these fields:

Job type: Export

Connection:

   JDBC Driver Class: com.mysql.jdbc.Driver

   JDBC Connection String: jdbc:mysql://localhost/facebook

Then I fill in the table name ("moc"), choose the directory where the .csv files are, and run the job.
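For reference, a standalone JDBC check with the same driver class and connection string can be used to rule out connectivity as the problem (just a sanity-check sketch, not part of the Sqoop job; it assumes the mysql-connector-java jar is on the classpath and uses the credentials from the Hue snippet above):

import java.sql.Connection;
import java.sql.DriverManager;

public class JdbcCheck {
    public static void main(String[] args) throws Exception {
        // Same driver class and connection string as the Sqoop2 connection
        Class.forName("com.mysql.jdbc.Driver");
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/facebook", "root", "root")) {
            System.out.println("Connected: "
                    + conn.getMetaData().getDatabaseProductVersion());
        }
    }
}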

 

When I change the fields in the .csv files from string values, like the ones above, to integer values, the export succeeds. I don't know why this happens when all the fields in my table are VARCHARs.

 

Please help! 🙂 Thank you in advance!

Best regards,

 

Václav Surovec

Syslog:

  2014-12-18 14:07:47,240 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1418391341181_0102_000001
  2014-12-18 14:07:47,585 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
  2014-12-18 14:07:47,606 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
  2014-12-18 14:07:47,727 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
  2014-12-18 14:07:47,727 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@67ea0e66)
  2014-12-18 14:07:47,764 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: The specific max attempts: 2 for application: 102. Attempt num: 1 is last retry: false
  2014-12-18 14:07:47,772 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Using mapred newApiCommitter.
  2014-12-18 14:07:47,933 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
  2014-12-18 14:07:47,948 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
  2014-12-18 14:07:48,670 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config null
  2014-12-18 14:07:48,748 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.sqoop.job.mr.SqoopNullOutputFormat$DestroyerOutputCommitter
  2014-12-18 14:07:48,779 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
  2014-12-18 14:07:48,781 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
  2014-12-18 14:07:48,783 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
  2014-12-18 14:07:48,784 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
  2014-12-18 14:07:48,785 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
  2014-12-18 14:07:48,787 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
  2014-12-18 14:07:48,788 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
  2014-12-18 14:07:48,789 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
  2014-12-18 14:07:48,906 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
  2014-12-18 14:07:49,260 INFO [main] org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
  2014-12-18 14:07:49,335 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
  2014-12-18 14:07:49,335 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics system started
  2014-12-18 14:07:49,347 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for job_1418391341181_0102 to jobTokenSecretManager
  2014-12-18 14:07:49,673 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing job_1418391341181_0102 because: not enabled;
  2014-12-18 14:07:49,698 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job job_1418391341181_0102 = 0. Number of splits = 9
  2014-12-18 14:07:49,698 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for job job_1418391341181_0102 = 0
  2014-12-18 14:07:49,698 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1418391341181_0102Job Transitioned from NEW to INITED
  2014-12-18 14:07:49,700 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching normal, non-uberized, multi-container job job_1418391341181_0102.
  2014-12-18 14:07:49,745 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
  2014-12-18 14:07:49,756 INFO [Socket Reader #1 for port 35125] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 35125
  2014-12-18 14:07:49,784 INFO [main] org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
  2014-12-18 14:07:49,784 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
  2014-12-18 14:07:49,784 INFO [IPC Server listener on 35125] org.apache.hadoop.ipc.Server: IPC Server listener on 35125: starting
  2014-12-18 14:07:49,786 INFO [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated MRClientService at mob1l0r0k.appdb.ngIBMD.prod.bide.de.tmo/10.99.230.58:35125
  2014-12-18 14:07:49,884 INFO [main] org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
  2014-12-18 14:07:49,890 INFO [main] org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.mapreduce is not defined
  2014-12-18 14:07:49,908 INFO [main] org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
  2014-12-18 14:07:49,917 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context mapreduce
  2014-12-18 14:07:49,917 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static
  2014-12-18 14:07:49,923 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /mapreduce/*
  2014-12-18 14:07:49,923 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
  2014-12-18 14:07:49,940 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to port 46398
  2014-12-18 14:07:49,940 INFO [main] org.mortbay.log: jetty-6.1.26
  2014-12-18 14:07:49,975 INFO [main] org.mortbay.log: Extract jar:file:/pkg/moip/mo10755/work/mzpl/cloudera/parcels/CDH-5.1.2-1.cdh5.1.2.p0.3/lib/hadoop-yarn/hadoop-yarn-common-2.3.0-cdh5.1.2.jar!/webapps/mapreduce to /tmp/Jetty_0_0_0_0_46398_mapreduce____tsfox5/webapp
  2014-12-18 14:07:50,329 INFO [main] org.mortbay.log: Started SelectChannelConnector@0.0.0.0:46398
  2014-12-18 14:07:50,330 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app /mapreduce started at 46398
  2014-12-18 14:07:50,776 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
  2014-12-18 14:07:50,783 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue class java.util.concurrent.LinkedBlockingQueue
  2014-12-18 14:07:50,783 INFO [Socket Reader #1 for port 63660] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 63660
  2014-12-18 14:07:50,790 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
  2014-12-18 14:07:50,790 INFO [IPC Server listener on 63660] org.apache.hadoop.ipc.Server: IPC Server listener on 63660: starting
  2014-12-18 14:07:50,813 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true
  2014-12-18 14:07:50,813 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
  2014-12-18 14:07:50,814 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
  2014-12-18 14:07:50,899 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval;  Ignoring.
  2014-12-18 14:07:50,906 WARN [main] org.apache.hadoop.conf.Configuration: job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts;  Ignoring.
  2014-12-18 14:07:50,910 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at mob1l0r0k.appdb.ngIBMD.prod.bide.de.tmo/10.99.230.58:8030
  2014-12-18 14:07:50,989 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: maxContainerCapability: 8192
  2014-12-18 14:07:50,989 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: root.sqoop2
  2014-12-18 14:07:50,994 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper limit on the thread pool size is 500
  2014-12-18 14:07:50,997 INFO [main] org.apache.hadoop.yarn.client.api.impl.ContainerManagementProtocolProxy: yarn.client.max-nodemanagers-proxies : 500
  2014-12-18 14:07:51,005 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1418391341181_0102Job Transitioned from INITED to SETUP
  2014-12-18 14:07:51,058 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_SETUP
  2014-12-18 14:07:51,069 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1418391341181_0102Job Transitioned from SETUP to RUNNING
  2014-12-18 14:07:51,097 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1418391341181_0102_m_000000 Task Transitioned from NEW to SCHEDULED
  2014-12-18 14:07:51,098 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1418391341181_0102_m_000001 Task Transitioned from NEW to SCHEDULED
  2014-12-18 14:07:51,098 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1418391341181_0102_m_000002 Task Transitioned from NEW to SCHEDULED
  2014-12-18 14:07:51,098 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1418391341181_0102_m_000003 Task Transitioned from NEW to SCHEDULED
  2014-12-18 14:07:51,099 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1418391341181_0102_m_000004 Task Transitioned from NEW to SCHEDULED
  2014-12-18 14:07:51,099 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1418391341181_0102_m_000005 Task Transitioned from NEW to SCHEDULED
  2014-12-18 14:07:51,099 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1418391341181_0102_m_000006 Task Transitioned from NEW to SCHEDULED
  2014-12-18 14:07:51,100 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1418391341181_0102_m_000007 Task Transitioned from NEW to SCHEDULED
  2014-12-18 14:07:51,100 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1418391341181_0102_m_000008 Task Transitioned from NEW to SCHEDULED
  2014-12-18 14:07:51,102 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1418391341181_0102_m_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED
  2014-12-18 14:07:51,102 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1418391341181_0102_m_000001_0 TaskAttempt Transitioned from NEW to UNASSIGNED
  2014-12-18 14:07:51,102 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1418391341181_0102_m_000002_0 TaskAttempt Transitioned from NEW to UNASSIGNED
  2014-12-18 14:07:51,102 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1418391341181_0102_m_000003_0 TaskAttempt Transitioned from NEW to UNASSIGNED
  2014-12-18 14:07:51,102 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1418391341181_0102_m_000004_0 TaskAttempt Transitioned from NEW to UNASSIGNED
  2014-12-18 14:07:51,103 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1418391341181_0102_m_000005_0 TaskAttempt Transitioned from NEW to UNASSIGNED
  2014-12-18 14:07:51,103 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1418391341181_0102_m_000006_0 TaskAttempt Transitioned from NEW to UNASSIGNED
  2014-12-18 14:07:51,103 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1418391341181_0102_m_000007_0 TaskAttempt Transitioned from NEW to UNASSIGNED
  2014-12-18 14:07:51,103 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1418391341181_0102_m_000008_0 TaskAttempt Transitioned from NEW to UNASSIGNED
  2014-12-18 14:07:51,105 INFO [Thread-51] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: mapResourceReqt:1024
  2014-12-18 14:07:51,151 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1418391341181_0102, File: hdfs://mob1l0r0k.appdb.ngIBMD.prod.bide.de.tmo:8020/mapred/sqoop2/.staging/job_1418391341181_0102/job_1418391341181_0102_1.jhist
  2014-12-18 14:07:51,993 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:9 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
  2014-12-18 14:07:52,040 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1418391341181_0102: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:0, vCores:0> knownNMs=1

 

 

stderr:

 Dec 18, 2014 2:07:52 PM com.google.inject.servlet.InternalServletModule$BackwardsCompatibleServletContextProvider get
  WARNING: You are attempting to use a deprecated API (specifically, attempting to @Inject ServletContext inside an eagerly created singleton. While we allow this for backwards compatibility, be warned that this MAY have unexpected behavior if you have more than one injector (with ServletModule) running in the same JVM. Please consult the Guice documentation at http://code.google.com/p/google-guice/wiki/Servlets for more information.
  Dec 18, 2014 2:07:52 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
  INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver as a provider class
  Dec 18, 2014 2:07:52 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
  INFO: Registering org.apache.hadoop.yarn.webapp.GenericExceptionHandler as a provider class
  Dec 18, 2014 2:07:52 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory register
  INFO: Registering org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices as a root resource class
  Dec 18, 2014 2:07:52 PM com.sun.jersey.server.impl.application.WebApplicationImpl _initiate
  INFO: Initiating Jersey application, version 'Jersey: 1.9 09/02/2011 11:17 AM'
  Dec 18, 2014 2:07:52 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
  INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.JAXBContextResolver to GuiceManagedComponentProvider with the scope "Singleton"
  Dec 18, 2014 2:07:52 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
  INFO: Binding org.apache.hadoop.yarn.webapp.GenericExceptionHandler to GuiceManagedComponentProvider with the scope "Singleton"
  Dec 18, 2014 2:07:53 PM com.sun.jersey.guice.spi.container.GuiceComponentProviderFactory getComponentProvider
  INFO: Binding org.apache.hadoop.mapreduce.v2.app.webapp.AMWebServices to GuiceManagedComponentProvider with the scope "PerRequest"
  2014-12-18 14:08:02,465 [main] INFO  org.apache.sqoop.job.mr.SqoopOutputFormatLoadExecutor  - SqoopOutputFormatLoadExecutor::SqoopRecordWriter is closed
  2014-12-18 14:08:07,768 [main] INFO  org.apache.sqoop.job.etl.HdfsExportExtractor  - Start position: 107
  2014-12-18 14:08:07,768 [main] INFO  org.apache.sqoop.job.etl.HdfsExportExtractor  - Extracting ended on position: 107
  2014-12-18 14:08:07,768 [main] INFO  org.apache.sqoop.job.mr.SqoopMapper  - Extractor has finished
  2014-12-18 14:08:07,770 [main] INFO  org.apache.sqoop.job.mr.SqoopMapper  - Stopping progress service
  2014-12-18 14:08:07,775 [main] INFO  org.apache.sqoop.job.mr.SqoopOutputFormatLoadExecutor  - SqoopOutputFormatLoadExecutor::SqoopRecordWriter is about to be closed
  2014-12-18 14:08:07,989 [OutputFormatLoader-consumer] INFO  org.apache.sqoop.job.mr.SqoopOutputFormatLoadExecutor  - Loader has finished
  2014-12-18 14:08:07,989 [main] INFO  org.apache.sqoop.job.mr.SqoopOutputFormatLoadExecutor  - SqoopOutputFormatLoadExecutor::SqoopRecordWriter is closed

1 ACCEPTED SOLUTION

So I figured it out myself: I first had to specify the table column names in the Sqoop job, like this:

filename,calling_number,calling_IMSI,called_number,calling_first_cell_id,calling_last_cell_id,starttime,endtime,duration,cause_for_termination,call_type

 

and then, in the data, I had to wrap all the string values in single quotes, like this:

'WWWWWWW','EEEEEEEE','FFFFFFF','DDDDDDDDD','VVVVVVVVV','CCCCCCCCCCC','XXXXXXXXXX','YYYYYYYYYY','AAAAAAAAAAAA','RRRRRRRRRRR','SSSSSSS'

 

presumably so the exporter recognizes the values as strings rather than numbers.
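In case it helps anyone else, here is a tiny sketch of that quoting step as a pre-processing pass (a hypothetical helper, not anything from Sqoop itself; it assumes the fields contain no commas or embedded quotes):

import java.util.Arrays;
import java.util.stream.Collectors;

public class QuoteFields {
    // Wraps each comma-separated field in single quotes so the Sqoop2
    // exporter treats every value as a string instead of trying to
    // parse it as a number.
    static String quoteLine(String line) {
        return Arrays.stream(line.split(",", -1))
                .map(f -> "'" + f + "'")
                .collect(Collectors.joining(","));
    }

    public static void main(String[] args) {
        String raw = "WWWWWWW,EEEEEEEE,FFFFFFF";
        System.out.println(quoteLine(raw)); // 'WWWWWWW','EEEEEEEE','FFFFFFF'
    }
}

Lines processed this way would need to be written back to HDFS before running the export job.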

 
