
Oozie Sqoop action fails when importing data from Teradata

Solved

Expert Contributor

I am trying to schedule a Sqoop action with Oozie to import data from a Teradata database, but it fails (the same job completes correctly when run with standalone Sqoop).

 

This is the job.properties:

 

nameNode=hdfs://quickstart.cloudera:8020
jobTracker=localhost:8032
oozie.wf.application.path=${nameNode}/user/cloudera/oozie/sqoop-teradata-app
oozie.use.system.libpath=true
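For reference, these four properties are all the Oozie CLI needs to submit the workflow. The sketch below recreates the job.properties from above and then submits it, with the submit step guarded so the snippet is a no-op on a machine without an `oozie` client (the Oozie URL is the quickstart VM default assumed from this environment, not something stated in the question):

```shell
# Recreate the job.properties shown above (quoted heredoc so ${nameNode}
# is written literally, as Oozie itself resolves it at submit time).
cat > job.properties <<'EOF'
nameNode=hdfs://quickstart.cloudera:8020
jobTracker=localhost:8032
oozie.wf.application.path=${nameNode}/user/cloudera/oozie/sqoop-teradata-app
oozie.use.system.libpath=true
EOF

# Submit only when an oozie client is actually on the PATH.
if command -v oozie >/dev/null 2>&1; then
    oozie job -oozie http://quickstart.cloudera:11000/oozie \
              -config job.properties -run
fi
```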


This is the workflow.xml:

 

<workflow-app name="OOZIE_SQOOP_WF" xmlns="uri:oozie:workflow:0.4">
    
	<start to="sqoop_action" />		

	<action name="sqoop_action">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <prepare>
                <delete path="${nameNode}/user/cloudera/sqoop/MyTable"/>
            </prepare>
            <command>import --connect jdbc:teradata://172.31.7.69/DATABASE=dbname --username admin --password admin --table MyTable --split-by Id --fields-terminated-by '\t' --warehouse-dir sqoop</command>			
        </sqoop>
        <ok to="success"/>
        <error to="fail"/>
    </action>

	<kill name="fail">
		<message>JOB FAILED!</message>
	</kill>

	<end name="success"/>	
    
</workflow-app>
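One thing worth knowing about the `<command>` element: Oozie splits it on whitespace and does not apply shell quoting, so the quoted `'\t'` in the command above reaches Sqoop with the quote characters attached. A sketch of the same action rewritten with one `<arg>` element per token (values copied from the question) sidesteps the quoting issue entirely:

```xml
<sqoop xmlns="uri:oozie:sqoop-action:0.2">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <prepare>
        <delete path="${nameNode}/user/cloudera/sqoop/MyTable"/>
    </prepare>
    <arg>import</arg>
    <arg>--connect</arg>
    <arg>jdbc:teradata://172.31.7.69/DATABASE=dbname</arg>
    <arg>--username</arg>
    <arg>admin</arg>
    <arg>--password</arg>
    <arg>admin</arg>
    <arg>--table</arg>
    <arg>MyTable</arg>
    <arg>--split-by</arg>
    <arg>Id</arg>
    <arg>--fields-terminated-by</arg>
    <arg>\t</arg>
    <arg>--warehouse-dir</arg>
    <arg>sqoop</arg>
</sqoop>
```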

I also registered the connector factory for Sqoop with the following entry:

com.cloudera.connector.teradata.TeradataManagerFactory=/var/lib/sqoop/sqoop-connector-teradata-1.6c5.jar

 
and added the following property to /etc/sqoop/conf.dist/sqoop-site.xml:

 

<property>
    <name>sqoop.connection.factories</name>
    <value>com.cloudera.connector.teradata.TeradataManagerFactory</value>
</property>

  

The action fails (final status FAILED/KILLED) with the error message "Main class [org.apache.oozie.action.hadoop.SqoopMain], exit code [1]". This is the launcher job log, where I can't see any error or exception:

  

2018-01-11 02:35:45,388 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Created MRAppMaster for application appattempt_1515662938068_0005_000001
2018-01-11 02:35:46,099 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Executing with tokens:
2018-01-11 02:35:46,099 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: YARN_AM_RM_TOKEN, Service: , Ident: (org.apache.hadoop.yarn.security.AMRMTokenIdentifier@19a3f495)
2018-01-11 02:35:46,748 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Kind: RM_DELEGATION_TOKEN, Service: 127.0.0.1:8032, Ident: (RM_DELEGATION_TOKEN owner=cloudera, renewer=oozie mr token, realUser=oozie, issueDate=1515666939966, maxDate=1516271739966, sequenceNumber=12, masterKeyId=2)
2018-01-11 02:35:46,800 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter set in config org.apache.oozie.action.hadoop.OozieLauncherOutputCommitter
2018-01-11 02:35:46,804 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: OutputCommitter is org.apache.oozie.action.hadoop.OozieLauncherOutputCommitter
2018-01-11 02:35:48,037 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.jobhistory.EventType for class org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler
2018-01-11 02:35:48,040 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobEventDispatcher
2018-01-11 02:35:48,042 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskEventDispatcher
2018-01-11 02:35:48,044 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.TaskAttemptEventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$TaskAttemptEventDispatcher
2018-01-11 02:35:48,045 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventType for class org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler
2018-01-11 02:35:48,050 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.speculate.Speculator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$SpeculatorEventDispatcher
2018-01-11 02:35:48,051 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.rm.ContainerAllocator$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter
2018-01-11 02:35:48,054 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncher$EventType for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerLauncherRouter
2018-01-11 02:35:48,147 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://quickstart.cloudera:8020]
2018-01-11 02:35:48,214 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://quickstart.cloudera:8020]
2018-01-11 02:35:48,256 INFO [main] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://quickstart.cloudera:8020]
2018-01-11 02:35:48,277 INFO [main] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Emitting job history data to the timeline server is not enabled
2018-01-11 02:35:48,371 INFO [main] org.apache.hadoop.yarn.event.AsyncDispatcher: Registering class org.apache.hadoop.mapreduce.v2.app.job.event.JobFinishEvent$Type for class org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler
2018-01-11 02:35:49,013 WARN [main] org.apache.hadoop.metrics2.impl.MetricsConfig: Cannot locate configuration: tried hadoop-metrics2-mrappmaster.properties,hadoop-metrics2.properties
2018-01-11 02:35:49,152 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 10 second(s).
2018-01-11 02:35:49,152 INFO [main] org.apache.hadoop.metrics2.impl.MetricsSystemImpl: MRAppMaster metrics system started
2018-01-11 02:35:49,175 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Adding job token for job_1515662938068_0005 to jobTokenSecretManager
2018-01-11 02:35:49,477 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Not uberizing job_1515662938068_0005 because: not enabled;
2018-01-11 02:35:49,522 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Input size for job job_1515662938068_0005 = 0. Number of splits = 1
2018-01-11 02:35:49,522 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Number of reduces for job job_1515662938068_0005 = 0
2018-01-11 02:35:49,522 INFO [main] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1515662938068_0005Job Transitioned from NEW to INITED
2018-01-11 02:35:49,524 INFO [main] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: MRAppMaster launching normal, non-uberized, multi-container job job_1515662938068_0005.
2018-01-11 02:35:49,582 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 100
2018-01-11 02:35:49,602 INFO [Socket Reader #1 for port 55025] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 55025
2018-01-11 02:35:49,703 INFO [main] org.apache.hadoop.yarn.factories.impl.pb.RpcServerFactoryPBImpl: Adding protocol org.apache.hadoop.mapreduce.v2.api.MRClientProtocolPB to the server
2018-01-11 02:35:49,704 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-01-11 02:35:49,707 INFO [IPC Server listener on 55025] org.apache.hadoop.ipc.Server: IPC Server listener on 55025: starting
2018-01-11 02:35:49,713 INFO [main] org.apache.hadoop.mapreduce.v2.app.client.MRClientService: Instantiated MRClientService at quickstart.cloudera/127.0.0.1:55025
2018-01-11 02:35:49,886 INFO [main] org.mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
2018-01-11 02:35:49,960 INFO [main] org.apache.hadoop.security.authentication.server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
2018-01-11 02:35:49,968 INFO [main] org.apache.hadoop.http.HttpRequestLog: Http request log for http.requests.mapreduce is not defined
2018-01-11 02:35:49,995 INFO [main] org.apache.hadoop.http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
2018-01-11 02:35:50,007 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context mapreduce
2018-01-11 02:35:50,007 INFO [main] org.apache.hadoop.http.HttpServer2: Added filter AM_PROXY_FILTER (class=org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter) to context static
2018-01-11 02:35:50,066 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /mapreduce/*
2018-01-11 02:35:50,066 INFO [main] org.apache.hadoop.http.HttpServer2: adding path spec: /ws/*
2018-01-11 02:35:50,085 INFO [main] org.apache.hadoop.http.HttpServer2: Jetty bound to port 52908
2018-01-11 02:35:50,085 INFO [main] org.mortbay.log: jetty-6.1.26.cloudera.4
2018-01-11 02:35:50,136 INFO [main] org.mortbay.log: Extract jar:file:/usr/lib/hadoop-yarn/hadoop-yarn-common-2.6.0-cdh5.12.0.jar!/webapps/mapreduce to ./tmp/Jetty_0_0_0_0_52908_mapreduce____3vp14l/webapp
2018-01-11 02:35:50,569 INFO [main] org.mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:52908
2018-01-11 02:35:50,569 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Web app /mapreduce started at 52908
2018-01-11 02:35:51,415 INFO [main] org.apache.hadoop.yarn.webapp.WebApps: Registered webapp guice modules
2018-01-11 02:35:51,423 INFO [main] org.apache.hadoop.ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 3000
2018-01-11 02:35:51,424 INFO [Socket Reader #1 for port 45459] org.apache.hadoop.ipc.Server: Starting Socket Reader #1 for port 45459
2018-01-11 02:35:51,440 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: IPC Server Responder: starting
2018-01-11 02:35:51,441 INFO [IPC Server listener on 45459] org.apache.hadoop.ipc.Server: IPC Server listener on 45459: starting
2018-01-11 02:35:51,492 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: nodeBlacklistingEnabled:true
2018-01-11 02:35:51,492 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: maxTaskFailuresPerNode is 3
2018-01-11 02:35:51,492 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: blacklistDisablePercent is 33
2018-01-11 02:35:51,557 INFO [main] org.apache.hadoop.yarn.client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8030
2018-01-11 02:35:51,702 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: maxContainerCapability: <memory:8192, vCores:4>
2018-01-11 02:35:51,702 INFO [main] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: queue: root.cloudera
2018-01-11 02:35:51,707 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Upper limit on the thread pool size is 500
2018-01-11 02:35:51,707 INFO [main] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: The thread pool initial size is 10
2018-01-11 02:35:51,727 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1515662938068_0005Job Transitioned from INITED to SETUP
2018-01-11 02:35:51,736 INFO [CommitterEvent Processor #0] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_SETUP
2018-01-11 02:35:51,738 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1515662938068_0005Job Transitioned from SETUP to RUNNING
2018-01-11 02:35:51,805 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1515662938068_0005_m_000000 Task Transitioned from NEW to SCHEDULED
2018-01-11 02:35:51,811 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1515662938068_0005_m_000000_0 TaskAttempt Transitioned from NEW to UNASSIGNED
2018-01-11 02:35:51,813 INFO [Thread-53] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: mapResourceRequest:<memory:1024, vCores:1>
2018-01-11 02:35:51,867 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Event Writer setup for JobId: job_1515662938068_0005, File: hdfs://quickstart.cloudera:8020/tmp/hadoop-yarn/staging/cloudera/.staging/job_1515662938068_0005/job_1515662938068_0005_1.jhist
2018-01-11 02:35:52,551 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.v2.jobhistory.JobHistoryUtils: Default file system [hdfs://quickstart.cloudera:8020]
2018-01-11 02:35:52,705 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Before Scheduling: PendingReds:0 ScheduledMaps:1 ScheduledReds:0 AssignedMaps:0 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:0 ContRel:0 HostLocal:0 RackLocal:0
2018-01-11 02:35:52,776 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1515662938068_0005: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:6144, vCores:7> knownNMs=1
2018-01-11 02:35:54,798 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Got allocated containers 1
2018-01-11 02:35:54,802 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Assigned container container_1515662938068_0005_01_000002 to attempt_1515662938068_0005_m_000000_0
2018-01-11 02:35:54,803 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: After Scheduling: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0
2018-01-11 02:35:54,886 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Job jar is not present. Not adding any jar to the list of resources.
2018-01-11 02:35:54,913 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: The job-conf file on the remote FS is /tmp/hadoop-yarn/staging/cloudera/.staging/job_1515662938068_0005/job.xml
2018-01-11 02:35:55,137 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Adding #1 tokens and #1 secret keys for NM use for launching container
2018-01-11 02:35:55,137 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Size of containertokens_dob is 2
2018-01-11 02:35:55,137 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: Putting shuffle token in serviceData
2018-01-11 02:35:55,648 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapred.JobConf: Task java-opts do not specify heap size. Setting task attempt jvm max heap size to -Xmx820m
2018-01-11 02:35:55,672 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1515662938068_0005_m_000000_0 TaskAttempt Transitioned from UNASSIGNED to ASSIGNED
2018-01-11 02:35:55,682 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Processing the event EventType: CONTAINER_REMOTE_LAUNCH for container container_1515662938068_0005_01_000002 taskAttempt attempt_1515662938068_0005_m_000000_0
2018-01-11 02:35:55,690 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Launching attempt_1515662938068_0005_m_000000_0
2018-01-11 02:35:55,830 INFO [RMCommunicator Allocator] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerRequestor: getResources() for application_1515662938068_0005: ask=1 release= 0 newContainers=0 finishedContainers=0 resourcelimit=<memory:5120, vCores:6> knownNMs=1
2018-01-11 02:35:56,004 INFO [ContainerLauncher #0] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: Shuffle port returned by ContainerManager for attempt_1515662938068_0005_m_000000_0 : 13562
2018-01-11 02:35:56,006 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: TaskAttempt: [attempt_1515662938068_0005_m_000000_0] using containerId: [container_1515662938068_0005_01_000002 on NM: [quickstart.cloudera:34543]
2018-01-11 02:35:56,011 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1515662938068_0005_m_000000_0 TaskAttempt Transitioned from ASSIGNED to RUNNING
2018-01-11 02:35:56,011 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1515662938068_0005_m_000000 Task Transitioned from SCHEDULED to RUNNING
2018-01-11 02:35:59,461 INFO [Socket Reader #1 for port 45459] SecurityLogger.org.apache.hadoop.ipc.Server: Auth successful for job_1515662938068_0005 (auth:SIMPLE)
2018-01-11 02:35:59,495 INFO [IPC Server handler 1 on 45459] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID : jvm_1515662938068_0005_m_000002 asked for a task
2018-01-11 02:35:59,496 INFO [IPC Server handler 1 on 45459] org.apache.hadoop.mapred.TaskAttemptListenerImpl: JVM with ID: jvm_1515662938068_0005_m_000002 given task: attempt_1515662938068_0005_m_000000_0
2018-01-11 02:36:03,205 INFO [IPC Server handler 16 on 45459] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1515662938068_0005_m_000000_0 is : 0.0
2018-01-11 02:36:03,341 INFO [IPC Server handler 17 on 45459] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Progress of TaskAttempt attempt_1515662938068_0005_m_000000_0 is : 1.0
2018-01-11 02:36:03,359 INFO [IPC Server handler 13 on 45459] org.apache.hadoop.mapred.TaskAttemptListenerImpl: Done acknowledgement from attempt_1515662938068_0005_m_000000_0
2018-01-11 02:36:03,365 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1515662938068_0005_m_000000_0 TaskAttempt Transitioned from RUNNING to SUCCESS_FINISHING_CONTAINER
2018-01-11 02:36:03,402 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: Task succeeded with attempt attempt_1515662938068_0005_m_000000_0
2018-01-11 02:36:03,404 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl: task_1515662938068_0005_m_000000 Task Transitioned from RUNNING to SUCCEEDED
2018-01-11 02:36:03,409 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Num completed Tasks: 1
2018-01-11 02:36:03,410 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1515662938068_0005Job Transitioned from RUNNING to COMMITTING
2018-01-11 02:36:03,411 INFO [CommitterEvent Processor #1] org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler: Processing the event EventType: JOB_COMMIT
2018-01-11 02:36:03,443 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: Calling handler for JobFinishedEvent 
2018-01-11 02:36:03,444 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl: job_1515662938068_0005Job Transitioned from COMMITTING to SUCCEEDED
2018-01-11 02:36:03,452 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: We are finishing cleanly so this is the last retry
2018-01-11 02:36:03,452 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify RMCommunicator isAMLastRetry: true
2018-01-11 02:36:03,452 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: RMCommunicator notified that shouldUnregistered is: true
2018-01-11 02:36:03,452 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Notify JHEH isAMLastRetry: true
2018-01-11 02:36:03,452 INFO [Thread-68] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: JobHistoryEventHandler notified that forceJobCompletion is true
2018-01-11 02:36:03,452 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Calling stop for all the services
2018-01-11 02:36:03,454 INFO [Thread-68] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopping JobHistoryEventHandler. Size of the outstanding queue size is 0
2018-01-11 02:36:03,581 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://quickstart.cloudera:8020/tmp/hadoop-yarn/staging/cloudera/.staging/job_1515662938068_0005/job_1515662938068_0005_1.jhist to hdfs://quickstart.cloudera:8020/tmp/hadoop-yarn/staging/history/done_intermediate/cloudera/job_1515662938068_0005-1515666941120-cloudera-oozie%3Alauncher%3AT%3Dsqoop%3AW%3DOOZIE_SQOOP_WF%3AA%3Dsqoop_ac-1515666963441-1-0-SUCCEEDED-root.cloudera-1515666951717.jhist_tmp
2018-01-11 02:36:03,677 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://quickstart.cloudera:8020/tmp/hadoop-yarn/staging/history/done_intermediate/cloudera/job_1515662938068_0005-1515666941120-cloudera-oozie%3Alauncher%3AT%3Dsqoop%3AW%3DOOZIE_SQOOP_WF%3AA%3Dsqoop_ac-1515666963441-1-0-SUCCEEDED-root.cloudera-1515666951717.jhist_tmp
2018-01-11 02:36:03,683 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copying hdfs://quickstart.cloudera:8020/tmp/hadoop-yarn/staging/cloudera/.staging/job_1515662938068_0005/job_1515662938068_0005_1_conf.xml to hdfs://quickstart.cloudera:8020/tmp/hadoop-yarn/staging/history/done_intermediate/cloudera/job_1515662938068_0005_conf.xml_tmp
2018-01-11 02:36:03,722 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Copied to done location: hdfs://quickstart.cloudera:8020/tmp/hadoop-yarn/staging/history/done_intermediate/cloudera/job_1515662938068_0005_conf.xml_tmp
2018-01-11 02:36:03,732 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://quickstart.cloudera:8020/tmp/hadoop-yarn/staging/history/done_intermediate/cloudera/job_1515662938068_0005.summary_tmp to hdfs://quickstart.cloudera:8020/tmp/hadoop-yarn/staging/history/done_intermediate/cloudera/job_1515662938068_0005.summary
2018-01-11 02:36:03,735 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://quickstart.cloudera:8020/tmp/hadoop-yarn/staging/history/done_intermediate/cloudera/job_1515662938068_0005_conf.xml_tmp to hdfs://quickstart.cloudera:8020/tmp/hadoop-yarn/staging/history/done_intermediate/cloudera/job_1515662938068_0005_conf.xml
2018-01-11 02:36:03,739 INFO [eventHandlingThread] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Moved tmp to done: hdfs://quickstart.cloudera:8020/tmp/hadoop-yarn/staging/history/done_intermediate/cloudera/job_1515662938068_0005-1515666941120-cloudera-oozie%3Alauncher%3AT%3Dsqoop%3AW%3DOOZIE_SQOOP_WF%3AA%3Dsqoop_ac-1515666963441-1-0-SUCCEEDED-root.cloudera-1515666951717.jhist_tmp to hdfs://quickstart.cloudera:8020/tmp/hadoop-yarn/staging/history/done_intermediate/cloudera/job_1515662938068_0005-1515666941120-cloudera-oozie%3Alauncher%3AT%3Dsqoop%3AW%3DOOZIE_SQOOP_WF%3AA%3Dsqoop_ac-1515666963441-1-0-SUCCEEDED-root.cloudera-1515666951717.jhist
2018-01-11 02:36:03,742 INFO [Thread-68] org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler: Stopped JobHistoryEventHandler. super.stop()
2018-01-11 02:36:03,743 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.launcher.ContainerLauncherImpl: KILLING attempt_1515662938068_0005_m_000000_0
2018-01-11 02:36:03,773 INFO [AsyncDispatcher event handler] org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl: attempt_1515662938068_0005_m_000000_0 TaskAttempt Transitioned from SUCCESS_FINISHING_CONTAINER to SUCCEEDED
2018-01-11 02:36:03,782 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Setting job diagnostics to 
2018-01-11 02:36:03,783 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: History url is http://quickstart.cloudera:19888/jobhistory/job/job_1515662938068_0005
2018-01-11 02:36:03,797 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Waiting for application to be successfully unregistered.
2018-01-11 02:36:04,800 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:1 AssignedReds:0 CompletedMaps:0 CompletedReds:0 ContAlloc:1 ContRel:0 HostLocal:0 RackLocal:0
2018-01-11 02:36:04,802 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Deleting staging directory hdfs://quickstart.cloudera:8020 /tmp/hadoop-yarn/staging/cloudera/.staging/job_1515662938068_0005
2018-01-11 02:36:04,813 INFO [Thread-68] org.apache.hadoop.ipc.Server: Stopping server on 45459
2018-01-11 02:36:04,819 INFO [IPC Server listener on 45459] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 45459
2018-01-11 02:36:04,820 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2018-01-11 02:36:04,822 INFO [TaskHeartbeatHandler PingChecker] org.apache.hadoop.mapreduce.v2.app.TaskHeartbeatHandler: TaskHeartbeatHandler thread interrupted
2018-01-11 02:36:04,828 INFO [Ping Checker] org.apache.hadoop.yarn.util.AbstractLivelinessMonitor: TaskAttemptFinishingMonitor thread interrupted
2018-01-11 02:36:04,841 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Job end notification started for jobID : job_1515662938068_0005
2018-01-11 02:36:04,843 INFO [Thread-68] org.mortbay.log: Job end notification attempts left 0
2018-01-11 02:36:04,843 INFO [Thread-68] org.mortbay.log: Job end notification trying http://quickstart.cloudera:11000/oozie/callback?id=0000002-180111013033300-oozie-oozi-W@sqoop_action&status=SUCCEEDED
2018-01-11 02:36:04,863 INFO [Thread-68] org.mortbay.log: Job end notification to http://quickstart.cloudera:11000/oozie/callback?id=0000002-180111013033300-oozie-oozi-W@sqoop_action&status=SUCCEEDED succeeded
2018-01-11 02:36:04,863 INFO [Thread-68] org.mortbay.log: Job end notification succeeded for job_1515662938068_0005
2018-01-11 02:36:09,868 INFO [Thread-68] org.apache.hadoop.ipc.Server: Stopping server on 55025
2018-01-11 02:36:09,959 INFO [IPC Server listener on 55025] org.apache.hadoop.ipc.Server: Stopping IPC Server listener on 55025
2018-01-11 02:36:10,048 INFO [IPC Server Responder] org.apache.hadoop.ipc.Server: Stopping IPC Server Responder
2018-01-11 02:36:10,317 INFO [Thread-68] org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:0
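(Note that the log above is the launcher's MRAppMaster log; Sqoop's own stdout/stderr, including any stack trace behind the "exit code [1]", ends up in the launcher container's logs. One way to pull them is the YARN CLI, guarded here so the snippet is a no-op off-cluster; the application id is the one from the log above:)

```shell
# Fetch the launcher container logs and surface likely error lines.
app_id="application_1515662938068_0005"   # launcher application from the log above
if command -v yarn >/dev/null 2>&1; then
    yarn logs -applicationId "$app_id" | grep -i -A5 'exception\|error' || true
fi
```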

 
What am I doing wrong?

1 ACCEPTED SOLUTION


Re: Oozie Sqoop action fails when importing data from Teradata

Expert Contributor

 

 

Finally solved it by adding the connection-manager parameter and putting the Teradata JDBC drivers on the path:

 

sqoop import \
--connection-manager com.teradata.jdbc.TeraDriver \
--connect jdbc:teradata://host/DATABASE=db \
[...]


The following files must be available to Oozie (I put them in the workflow application's lib folder):

  • sqoop-connector-teradata-1.6c5.jar
  • tdgssconfig.jar
  • terajdbc4.jar
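Concretely, "the application lib folder" means a lib/ directory next to workflow.xml under the oozie.wf.application.path in HDFS. A sketch of the staging step (placeholder files stand in for the real jars here, which on the VM come from the connector and Teradata driver installs; the HDFS upload is guarded so the snippet also runs without a cluster):

```shell
# Stage the three jars in lib/ next to workflow.xml, then upload the
# directory to the workflow application path from the question.
mkdir -p sqoop-teradata-app/lib
for jar in sqoop-connector-teradata-1.6c5.jar tdgssconfig.jar terajdbc4.jar; do
    touch "sqoop-teradata-app/lib/$jar"   # placeholder for the real jar
done

# Upload only when an hdfs client is available.
if command -v hdfs >/dev/null 2>&1; then
    hdfs dfs -put -f sqoop-teradata-app/lib \
        /user/cloudera/oozie/sqoop-teradata-app/
fi
```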

 

I think the documentation should be updated, since with the information currently provided the Teradata import works from standalone Sqoop but always fails inside Oozie.

 

 
