04-30-2021
12:19 AM
Hello : )
I created a workflow that imports a Sybase table with Sqoop and then creates a Hive external table on top of the imported data.
In this workflow, a Sqoop action runs first and is connected to a Hive action.
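For reference, the workflow is shaped roughly like this — the names, parameters, and the Sqoop command here are placeholders to show the structure, not my exact definition:

```xml
<workflow-app name="sybase-to-hive" xmlns="uri:oozie:workflow:0.5">
    <start to="sqoop-import"/>

    <!-- Step 1: import the Sybase table into HDFS -->
    <action name="sqoop-import">
        <sqoop xmlns="uri:oozie:sqoop-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <command>import --connect ${sybaseJdbcUrl} --table ${srcTable} --target-dir ${targetDir} -m 1</command>
        </sqoop>
        <ok to="hive-ddl"/>
        <error to="fail"/>
    </action>

    <!-- Step 2: create the Hive external table over the imported files -->
    <action name="hive-ddl">
        <hive xmlns="uri:oozie:hive-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <script>create_external_table.hql</script>
        </hive>
        <ok to="end"/>
        <error to="fail"/>
    </action>

    <kill name="fail">
        <message>Workflow failed: [${wf:errorMessage(wf:lastErrorNode())}]</message>
    </kill>
    <end name="end"/>
</workflow-app>
```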
When I ran the workflow, Oozie launched a Sqoop job, and the import itself succeeded.
But after the Sqoop action completed, the Oozie launcher job failed.
I've attached the log below.
==============================================================
<<< Invocation of Main class completed <<<

Oozie Launcher, uploading action data to HDFS sequence file: hdfs://cluster/user/hlibatch/oozie-oozi/0002066-210129150521242-oozie-oozi-W/sqoop-a1a6--sqoop/action-data.seq
2021-04-30 15:17:43,936 [uber-SubtaskRunner] INFO org.apache.hadoop.io.compress.CodecPool - Got brand-new compressor [.deflate]
Successfully reset security manager from org.apache.oozie.action.hadoop.LauncherSecurityManager@142ad372 to null

Oozie Launcher ends

2021-04-30 15:17:43,968 [uber-SubtaskRunner] INFO org.apache.hadoop.mapred.TaskAttemptListenerImpl - Progress of TaskAttempt attempt_1611900270908_21441_m_000000_0 is : 1.0
2021-04-30 15:17:43,969 [uber-SubtaskRunner] INFO org.apache.hadoop.mapred.Task - Task:attempt_1611900270908_21441_m_000000_0 is done. And is in the process of committing
2021-04-30 15:17:44,190 [uber-SubtaskRunner] INFO org.apache.hadoop.mapred.TaskAttemptListenerImpl - Progress of TaskAttempt attempt_1611900270908_21441_m_000000_0 is : 1.0
2021-04-30 15:17:44,191 [uber-SubtaskRunner] INFO org.apache.hadoop.mapred.TaskAttemptListenerImpl - Done acknowledgement from attempt_1611900270908_21441_m_000000_0
2021-04-30 15:17:44,191 [uber-SubtaskRunner] INFO org.apache.hadoop.mapred.Task - Task 'attempt_1611900270908_21441_m_000000_0' done.
2021-04-30 15:17:44,195 [AsyncDispatcher event handler] INFO org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl - attempt_1611900270908_21441_m_000000_0 TaskAttempt Transitioned from RUNNING to SUCCESS_FINISHING_CONTAINER
2021-04-30 15:17:44,203 [AsyncDispatcher event handler] INFO org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl - Task succeeded with attempt attempt_1611900270908_21441_m_000000_0
2021-04-30 15:17:44,216 [AsyncDispatcher event handler] INFO org.apache.hadoop.mapreduce.v2.app.job.impl.TaskImpl - task_1611900270908_21441_m_000000 Task Transitioned from RUNNING to SUCCEEDED
2021-04-30 15:17:44,216 [AsyncDispatcher event handler] INFO org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl - Num completed Tasks: 1
2021-04-30 15:17:44,221 [AsyncDispatcher event handler] INFO org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl - job_1611900270908_21441Job Transitioned from RUNNING to COMMITTING
2021-04-30 15:17:44,222 [CommitterEvent Processor #1] INFO org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler - Processing the event EventType: JOB_COMMIT
2021-04-30 15:17:44,223 [CommitterEvent Processor #1] ERROR org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler - Could not commit job
java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:893)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1786)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1728)
    at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:438)
    at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:434)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:434)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:375)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:926)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:907)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:804)
    at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.touchz(CommitterEventHandler.java:268)
    at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:282)
    at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:237)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2021-04-30 15:17:44,226 [CommitterEvent Processor #1] ERROR org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler - could not create failure file.
java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:893)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1786)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1728)
    at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:438)
    at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:434)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:434)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:375)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:926)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:907)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:804)
    at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.touchz(CommitterEventHandler.java:268)
    at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:292)
    at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:237)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
2021-04-30 15:17:44,230 [AsyncDispatcher event handler] INFO org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl - job_1611900270908_21441Job Transitioned from COMMITTING to FAIL_ABORT
2021-04-30 15:17:44,230 [CommitterEvent Processor #2] INFO org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler - Processing the event EventType: JOB_ABORT
2021-04-30 15:17:44,241 [AsyncDispatcher event handler] INFO org.apache.hadoop.mapreduce.v2.app.job.impl.JobImpl - job_1611900270908_21441Job Transitioned from FAIL_ABORT to FAILED
2021-04-30 15:17:44,242 [Thread-748] INFO org.apache.hadoop.mapreduce.v2.app.MRAppMaster - We are finishing cleanly so this is the last retry
2021-04-30 15:17:44,243 [Thread-748] INFO org.apache.hadoop.mapreduce.v2.app.MRAppMaster - Notify RMCommunicator isAMLastRetry: true
2021-04-30 15:17:44,244 [Thread-748] INFO org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator - RMCommunicator notified that shouldUnregistered is: true
2021-04-30 15:17:44,244 [Thread-748] INFO org.apache.hadoop.mapreduce.v2.app.MRAppMaster - Notify JHEH isAMLastRetry: true
2021-04-30 15:17:44,244 [Thread-748] INFO org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler - JobHistoryEventHandler notified that forceJobCompletion is true
2021-04-30 15:17:44,244 [Thread-748] INFO org.apache.hadoop.mapreduce.v2.app.MRAppMaster - Calling stop for all the services
2021-04-30 15:17:44,246 [Thread-748] INFO org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler - Stopping JobHistoryEventHandler. Size of the outstanding queue size is 2
2021-04-30 15:17:44,268 [Thread-748] INFO org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler - In stop, writing event TASK_FINISHED
2021-04-30 15:17:44,273 [Thread-748] INFO org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler - In stop, writing event JOB_FAILED
2021-04-30 15:17:44,274 [uber-SubtaskRunner] INFO org.apache.hadoop.mapred.LocalContainerLauncher - removed attempt attempt_1611900270908_21441_m_000000_0 from the futures to keep track of
2021-04-30 15:17:44,274 [AsyncDispatcher event handler] INFO org.apache.hadoop.mapreduce.v2.app.job.impl.TaskAttemptImpl - attempt_1611900270908_21441_m_000000_0 TaskAttempt Transitioned from SUCCESS_FINISHING_CONTAINER to SUCCEEDED
2021-04-30 15:17:44,274 [uber-EventHandler] INFO org.apache.hadoop.mapred.LocalContainerLauncher - Processing the event EventType: CONTAINER_COMPLETED for container container_e11_1611900270908_21441_01_000001 taskAttempt attempt_1611900270908_21441_m_000000_0
2021-04-30 15:17:44,333 [Thread-748] INFO org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler - Copying hdfs://cluster:8020/user/hlibatch/.staging/job_1611900270908_21441/job_1611900270908_21441_1.jhist to hdfs://cluster:8020/user/history/done_intermediate/hlibatch/job_1611900270908_21441-1619749451956-hlibatch-oozie%3Alauncher%3AT%3Dsqoop%3AW%3Dhd_m_sacha_ht_tqmfpmrs_te-1619763464232-1-0-FAILED-root.users.hlibatch-1619749477162.jhist_tmp
2021-04-30 15:17:44,362 [Thread-748] INFO org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler - Copied to done location: hdfs://cluster:8020/user/history/done_intermediate/hlibatch/job_1611900270908_21441-1619749451956-hlibatch-oozie%3Alauncher%3AT%3Dsqoop%3AW%3Dhd_m_sacha_ht_tqmfpmrs_te-1619763464232-1-0-FAILED-root.users.hlibatch-1619749477162.jhist_tmp
2021-04-30 15:17:44,364 [Thread-748] INFO org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler - Copying hdfs://cluster:8020/user/hlibatch/.staging/job_1611900270908_21441/job_1611900270908_21441_1_conf.xml to hdfs://cluster:8020/user/history/done_intermediate/hlibatch/job_1611900270908_21441_conf.xml_tmp
2021-04-30 15:17:44,419 [Thread-748] INFO org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler - Copied to done location: hdfs://cluster:8020/user/history/done_intermediate/hlibatch/job_1611900270908_21441_conf.xml_tmp
2021-04-30 15:17:44,427 [Thread-748] INFO org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler - Moved tmp to done: hdfs://cluster:8020/user/history/done_intermediate/hlibatch/job_1611900270908_21441.summary_tmp to hdfs://cluster:8020/user/history/done_intermediate/hlibatch/job_1611900270908_21441.summary
2021-04-30 15:17:44,429 [Thread-748] INFO org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler - Moved tmp to done: hdfs://cluster:8020/user/history/done_intermediate/hlibatch/job_1611900270908_21441_conf.xml_tmp to hdfs://cluster:8020/user/history/done_intermediate/hlibatch/job_1611900270908_21441_conf.xml
2021-04-30 15:17:44,431 [Thread-748] INFO org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler - Moved tmp to done: hdfs://cluster:8020/user/history/done_intermediate/hlibatch/job_1611900270908_21441-1619749451956-hlibatch-oozie%3Alauncher%3AT%3Dsqoop%3AW%3Dhd_m_sacha_ht_tqmfpmrs_te-1619763464232-1-0-FAILED-root.users.hlibatch-1619749477162.jhist_tmp to hdfs://cluster:8020/user/history/done_intermediate/hlibatch/job_1611900270908_21441-1619749451956-hlibatch-oozie%3Alauncher%3AT%3Dsqoop%3AW%3Dhd_m_sacha_ht_tqmfpmrs_te-1619763464232-1-0-FAILED-root.users.hlibatch-1619749477162.jhist
2021-04-30 15:17:44,431 [Thread-748] INFO org.apache.hadoop.mapreduce.jobhistory.JobHistoryEventHandler - Stopped JobHistoryEventHandler. super.stop()
2021-04-30 15:17:44,432 [uber-EventHandler] ERROR org.apache.hadoop.mapred.LocalContainerLauncher - Returning, interrupted : java.lang.InterruptedException
2021-04-30 15:17:44,435 [Thread-748] INFO org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator - Setting job diagnostics to Job commit failed: java.io.IOException: Filesystem closed
    at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:893)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1786)
    at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1728)
    at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:438)
    at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:434)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:434)
    at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:375)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:926)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:907)
    at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:804)
    at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.touchz(CommitterEventHandler.java:268)
    at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.handleJobCommit(CommitterEventHandler.java:282)
    at org.apache.hadoop.mapreduce.v2.app.commit.CommitterEventHandler$EventProcessor.run(CommitterEventHandler.java:237)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
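From the stack trace, the job commit fails because the committer is reusing an HDFS client handle that something else has already closed. If I understand correctly, Hadoop's FileSystem.get() returns one JVM-wide cached instance per (scheme, authority, user), so a close() by any holder invalidates the handle for every other holder, and the next call fails in DFSClient.checkOpen with exactly this IOException. A toy Python simulation of that cache semantics (not Hadoop code; all names here are made up for illustration):

```python
# Simulate Hadoop's FileSystem.CACHE: get() returns one shared instance
# per URI, so close() by any caller breaks every other caller.
_cache = {}

class FileSystem:
    def __init__(self):
        self._open = True

    @classmethod
    def get(cls, uri):
        # Same URI -> same shared object, like Hadoop's per-(scheme, authority, user) cache
        if uri not in _cache:
            _cache[uri] = cls()
        return _cache[uri]

    def close(self):
        self._open = False

    def create(self, path):
        # Mirrors the DFSClient.checkOpen guard that throws in the log above
        if not self._open:
            raise IOError("Filesystem closed")
        return "created " + path

# The launcher/action side and the MR committer each "get" the filesystem...
fs_action = FileSystem.get("hdfs://cluster")
fs_committer = FileSystem.get("hdfs://cluster")
assert fs_action is fs_committer   # same cached object

fs_action.close()                  # one side closes it when it finishes

try:
    fs_committer.create("/output/_SUCCESS")   # job commit now fails
except IOError as e:
    print(e)                       # -> Filesystem closed
```

So it looks like something running inside the uber-mode launcher closed the shared filesystem before the committer could write its success marker.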