Created 04-25-2018 09:11 PM
This has now happened twice: once yesterday and again today. Restarting the Hive service gets it working again, but what is the underlying issue?
My Hive query fails with the following error:
java.io.InterruptedIOException: Interrupted while waiting for data to be acknowledged by pipeline
INFO  : In order to change the average load for a reducer (in bytes):
INFO  :   set mapreduce.job.reduces=<number>
INFO  :   set hive.exec.reducers.bytes.per.reducer=<number>
INFO  : In order to limit the maximum number of reducers:
INFO  :   set hive.exec.reducers.max=<number>
INFO  : In order to set a constant number of reducers:
INFO  :   set mapreduce.job.reduces=<number>
ERROR : Execution failed with exit status: 1
ERROR : Obtaining error information
ERROR : Task failed!
          Task ID: Stage-20
          Logs:
ERROR : /var/log/hive/hadoop-cmf-CD-HIVE-XCVXskZf-HIVESERVER2-ip-172-31-4-192.ap-south-1.compute.internal.log.out
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
WARN  : Shutting down task : Stage-1:MAPRED
WARN  : Shutting down task : Stage-7:MAPRED
WARN  : Shutting down task : Stage-11:MAPRED
INFO  : Completed executing command(queryId=hive_20180425102525_88c62c1c-a506-4756-9ee4-87f218852e45); Time taken: 0.188 seconds
INFO  : Cleaning up the staging area /user/hue/.staging/job_1524082403477_35313
INFO  : Cleaning up the staging area /user/hue/.staging/job_1524082403477_35312
INFO  : Cleaning up the staging area /user/hue/.staging/job_1524082403477_35314
ERROR : Job Submission failed with exception 'java.io.InterruptedIOException(Interrupted while waiting for data to be acknowledged by pipeline)'
java.io.InterruptedIOException: Interrupted while waiting for data to be acknowledged by pipeline
    at org.apache.hadoop.hdfs.DFSOutputStream.waitForAckedSeqno(DFSOutputStream.java:2520)
    at org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:2498)
    at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2662)
    at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2621)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
    at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:369)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:341)
    at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:292)
    at org.apache.hadoop.mapreduce.JobResourceUploader.copyRemoteFiles(JobResourceUploader.java:203)
    at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:128)
    at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:99)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:194)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:578)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:573)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:573)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:564)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:418)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:142)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:80)
(The same "Job Submission failed" exception with an identical stack trace is logged twice more, once for each of the other two job submissions.)
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask (state=08S01,code=1)
Closing: 0: jdbc:hive2://ip-172-31-4-192.ap-south-1.compute.internal:10000/default
Intercepting System.exit(2)
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.Hive2Main], exit code [2]
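From the stack trace, the exception is thrown while the MapReduce client copies the job resources to HDFS (JobResourceUploader.copyRemoteFiles -> DFSOutputStream.close), i.e. the HDFS write pipeline stops acknowledging packets during job submission. Assuming that points at HDFS/DataNode health rather than at Hive itself (my guess, not confirmed), would checking the DataNodes around the time of the failure be the right direction? For example:

    hdfs dfsadmin -report                                   # any dead DataNodes or nodes low on space?
    hdfs fsck /user/hue/.staging -files -blocks -locations  # block placement under the job staging dir from the log

(Both are standard HDFS commands; /user/hue/.staging is simply the staging path shown in the log above, and they need to be run as a user with the appropriate HDFS permissions.)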
Created 05-04-2018 04:55 AM
Can anyone help with this issue?