Member since: 05-16-2016
Posts: 270
Kudos Received: 18
Solutions: 4

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1702 | 07-23-2016 11:36 AM
 | 3023 | 07-23-2016 11:35 AM
 | 1547 | 06-05-2016 10:41 AM
 | 1141 | 06-05-2016 10:37 AM
05-04-2018
04:55 AM
Is anyone here able to help with this issue?
04-30-2018
08:59 AM
That's right. That makes sense, but your answer does not address the issue I have: I would like to move the SNN to a different node.
04-30-2018
07:29 AM
Currently, in our production cluster, the NameNode and Secondary NameNode are on the same host (the master node). I believe it is advisable to run the SNN on a different node. What steps can I follow to safely move the SNN from one host to another? There is documentation online for moving the NameNode, but I could not find anything that explains how to reliably move the SNN to a new node.
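For reference, a minimal sketch of the usual manual procedure on a non-HA Hadoop 2.x cluster. The hostnames and the checkpoint path below are placeholders, and on a Cloudera Manager cluster you would instead delete the SecondaryNameNode role instance and add it on the new host rather than editing files by hand:

```bash
# 1. Stop the SNN on the current host.
ssh old-master 'hadoop-daemon.sh stop secondarynamenode'

# 2. Point the checkpoint address at the new host in hdfs-site.xml
#    (on every node that carries the client configuration):
#    <property>
#      <name>dfs.namenode.secondary.http-address</name>
#      <value>new-snn-host:50090</value>
#    </property>

# 3. Copy the existing checkpoint directory (wherever
#    dfs.namenode.checkpoint.dir points) so the first checkpoint on the
#    new host does not start from scratch.
ssh old-master 'tar czf /tmp/snn-checkpoint.tgz -C /data/dfs/snn .'
scp old-master:/tmp/snn-checkpoint.tgz new-snn-host:/tmp/
ssh new-snn-host 'mkdir -p /data/dfs/snn && tar xzf /tmp/snn-checkpoint.tgz -C /data/dfs/snn'

# 4. Start the SNN on the new host and watch its log for a completed checkpoint.
ssh new-snn-host 'hadoop-daemon.sh start secondarynamenode'
```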
Labels:
- Apache Hadoop
04-25-2018
09:11 PM
This has now happened twice: yesterday and again today. I restarted the Hive service and it started working again, but what is the underlying issue? My Hive query fails with the error: java.io.InterruptedIOException: Interrupted while waiting for data to be acknowledged by pipeline
INFO : In order to change the average load for a reducer (in bytes):
INFO : set hive.exec.reducers.bytes.per.reducer=<number>
INFO : In order to limit the maximum number of reducers:
INFO : set hive.exec.reducers.max=<number>
INFO : In order to set a constant number of reducers:
INFO : set mapreduce.job.reduces=<number>
ERROR : Execution failed with exit status: 1
ERROR : Obtaining error information
ERROR :
Task failed!
Task ID:
Stage-20
Logs:
ERROR : /var/log/hive/hadoop-cmf-CD-HIVE-XCVXskZf-HIVESERVER2-ip-172-31-4-192.ap-south-1.compute.internal.log.out
ERROR : FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
WARN : Shutting down task : Stage-1:MAPRED
WARN : Shutting down task : Stage-7:MAPRED
WARN : Shutting down task : Stage-11:MAPRED
INFO : Completed executing command(queryId=hive_20180425102525_88c62c1c-a506-4756-9ee4-87f218852e45); Time taken: 0.188 seconds
INFO : Cleaning up the staging area /user/hue/.staging/job_1524082403477_35313
INFO : Cleaning up the staging area /user/hue/.staging/job_1524082403477_35312
INFO : Cleaning up the staging area /user/hue/.staging/job_1524082403477_35314
ERROR : Job Submission failed with exception 'java.io.InterruptedIOException(Interrupted while waiting for data to be acknowledged by pipeline)'
java.io.InterruptedIOException: Interrupted while waiting for data to be acknowledged by pipeline
at org.apache.hadoop.hdfs.DFSOutputStream.waitForAckedSeqno(DFSOutputStream.java:2520)
at org.apache.hadoop.hdfs.DFSOutputStream.flushInternal(DFSOutputStream.java:2498)
at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2662)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2621)
at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:61)
at org.apache.hadoop.io.IOUtils.copyBytes(IOUtils.java:119)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:369)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:341)
at org.apache.hadoop.fs.FileUtil.copy(FileUtil.java:292)
at org.apache.hadoop.mapreduce.JobResourceUploader.copyRemoteFiles(JobResourceUploader.java:203)
at org.apache.hadoop.mapreduce.JobResourceUploader.uploadFiles(JobResourceUploader.java:128)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:99)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:194)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:578)
at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:573)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:573)
at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:564)
at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:418)
at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:142)
at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:214)
at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:100)
at org.apache.hadoop.hive.ql.exec.TaskRunner.run(TaskRunner.java:80)
(The same ERROR and stack trace repeats twice more, once for each of the other two job submissions.)
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask (state=08S01,code=1)
Closing: 0: jdbc:hive2://ip-172-31-4-192.ap-south-1.compute.internal:10000/default
Intercepting System.exit(2)
Failing Oozie Launcher, Main class [org.apache.oozie.action.hadoop.Hive2Main], exit code [2]
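For what it's worth, a hedged reading of this log with a sketch of two settings that are commonly tried; the beeline URL is the one from the log above, and the query itself is elided:

```bash
# Not a confirmed diagnosis: the InterruptedIOException looks secondary here.
# Stage-20 (a MapredLocalTask, i.e. the in-memory side of an auto-converted
# map join) failed first, and the parallel Stage-1/7/11 job submissions were
# interrupted while Hive shut them down. The primary error should be in the
# HiveServer2 log referenced above.
beeline -u 'jdbc:hive2://ip-172-31-4-192.ap-south-1.compute.internal:10000/default' -e '
  set hive.auto.convert.join=false;  -- skip the local map-join task that failed
  set hive.exec.parallel=false;      -- run stages serially so the real error surfaces
  -- ... then the original failing query ...
'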
Labels:
- Apache Hive
04-21-2018
12:34 PM
The table, however, still renders like:
"SW-1951","21043","nikhil","Medium","Ready For QA","Feed - Update promotions for Google Merchant Center",3600,NA,"2018-04-21T15:34:12.038+0530"
All of this comes through in a single column.
If I set field.delim to ',', the values spread across columns, but they arrive wrapped in " (quotes), and some of the integer values are lost and appear as NULL.
What's the right way to do this?
Storage information from DESCRIBE FORMATTED for my table:
# Storage Information
SerDe Library:        org.apache.hadoop.hive.serde2.OpenCSVSerde
InputFormat:          org.apache.hadoop.mapred.TextInputFormat
OutputFormat:         org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat
Compressed:           No
Num Buckets:          -1
Bucket Columns:       []
Sort Columns:         []
Storage Desc Params:
  escapeChar            "\""
  separatorChar         ,
  serialization.format  1
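For reference, a minimal sketch of how OpenCSVSerde is usually set up for data like the sample row above. The table name, column names, and LOCATION are made up. One known caveat explains the NULL integers: OpenCSVSerde hands every column back as STRING, so declaring a column INT in the DDL makes non-castable values appear as NULL; the common workaround is to declare strings and cast in a view:

```bash
beeline -u 'jdbc:hive2://ip-172-31-4-192.ap-south-1.compute.internal:10000/default' -e '
CREATE EXTERNAL TABLE jira_issues (
  issue_key STRING, issue_id STRING, assignee STRING, priority STRING,
  status STRING, summary STRING, time_spent STRING, sprint STRING, updated STRING)
ROW FORMAT SERDE "org.apache.hadoop.hive.serde2.OpenCSVSerde"
WITH SERDEPROPERTIES ("separatorChar" = ",", "quoteChar" = "\"")
STORED AS TEXTFILE
LOCATION "/user/hive/warehouse/jira_issues";

-- The SerDe strips the quotes itself; casts restore the numeric types.
CREATE VIEW jira_issues_typed AS
SELECT issue_key, CAST(issue_id AS INT) AS issue_id, assignee, priority,
       status, summary, CAST(time_spent AS INT) AS time_spent, sprint, updated
FROM jira_issues;'
```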
Labels:
- Apache Hive
04-19-2018
09:39 AM
I am trying to distcp HDFS data to S3 and get the error below. How do I fix it, and why does it say it failed to close the file?

Exception in thread "pool-9-thread-1" java.lang.OutOfMemoryError: GC overhead limit exceeded
at java.util.Arrays.copyOf(Arrays.java:2367)
at java.lang.AbstractStringBuilder.expandCapacity(AbstractStringBuilder.java:130)
at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:114)
at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:415)
at java.lang.StringBuffer.append(StringBuffer.java:237)
at java.net.URI.appendSchemeSpecificPart(URI.java:1892)
at java.net.URI.toString(URI.java:1922)
at java.net.URI.<init>(URI.java:749)
at org.apache.hadoop.fs.Path.<init>(Path.java:109)
at org.apache.hadoop.fs.Path.<init>(Path.java:94)
at org.apache.hadoop.hdfs.protocol.HdfsFileStatus.getFullPath(HdfsFileStatus.java:230)
at org.apache.hadoop.hdfs.protocol.HdfsFileStatus.makeQualified(HdfsFileStatus.java:263)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatusInternal(DistributedFileSystem.java:772)
at org.apache.hadoop.hdfs.DistributedFileSystem.access$600(DistributedFileSystem.java:110)
at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:796)
at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:792)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.listStatus(DistributedFileSystem.java:792)
at org.apache.hadoop.tools.SimpleCopyListing$FileStatusProcessor.getFileStatus(SimpleCopyListing.java:444)
at org.apache.hadoop.tools.SimpleCopyListing$FileStatusProcessor.processItem(SimpleCopyListing.java:485)
at org.apache.hadoop.tools.util.ProducerConsumer$Worker.run(ProducerConsumer.java:189)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
^C18/04/19 14:57:27 ERROR hdfs.DFSClient: Failed to close inode 66296599
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /user/centos/.staging/_distcp-940500037/fileList.seq (inode 66296599): File does not exist. Holder DFSClient_NONMAPREDUCE_186544109_1 does not have any open files.
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:3663)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:3750)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:3720)
at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:745)
at org.apache.hadoop.hdfs.server.namenode.AuthorizationProviderProxyClientProtocol.complete(AuthorizationProviderProxyClientProtocol.java:245)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:540)
at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2216)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2212)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1920)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2210)
at org.apache.hadoop.ipc.Client.call(Client.java:1472)
at org.apache.hadoop.ipc.Client.call(Client.java:1409)
at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:230)
at com.sun.proxy.$Proxy23.complete(Unknown Source)
at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:457)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:256)
at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:104)
at com.sun.proxy.$Proxy24.complete(Unknown Source)
at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2690)
at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2667)
at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2621)
at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:987)
at org.apache.hadoop.hdfs.DFSClient.closeOutputStreams(DFSClient.java:1019)
at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:1022)
at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2897)
at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2914)
at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)
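A hedged sketch of the usual fix: the OutOfMemoryError happens while DistCp's SimpleCopyListing walks the source tree inside the client JVM, so the standard remedy is simply more client heap. The "Failed to close" LeaseExpiredException is most likely secondary: after the OOM/Ctrl-C, the shutdown hook tries to close the half-written fileList.seq whose lease the NameNode has already revoked. The heap size, paths, and bucket name below are placeholders:

```bash
# Give the distcp client JVM more heap for the copy-listing phase.
export HADOOP_CLIENT_OPTS="-Xmx8g"
hadoop distcp hdfs:///data/to/copy s3a://my-bucket/backup/
```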
Labels:
- Apache Hadoop
- HDFS
- Security
04-04-2018
04:33 AM
I needed to whitelist ports 8020 and 50071 on the Hadoop cluster instance. That worked 🙂 Thank you!
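For anyone hitting the same thing, a purely illustrative sketch, assuming the cluster runs on EC2 (the hostnames in this thread are ap-south-1 instances) and the ports are blocked by a security group; the group ID and source CIDR are placeholders:

```bash
# Open the two ports to the host running NiFi.
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 8020 --cidr 203.0.113.0/24
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
  --protocol tcp --port 50071 --cidr 203.0.113.0/24
```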
04-03-2018
07:06 PM
I am using Cloudera with NiFi, so I got my config files from the Cloudera interface and replaced the private IP with the public one. In core-site.xml, only port 8020 is used, which I believe is not mapped to any other port. @Matt Burgess
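For context, the 8020 in core-site.xml normally comes from fs.defaultFS, the NameNode RPC endpoint that NiFi's HDFS processors dial; the hostname in the value below is illustrative:

```bash
grep -A1 'fs.defaultFS' core-site.xml
# <name>fs.defaultFS</name>
# <value>hdfs://ip-172-31-4-192.ap-south-1.compute.internal:8020</value>
```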
04-03-2018
06:41 PM
@Matt Burgess: What does this error indicate? HDFS configuration error - org.apache.hadoop.net.ConnectTimeoutException: 1000 millis timeout while waiting for channel to be ready for connect. ch : java.nio.channels.SocketChannel