Member since: 06-22-2016
Posts: 41
Kudos Received: 4
Solutions: 1
My Accepted Solutions
Title | Views | Posted
--- | --- | ---
 | 796 | 07-21-2016 01:27 AM
10-31-2017 12:38 AM
The error happens for larger files, 300 MB and up. But I have successfully uploaded files larger than 5 GB as well.
10-30-2017 02:25 PM
Yes, I am using the hostname. The IP seems to be resolved automatically from the hostname.
10-30-2017 01:42 PM
The full log (logs.txt):

```
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
    at sun.nio.ch.IOUtil.write(IOUtil.java:65)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at java.io.DataOutputStream.flush(DataOutputStream.java:123)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1590)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1525)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1438)
    at java.lang.Thread.run(Thread.java:745)
2017-10-30 19:04:52,235 INFO datanode.DataNode (BlockReceiver.java:run(1449)) - PacketResponder: BP-2139487625-10.11.12.11-1447775100056:blk_1073910780_170751, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=2:[10.11.12.11:50010, 10.11.12.12:50010]
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.write0(Native Method)
    at sun.nio.ch.SocketDispatcher.write(SocketDispatcher.java:47)
    at sun.nio.ch.IOUtil.writeFromNativeBuffer(IOUtil.java:93)
    at sun.nio.ch.IOUtil.write(IOUtil.java:65)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:471)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at java.io.DataOutputStream.flush(DataOutputStream.java:123)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1590)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1525)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1438)
    at java.lang.Thread.run(Thread.java:745)
2017-10-30 19:04:52,236 INFO datanode.DataNode (BlockReceiver.java:run(1463)) - PacketResponder: BP-2139487625-10.11.12.11-1447775100056:blk_1073910780_170751, type=HAS_DOWNSTREAM_IN_PIPELINE, downstreams=2:[10.11.12.11:50010, 10.11.12.12:50010] terminating
2017-10-30 19:04:52,242 INFO datanode.DataNode (DataXceiver.java:writeBlock(669)) - Receiving BP-2139487625-10.11.12.11-1447775100056:blk_1073910780_170751 src: /10.11.12.15:48177 dest: /10.11.12.15:50010
2017-10-30 19:04:52,243 INFO impl.FsDatasetImpl (FsDatasetImpl.java:recoverClose(1306)) - Recover failed close BP-2139487625-10.11.12.11-1447775100056:blk_1073910780_170751
2017-10-30 19:05:52,248 ERROR datanode.DataNode (DataXceiver.java:writeBlock(787)) - DataNode{data=FSDataset{dirpath='[/hadoop/hdfs/data/current, /hdfs/current]'}, localName='comu5.baidu.cn:50010', datanodeUuid='2d2bf8fa-6617-43b5-a379-5d236e6c0987', xmitsInProgress=0}:Exception transfering block BP-2139487625-10.11.12.11-1447775100056:blk_1073910780_170751 to mirror 10.11.12.11:50010: java.io.EOFException: Premature EOF: no length prefix available
2017-10-30 19:05:52,249 INFO datanode.DataNode (DataXceiver.java:writeBlock(850)) - opWriteBlock BP-2139487625-10.11.12.11-1447775100056:blk_1073910780_170751 received exception java.io.EOFException: Premature EOF: no length prefix available
2017-10-30 19:05:52,249 ERROR datanode.DataNode (DataXceiver.java:run(278)) - comu5.baidu.cn:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.11.12.15:48177 dst: /10.11.12.15:50010
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2464)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:758)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)
```
10-30-2017 09:46 AM
I am not able to load files into HDFS; I get the following error. The file size is 300 MB. Splitting the file into smaller ones mostly works, though with sporadic errors.

```
All datanodes DatanodeInfoWithStorage[10.11.12.11:50010,DS-835fbe86-c1f5-4967-80a4-1e84e7854425,DISK] are bad.
d-98a8-4c9c-9bcc-de6ada2d290c,DISK]: bad datanode DatanodeInfoWithStorage[10.11.12.14:50010,DS-e6fd4e6d-98a8-4c9c-9bcc-de6ada2d290c,DISK]
17/10/30 15:17:45 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.io.EOFException: Premature EOF: no length prefix available
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2464)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1461)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1302)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:999)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:506)
17/10/30 15:17:45 WARN hdfs.DFSClient: Error Recovery for block BP-2139487625-10.70.12.115-1447775100056:blk_1073910746_170708 in pipeline DatanodeInfoWithStorage[10.11.12.11:50010,DS-43a97303-ce81-4953-9adb-560131c8a440,DISK], DatanodeInfoWithStorage[10.11.12.12:50010,DS-78bef79e-6a05-4118-9ccd-fe10a88df453,DISK]: bad datanode DatanodeInfoWithStorage[:50010,DS-43a97303-ce81-4953-9adb-560131c8a440,DISK]
17/10/30 15:18:50 INFO hdfs.DFSClient: Exception in createBlockOutputStream
java.net.SocketTimeoutException: 65000 millis timeout while waiting for channel to be ready for read. ch : java.nio.channels.SocketChannel[connected local=/10.70.12.119:54084 remote=/10.70.12.118:50010]
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:164)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:161)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:131)
    at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:118)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at java.io.FilterInputStream.read(FilterInputStream.java:83)
    at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(PBHelper.java:2462)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1461)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1302)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:999)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:506)
put: All datanodes DatanodeInfoWithStorage[10.11.12.11:50010,DS-78bef79e-6a05-4118-9ccd-fe10a88df453,DISK] are bad. Aborting...
```
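For reference, since the log shows the client giving up after its 65-second read timeout, this is roughly how the client-side timeouts can be raised when writing. A minimal sketch only: the NameNode URI and file paths are placeholders, the config keys are standard HDFS client keys, and the 120 s values are illustrative (timeouts may not be the root cause).

```java
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class UploadWithLongerTimeouts {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Standard HDFS client keys; defaults are 60 s (read) and 8 min (write).
        conf.setInt("dfs.client.socket-timeout", 120000);
        conf.setInt("dfs.datanode.socket.write.timeout", 120000);

        // Placeholder NameNode URI and paths; replace with your own.
        try (FileSystem fs = FileSystem.get(URI.create("hdfs://namenode-host:8020"), conf)) {
            fs.copyFromLocalFile(new Path("/tmp/input-300mb.dat"),
                                 new Path("/user/me/input-300mb.dat"));
        }
    }
}
```

The same keys can be passed to the shell client via generic options, e.g. `hdfs dfs -D dfs.client.socket-timeout=120000 -put input-300mb.dat /user/me/`.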
Labels:
- Apache Hadoop
10-20-2017 03:08 PM
Thanks, I got the idea, but my implementation didn't work. I will ask a new question for that.
10-19-2017 04:46 AM
We have files of 10 MB to 100 MB arriving in a directory/FTP at intervals ranging from one minute to several hours. What would be the proper architecture to harvest this data for real-time consumption and batch analysis? I have thought of the following architectures:

- files --> Flink --> HBase (real-time query)
- files --> Flink --> HDFS

or

- files --> HDFS --> Flink --> HBase

What is the most appropriate architecture for this purpose? What tools should I use? I want to use Flink because I would like to collect metrics during transformation. A sketch of what I have in mind follows below.
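To make the files --> Flink part concrete, a minimal DataStream sketch that continuously watches a landing directory; the directory paths are placeholders, the transformation is a stand-in, and the HBase branch is only stubbed out as a comment:

```java
import org.apache.flink.api.java.io.TextInputFormat;
import org.apache.flink.core.fs.Path;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.FileProcessingMode;

public class FileHarvestJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Placeholder landing directory; poll for new files every 60 s.
        String inputDir = "hdfs:///data/incoming";
        DataStream<String> lines = env.readFile(
                new TextInputFormat(new Path(inputDir)),
                inputDir,
                FileProcessingMode.PROCESS_CONTINUOUSLY,
                60_000L);

        // Stand-in transformation; naming the operator makes its metrics easy to find.
        DataStream<String> transformed = lines.map(String::toUpperCase).name("transform");

        // Batch-analysis branch: persist transformed records back to HDFS.
        transformed.writeAsText("hdfs:///data/processed");

        // Real-time branch: an HBase sink would be added here, e.g. a custom
        // RichSinkFunction wrapping the HBase client (placeholder only).

        env.execute("file-harvest");
    }
}
```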
Labels:
- Apache HBase
09-28-2017 03:06 PM
I had done this on one of the nodes but had forgotten to perform it on all of the nodes. It works seamlessly now.
09-28-2017 04:45 AM
I upgraded from HDP 2.4 to 2.6 using an express upgrade. All services are running fine except Nimbus, which fails with the following error:

```
java.lang.NoClassDefFoundError: backtype/storm/metric/IClusterReporter
at java.lang.ClassLoader.defineClass1(Native Method) ~[?:1.8.0_40]
at java.lang.ClassLoader.defineClass(ClassLoader.java:760) ~[?:1.8.0_40]
at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:142) ~[?:1.8.0_40]
at java.net.URLClassLoader.defineClass(URLClassLoader.java:467) ~[?:1.8.0_40]
at java.net.URLClassLoader.access$100(URLClassLoader.java:73) ~[?:1.8.0_40]
at java.net.URLClassLoader$1.run(URLClassLoader.java:368) ~[?:1.8.0_40]
at java.net.URLClassLoader$1.run(URLClassLoader.java:362) ~[?:1.8.0_40]
at java.security.AccessController.doPrivileged(Native Method) ~[?:1.8.0_40]
at java.net.URLClassLoader.findClass(URLClassLoader.java:361) ~[?:1.8.0_40]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[?:1.8.0_40]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) ~[?:1.8.0_40]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:1.8.0_40]
at java.lang.Class.forName0(Native Method) ~[?:1.8.0_40]
at java.lang.Class.forName(Class.java:264) ~[?:1.8.0_40]
at org.apache.storm.metric.ClusterMetricsConsumerExecutor.prepare(ClusterMetricsConsumerExecutor.java:45) ~[storm-core-1.1.0.2.6.2.0-205.jar:1.1.0.2.6.2.0-205]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[?:1.8.0_40]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62) ~[?:1.8.0_40]
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43) ~[?:1.8.0_40]
at java.lang.reflect.Method.invoke(Method.java:497) ~[?:1.8.0_40]
at clojure.lang.Reflector.invokeMatchingMethod(Reflector.java:93) ~[clojure-1.7.0.jar:?]
at clojure.lang.Reflector.invokeNoArgInstanceMember(Reflector.java:313) ~[clojure-1.7.0.jar:?]
at org.apache.storm.daemon.nimbus$fn__9790$exec_fn__3654__auto____9791.invoke(nimbus.clj:2469) ~[storm-core-1.1.0.2.6.2.0-205.jar:1.1.0.2.6.2.0-205]
at clojure.lang.AFn.applyToHelper(AFn.java:156) ~[clojure-1.7.0.jar:?]
at clojure.lang.AFn.applyTo(AFn.java:144) ~[clojure-1.7.0.jar:?]
at clojure.core$apply.invoke(core.clj:630) ~[clojure-1.7.0.jar:?]
at org.apache.storm.daemon.nimbus$fn__9790$service_handler__9823.doInvoke(nimbus.clj:2446) ~[storm-core-1.1.0.2.6.2.0-205.jar:1.1.0.2.6.2.0-205]
at clojure.lang.RestFn.invoke(RestFn.java:421) ~[clojure-1.7.0.jar:?]
at org.apache.storm.daemon.nimbus$launch_server_BANG_.invoke(nimbus.clj:2534) ~[storm-core-1.1.0.2.6.2.0-205.jar:1.1.0.2.6.2.0-205]
at org.apache.storm.daemon.nimbus$_launch.invoke(nimbus.clj:2567) ~[storm-core-1.1.0.2.6.2.0-205.jar:1.1.0.2.6.2.0-205]
at org.apache.storm.daemon.nimbus$_main.invoke(nimbus.clj:2590) ~[storm-core-1.1.0.2.6.2.0-205.jar:1.1.0.2.6.2.0-205]
at clojure.lang.AFn.applyToHelper(AFn.java:152) ~[clojure-1.7.0.jar:?]
at clojure.lang.AFn.applyTo(AFn.java:144) ~[clojure-1.7.0.jar:?]
at org.apache.storm.daemon.nimbus.main(Unknown Source) ~[storm-core-1.1.0.2.6.2.0-205.jar:1.1.0.2.6.2.0-205]
Caused by: java.lang.ClassNotFoundException: backtype.storm.metric.IClusterReporter
at java.net.URLClassLoader.findClass(URLClassLoader.java:381) ~[?:1.8.0_40]
at java.lang.ClassLoader.loadClass(ClassLoader.java:424) ~[?:1.8.0_40]
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331) ~[?:1.8.0_40]
at java.lang.ClassLoader.loadClass(ClassLoader.java:357) ~[?:1.8.0_40]
... 33 more
```
09-27-2017 02:16 AM
An easy way to detect the duplicate value is:

```sql
SELECT component_name, service_name, host_id, cluster_id, COUNT(*)
FROM ambari.hostcomponentdesiredstate
GROUP BY component_name, service_name, host_id, cluster_id
ORDER BY count DESC;

SELECT component_name, service_name, host_id, cluster_id, COUNT(*)
FROM ambari.hostcomponentstate
GROUP BY component_name, service_name, host_id, cluster_id
ORDER BY count DESC;
```

You will find that the count in one of the tables differs from the other. Just delete the extra row by its id and you are good to go.
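If you would rather script the check and the cleanup, here is a rough JDBC sketch. It assumes the default PostgreSQL Ambari database, placeholder credentials, and that the table exposes an id column (as the delete-by-id step implies); verify all of that against your own schema first.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class AmbariDuplicateCleanup {
    public static void main(String[] args) throws Exception {
        // Placeholder URL and credentials; needs the PostgreSQL JDBC driver on the classpath.
        String url = "jdbc:postgresql://ambari-db-host:5432/ambari";
        try (Connection conn = DriverManager.getConnection(url, "ambari", "changeme")) {
            // Same grouping as the queries above, filtered straight to the duplicates.
            String dupQuery =
                "SELECT component_name, service_name, host_id, cluster_id, COUNT(*) "
              + "FROM ambari.hostcomponentstate "
              + "GROUP BY component_name, service_name, host_id, cluster_id "
              + "HAVING COUNT(*) > 1";
            try (PreparedStatement ps = conn.prepareStatement(dupQuery);
                 ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.printf("duplicate: %s / %s host=%d cluster=%d (x%d)%n",
                            rs.getString(1), rs.getString(2),
                            rs.getLong(3), rs.getLong(4), rs.getLong(5));
                }
            }
            // Delete the extra row once you have identified its id (42 is a placeholder).
            try (PreparedStatement del = conn.prepareStatement(
                    "DELETE FROM ambari.hostcomponentstate WHERE id = ?")) {
                del.setLong(1, 42L);
                del.executeUpdate();
            }
        }
    }
}
```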
01-03-2017 09:17 AM
Creating the home directory first might be required: `hdfs dfs -mkdir /user/admin/matthew`.