Member since 12-13-2015 | 7 Posts | 7 Kudos Received | 0 Solutions
05-31-2016 07:11 PM
@Sagar Shimpi Whenever I run ifconfig -a I get the following output:

eth0      Link encap:Ethernet  HWaddr 2C:59:E5:3A:AB:60
          inet addr:10.200.146.164  Bcast:10.200.146.191  Mask:255.255.255.224
          inet6 addr: fe80::2e59:e5ff:fe3a:ab60/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:133963209 errors:0 dropped:0 overruns:0 frame:0
          TX packets:122587120 errors:5654707 dropped:0 overruns:0 carrier:56547

In the TX line I see a lot of errors (errors:5654707, carrier:56547). Will that cause any problem?
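For digging into those counters, a minimal sketch, assuming the interface is eth0 and the ethtool package is installed (exact counter names vary by NIC driver):

# Link settings: speed, duplex, auto-negotiation state
ethtool eth0
# Driver-level error counters; carrier errors often point at cabling/duplex problems
ethtool -S eth0 | grep -i -E 'err|carrier|collision'
# Kernel per-interface statistics (the same counters ifconfig reports)
ip -s link show eth0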
05-31-2016 07:08 PM
@Kuldeep Kulkarni I am using physical servers. I have disabled THP on all the machines, the MTU is set to 1500 on all nodes, and one disk is configured for HDFS on each node. The disk is mounted at /hadoop, on the same device where /home is mounted. The output of iostat on all the nodes is as follows (a sketch of pulling extended statistics follows the listings):

Linux 2.6.32-573.26.1.el6.x86_64 (HadoopMaster) 06/01/2016 _x86_64_(8 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
1.94 0.00 0.35 0.00 0.00 97.72
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 3.14 14.90 84.24 34145512 193106040
Linux 2.6.32-573.26.1.el6.x86_64 (HadoopSlave1) 06/01/2016 _x86_64_(8 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.88 0.00 0.25 0.00 0.00 98.87
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 3.04 4.15 61.54 9603544 142454696
Linux 2.6.32-573.26.1.el6.x86_64 (HadoopSlave2) 06/01/2016 _x86_64_(24 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
1.14 0.00 0.19 0.00 0.00 98.67
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 5.07 0.54 361.55 1252416 836898568
sdb 0.00 0.00 0.00 3349 0
Linux 2.6.32-573.26.1.el6.x86_64 (HadoopSlave3) 06/01/2016 _x86_64_(8 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
0.91 0.00 0.19 0.06 0.00 98.83
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 1.82 0.57 53.04 1319364 122732448
Linux 2.6.32-573.26.1.el6.x86_64 (HadoopSlave4) 06/01/2016 _x86_64_(2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
3.31 0.00 0.84 0.00 0.00 95.85
Device: tps Blk_read/s Blk_wrtn/s Blk_read Blk_wrtn
sda 1.43 0.57 39.10 1308076 89922904
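Note that plain iostat numbers like the above are averages since boot; interval-based extended statistics give a better picture of the current disk load. A minimal sketch, assuming the sysstat package is installed:

# Take 5-second samples, 3 iterations; watch await (latency in ms) and %util (saturation)
iostat -x 5 3
# Optionally, show only processes actually doing I/O (if iotop is installed)
# iotop -o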
05-31-2016 06:58 PM
@nmaheshwari I am currently using HDP 2.4. Yes, all my Hive queries do get executed and all the files are copied, but they take a lot of time. Even a simple "select count(1)" on a very small table takes a lot of time.
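To separate raw HDFS write speed from Hive overhead, one rough test is to time a put of a file of known size. A sketch; the file paths are illustrative:

# Create a 100 MB test file
dd if=/dev/zero of=/tmp/test100m bs=1M count=100
# Time the raw HDFS write; 100 MB should normally take seconds, not minutes
time hadoop fs -put /tmp/test100m /tmp/test100m
hadoop fs -rm /tmp/test100m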
05-31-2016 06:56 PM
@Sagar Shimpi The duplex mode was half on all the machines, so I switched it back to full. The MTU was set to 1500 and I tried changing it to 9000, but the moment I changed the MTU to 9000 I got a bad-datanode exception every time I copied data.
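The bad-datanode exceptions after raising the MTU are consistent with jumbo frames not being supported end to end: every NIC and switch port in the path must accept them. One way to verify before changing the MTU again, as a sketch (host name illustrative):

# 8972 = 9000-byte MTU minus 20 (IP header) and 8 (ICMP header); -M do forbids fragmentation
ping -M do -s 8972 -c 3 HadoopSlave1
# "Frag needed" replies or timeouts mean some hop does not pass jumbo frames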
05-24-2016 10:11 AM
1 Kudo
@Kuldeep Kulkarni I have observed that copying data to HDFS takes a lot of time; I was trying to copy a 100 MB file. I never experienced this issue earlier. I found the following entries in my datanode log:

76 INFO datanode.DataNode (DataXceiver.java:writeBlock(658)) - Receiving BP-1475253775-10.200.146.164-1463754036445:blk_1073742241_1417 src: /10.200.146.165:51570 dest: /10.200.146.165:50010
2016-05-24 15:33:03,397 WARN datanode.DataNode (BlockReceiver.java:receivePacket(563)) - Slow BlockReceiver write packet to mirror took 488ms (threshold=300ms)
2016-05-24 15:33:05,175 WARN datanode.DataNode (BlockReceiver.java:receivePacket(563)) - Slow BlockReceiver write packet to mirror took 327ms (threshold=300ms)
2016-05-24 15:33:07,961 WARN datanode.DataNode (BlockReceiver.java:receivePacket(563)) - Slow BlockReceiver write packet to mirror took 334ms (threshold=300ms)
2016-05-24 15:33:11,061 WARN datanode.DataNode (BlockReceiver.java:receivePacket(563)) - Slow BlockReceiver write packet to mirror took 426ms (threshold=300ms)
2016-05-24 15:33:17,277 WARN datanode.DataNode (BlockReceiver.java:receivePacket(563)) - Slow BlockReceiver write packet to mirror took 336ms (threshold=300ms)
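Since "write packet to mirror" covers both the network hop and the remote disk, it can help to test the local disk on each datanode directly. A rough sketch, assuming the HDFS data directory lives under /hadoop as described above (the test file name is illustrative):

# Sequential write bypassing the page cache; a healthy SATA disk should sustain roughly 100 MB/s
dd if=/dev/zero of=/hadoop/ddtest.tmp bs=1M count=512 oflag=direct
rm -f /hadoop/ddtest.tmp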
Labels: Apache Hadoop
05-23-2016 06:08 AM
3 Kudos
Thank you @Kuldeep Kulkarni. As you said, the time on the cluster nodes was out of sync; synchronizing the clocks did the trick. The Hortonworks community is far, far better than Cloudera's: you guys respond quickly whenever I am stuck on a serious issue.
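For anyone hitting the same symptom, a quick way to confirm the clocks agree across the cluster, as a sketch (assumes passwordless ssh to each node; host names taken from the iostat listings above):

for h in HadoopMaster HadoopSlave1 HadoopSlave2 HadoopSlave3 HadoopSlave4; do
  echo "== $h =="
  ssh "$h" 'date; ntpq -p'   # peer offsets should stay well under a second
done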
05-22-2016 02:20 PM
3 Kudos
Whenever I try to copy data to HDFS I get the following exception; sometimes the data is copied and sometimes it isn't.

16/05/23 01:40:26 INFO hdfs.DFSClient: Exception in createBlockOutputStream
org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: Got access token error, status message , ack with firstBadLink as 10.200.146.167:50010
    at org.apache.hadoop.hdfs.protocol.datatransfer.DataTransferProtoUtil.checkBlockOpStatus(DataTransferProtoUtil.java:134)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.createBlockOutputStream(DFSOutputStream.java:1393)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1295)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:463)
16/05/23 01:40:26 INFO hdfs.DFSClient: Abandoning BP-1475253775-10.200.146.164-1463754036445:blk_1073742227_1403
16/05/23 01:40:26 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[10.200.146.167:50010,DS-3d4a4a18-98eb-40b0-acb2-f1e454a67ee7,DISK]
16/05/23 01:40:26 INFO hdfs.DFSClient: Exception in createBlockOutputStream
org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: Got access token error, status message , ack with firstBadLink as 10.200.146.172:50010
    (identical stack trace as above)
16/05/23 01:40:26 INFO hdfs.DFSClient: Abandoning BP-1475253775-10.200.146.164-1463754036445:blk_1073742228_1404
16/05/23 01:40:26 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[10.200.146.172:50010,DS-1b41638c-ddff-4409-9ca0-f8b4ecbb46d6,DISK]
16/05/23 01:40:26 INFO hdfs.DFSClient: Exception in createBlockOutputStream
org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: Got access token error, status message , ack with firstBadLink as 10.200.146.168:50010
    (identical stack trace as above)
16/05/23 01:40:26 INFO hdfs.DFSClient: Abandoning BP-1475253775-10.200.146.164-1463754036445:blk_1073742229_1405
16/05/23 01:40:26 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[10.200.146.168:50010,DS-89f60613-85eb-4ec8-a571-f6dee904bc57,DISK]
16/05/23 01:40:26 INFO hdfs.DFSClient: Exception in createBlockOutputStream
org.apache.hadoop.hdfs.security.token.block.InvalidBlockTokenException: Got access token error, status message , ack with firstBadLink as 10.200.146.166:50010
    (identical stack trace as above)
16/05/23 01:40:26 INFO hdfs.DFSClient: Abandoning BP-1475253775-10.200.146.164-1463754036445:blk_1073742230_1406
16/05/23 01:40:26 INFO hdfs.DFSClient: Excluding datanode DatanodeInfoWithStorage[10.200.146.166:50010,DS-e19737ba-1f63-444a-b22b-1210c75c6ad5,DISK]
16/05/23 01:40:26 WARN hdfs.DFSClient: DataStreamer Exception
java.io.IOException: Unable to create new block.
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1308)
    at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:463)
16/05/23 01:40:26 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/root/postgres-xl-9.5r1.tar.bz2._COPYING_" - Aborting...
put: Got access token error, status message , ack with firstBadLink as 10.200.146.166:50010

The individual datanode logs are as follows.

datanode2:

2016-05-23 00:31:17,766 WARN datanode.DataNode (DataXceiver.java:checkAccess(1311)) - Block token verification failed: op=WRITE_BLOCK, remoteAddress=/10.200.146.173:40315, message=Block token with block_token_identifier (expiryDate=1463939895457, keyId=503794258, userId=root, blockPoolId=BP-1475253775-10.200.146.164-1463754036445, blockId=1073742225, access modes=[WRITE]) is expired.
2016-05-23 00:31:17,766 ERROR datanode.DataNode (DataXceiver.java:run(278)) - HadoopSlave9:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.200.146.173:40315 dst: /10.200.146.173:50010
org.apache.hadoop.security.token.SecretManager$InvalidToken: Block token with block_token_identifier (expiryDate=1463939895457, keyId=503794258, userId=root, blockPoolId=BP-1475253775-10.200.146.164-1463754036445, blockId=1073742225, access modes=[WRITE]) is expired.
    at org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:280)
    at org.apache.hadoop.hdfs.security.token.block.BlockTokenSecretManager.checkAccess(BlockTokenSecretManager.java:301)
    at org.apache.hadoop.hdfs.security.token.block.BlockPoolTokenSecretManager.checkAccess(BlockPoolTokenSecretManager.java:97)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.checkAccess(DataXceiver.java:1296)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:629)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)

datanode1:

2016-05-22 13:58:42,364 WARN datanode.DataNode (BlockReceiver.java:run(1389)) - IOException in BlockReceiver.run():
java.nio.channels.ClosedByInterruptException
    at java.nio.channels.spi.AbstractInterruptibleChannel.end(AbstractInterruptibleChannel.java:202)
    at sun.nio.ch.SocketChannelImpl.write(SocketChannelImpl.java:478)
    at org.apache.hadoop.net.SocketOutputStream$Writer.performIO(SocketOutputStream.java:63)
    at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:142)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:159)
    at org.apache.hadoop.net.SocketOutputStream.write(SocketOutputStream.java:117)
    at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:82)
    at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:140)
    at java.io.DataOutputStream.flush(DataOutputStream.java:123)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstreamUnprotected(BlockReceiver.java:1531)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.sendAckUpstream(BlockReceiver.java:1468)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1381)
    at java.lang.Thread.run(Thread.java:745)
2016-05-22 13:58:42,364 INFO datanode.DataNode (BlockReceiver.java:run(1392)) - PacketResponder: BP-1475253775-10.200.146.164-1463754036445:blk_1073742234_1410, type=HAS_DOWNSTREAM_IN_PIPELINE
java.nio.channels.ClosedByInterruptException
    (identical stack trace as above)
2016-05-22 13:58:42,364 INFO datanode.DataNode (BlockReceiver.java:run(1406)) - PacketResponder: BP-1475253775-10.200.146.164-1463754036445:blk_1073742234_1410, type=HAS_DOWNSTREAM_IN_PIPELINE terminating
2016-05-22 13:58:42,365 INFO datanode.DataNode (DataXceiver.java:writeBlock(838)) - opWriteBlock BP-1475253775-10.200.146.164-1463754036445:blk_1073742234_1410 received exception java.io.IOException: Premature EOF from inputStream
2016-05-22 13:58:42,365 ERROR datanode.DataNode (DataXceiver.java:run(278)) - HadoopMaster:50010:DataXceiver error processing WRITE_BLOCK operation src: /10.200.146.164:55515 dst: /10.200.146.164:50010
java.io.IOException: Premature EOF from inputStream
    at org.apache.hadoop.io.IOUtils.readFully(IOUtils.java:201)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doReadFully(PacketReceiver.java:213)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.doRead(PacketReceiver.java:134)
    at org.apache.hadoop.hdfs.protocol.datatransfer.PacketReceiver.receiveNextPacket(PacketReceiver.java:109)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receivePacket(BlockReceiver.java:502)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:896)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:805)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opWriteBlock(Receiver.java:137)
    at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:74)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:251)
    at java.lang.Thread.run(Thread.java:745)
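One detail worth checking in the datanode log above: the token's expiryDate (1463939895457, epoch milliseconds) can be converted to a human-readable time and compared against the log timestamps; a large disagreement points at clock skew between nodes. A minimal check with GNU date:

# Drop the last three digits to go from milliseconds to seconds
date -d @1463939895 '+%Y-%m-%d %H:%M:%S'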
Labels: Apache Hadoop