Member since: 12-11-2015
Posts: 79
Kudos Received: 26
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 4300 | 04-05-2016 12:14 PM |
| | 2572 | 12-14-2015 04:44 PM |
01-18-2016
12:18 PM
a) Datanodes are up and running. b) I will check the tcpdump and send the output.
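For step (b), a minimal capture sketch for the READ_BLOCK timeouts in this thread (hypothetical file names; 50010 is the default DataNode data-transfer port, matching the dst address in the error). The command is echoed rather than executed here because it needs root on the affected DataNode:

```shell
# Hypothetical tcpdump invocation for the DataXceiver READ_BLOCK timeouts.
# 50010 is the default DataNode data-transfer port; run as root on the node.
port=50010
cmd="tcpdump -i any -s 0 -w datanode-${port}.pcap tcp port ${port}"
echo "$cmd"
# Afterwards, inspect retransmissions/resets with:
#   tcpdump -nn -tttt -r datanode-50010.pcap
```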
01-14-2016
05:31 AM
1 Kudo
Frequently getting the error below on a datanode:

ERROR datanode.DataNode (DataXceiver.java:run(250)) - X.X.X.X6:50010:DataXceiver error processing READ_BLOCK operation src: /x.x.x.7:49636 dst: /x.x.x.6:50010
java.net.SocketTimeoutException: 480000 millis timeout while waiting for channel to be ready for write. ch : java.nio.channels.SocketChannel[connected local=/x.x.x.6:50010 remote=/x.x.x.7:49636]
at org.apache.hadoop.net.SocketIOWithTimeout.waitForIO(SocketIOWithTimeout.java:246)
at org.apache.hadoop.net.SocketOutputStream.waitForWritable(SocketOutputStream.java:172)
at org.apache.hadoop.net.SocketOutputStream.transferToFully(SocketOutputStream.java:220)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendPacket(BlockSender.java:547)
at org.apache.hadoop.hdfs.server.datanode.BlockSender.sendBlock(BlockSender.java:716)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.readBlock(DataXceiver.java:506)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.opReadBlock(Receiver.java:110)
at org.apache.hadoop.hdfs.protocol.datatransfer.Receiver.processOp(Receiver.java:68)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:232)
at java.lang.Thread.run(Thread.java:745)
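The 480000 ms in the exception is the default `dfs.datanode.socket.write.timeout` (8 minutes): the DataNode waited that long for the client at x.x.x.7 to drain the socket before giving up. A hedged hdfs-site.xml sketch with an illustrative value only; raising the timeout masks a slow or stalled reader rather than fixing it, so the client and network should be checked first:

```xml
<!-- hdfs-site.xml: illustrative value only. 480000 ms is the default
     dfs.datanode.socket.write.timeout seen in the stack trace above. -->
<property>
  <name>dfs.datanode.socket.write.timeout</name>
  <value>960000</value> <!-- 16 min; doubles the 480000 ms default -->
</property>
```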
Labels:
- Apache Hadoop
12-17-2015
08:00 AM
fsck output:

Status: HEALTHY
Total size: 15311550475135 B (Total open files size: 36 B)
Total dirs: 543341
Total files: 1572526
Total symlinks: 0 (Files currently being written: 5)
Total blocks (validated): 1635306 (avg. block size 9363110 B) (Total open file blocks (not validated): 4)
Minimally replicated blocks: 1635306 (99.99999 %)
Over-replicated blocks: 0 (0.0 %)
Under-replicated blocks: 43091 (2.635042 %)
Mis-replicated blocks: 0 (0.0 %)
Default replication factor: 3
Average block replication: 2.9589655
Corrupt blocks: 0
Missing replicas: 67060 (1.3669327 %)
Number of data-nodes: 3
Number of racks: 1
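As a sanity check, the under-replicated percentage in the summary above can be recomputed from the raw counts (numbers copied from the fsck output; only awk is assumed):

```shell
# Recompute the under-replicated percentage from the fsck counts above.
total_blocks=1635306
under_replicated=43091
awk -v t="$total_blocks" -v u="$under_replicated" \
    'BEGIN { printf "Under-replicated: %.6f %%\n", u / t * 100 }'
```

Note that with only 3 datanodes and a default replication factor of 3, HDFS has no spare node to place a replacement replica on when one copy is unavailable, which would explain a persistent under-replicated count.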
12-17-2015
07:52 AM
DFSadmin report:

Configured Capacity: 107909857935360 (98.14 TB)
Present Capacity: 107907646763463 (98.14 TB)
DFS Remaining: 63959036886116 (58.17 TB)
DFS Used: 43948609877347 (39.97 TB)
DFS Used%: 40.73%
Under replicated blocks: 35016
Blocks with corrupt replicas: 113
Missing blocks: 0
Datanodes available: 3 (3 total, 0 dead)

Live datanodes:

Name: x.x.x.x4
Hostname: x.x.x.x4
Decommission Status : Normal
Configured Capacity: 35969952645120 (32.71 TB)
DFS Used: 15310914557283 (13.93 TB)
Non DFS Used: 821825081 (783.75 MB)
DFS Remaining: 20658216262756 (18.79 TB)
DFS Used%: 42.57%
DFS Remaining%: 57.43%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 14
Last contact: Wed Dec 16 20:17:25 AEDT 2015

Name: x.x.x.x6
Hostname: x.x.x.x6
Decommission Status : Normal
Configured Capacity: 35969952645120 (32.71 TB)
DFS Used: 14348334051328 (13.05 TB)
Non DFS Used: 497512448 (474.46 MB)
DFS Remaining: 21621121081344 (19.66 TB)
DFS Used%: 39.89%
DFS Remaining%: 60.11%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1025
Last contact: Wed Dec 16 20:17:23 AEDT 2015

Name: x.x.x.x5
Hostname: x.x.x.x5
Decommission Status : Normal
Configured Capacity: 35969952645120 (32.71 TB)
DFS Used: 14289361268736 (13.00 TB)
Non DFS Used: 891834368 (850.52 MB)
DFS Remaining: 21679699542016 (19.72 TB)
DFS Used%: 39.73%
DFS Remaining%: 60.27%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1025
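The utilization figure can be cross-checked from the raw byte counts in the report (values copied from above; only awk is assumed):

```shell
# Recompute DFS Used% from the byte counts in the dfsadmin report above.
configured=107909857935360
used=43948609877347
awk -v c="$configured" -v u="$used" \
    'BEGIN { printf "DFS Used%%: %.2f%%\n", u / c * 100 }'
# prints: DFS Used%: 40.73%
```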
12-16-2015
09:42 AM
1 Kudo
Ambari UI shows corrupt blocks, but the fsck output shows no corrupt blocks and reports the filesystem under / as healthy. Also, when I run the dfsadmin report, 'Blocks with corrupt replicas' shows the same number Ambari is showing.
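The mismatch is likely just different definitions: fsck counts a block as corrupt only when every replica of it is corrupt, while dfsadmin's 'Blocks with corrupt replicas' (the figure Ambari surfaces) counts blocks with at least one bad replica. A small sketch that pulls that line out of a saved report, with the figures copied from the dfsadmin output in this thread:

```shell
# Save the report text and grep the corrupt-replica line (the Ambari figure).
# On a live cluster the report text would come from: hdfs dfsadmin -report
report=$(mktemp)
cat > "$report" <<'EOF'
Under replicated blocks: 35016
Blocks with corrupt replicas: 113
Missing blocks: 0
EOF
grep -i 'corrupt' "$report"
rm -f "$report"
# prints: Blocks with corrupt replicas: 113
```

As long as `hdfs fsck / -list-corruptfileblocks` comes back empty, every affected block still has at least one good replica and HDFS can re-replicate from it.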
Labels:
- Apache Ambari
- Apache Hadoop
12-14-2015
02:47 PM
                     total   used   free   shared   buffers   cached
Mem:                   251      2    249        0         0        0
-/+ buffers/cache:               1    250
Swap:                    7      0      7
12-14-2015
02:31 PM
Ambari agent hung and gives the error '/usr/sbin ambari-agent : fork : cannot allocate memory'.
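Given that the free output above shows roughly 249 GB free, the fork failure more likely reflects a process/thread limit or strict overcommit than exhausted RAM. A hedged diagnostic sketch (Linux /proc paths assumed):

```shell
# "fork: cannot allocate memory" despite free RAM: check limits first.
ulimit -u                                       # max user processes; fork fails when hit
cat /proc/sys/kernel/pid_max 2>/dev/null        # kernel-wide PID ceiling
cat /proc/sys/vm/overcommit_memory 2>/dev/null  # 2 = strict accounting; can fail fork
ps --no-headers -e | wc -l                      # processes currently running
```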
Labels:
- Apache Ambari