Not able to place enough replicas, still in need of 1
Labels: HDFS
Created 08-21-2014 05:35 AM
Does anyone know exactly what the error below means?
2014-08-21 17:37:11,578 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1
2014-08-21 17:37:11,578 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:root cause:java.io.IOException: File /logs/prod/apache/2014/08/19/web07.prod.hs18.lan.1408622705069.tmp could only be replicated to 0 nodes, instead of 1
Presently I have the replication factor set to 1.
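For reference, this is set through the dfs.replication property in hdfs-site.xml. It can also be checked from the command line, along these lines (a sketch using the standard HDFS CLI; hdfs getconf is only available on newer releases):

```bash
# Print the configured default replication factor (newer releases)
hdfs getconf -confKey dfs.replication

# List files with their replication factor (shown in the second column);
# the directory is the one from the error log below
hadoop fs -ls /logs/prod/apache/2014/08/19/
```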
Here is the complete error from the log file:
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1637)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:757)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
2014-08-21 17:42:11,106 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1
2014-08-21 17:42:11,106 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:root cause:java.io.IOException: File /logs/prod/apache/2014/08/19/web07.prod.hs18.lan.1408623004579.tmp could only be replicated to 0 nodes, instead of 1
2014-08-21 17:42:11,106 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000, call addBlock(/logs/prod/apache/2014/08/19/web07.prod.hs18.lan.1408623004579.tmp, DFSClient_NONMAPREDUCE_2060617957_26, [Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@26ac92f0) from 172.16.10.25:58118: error: java.io.IOException: File /logs/prod/apache/2014/08/19/web07.prod.hs18.lan.1408623004579.tmp could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /logs/prod/apache/2014/08/19/web07.prod.hs18.lan.1408623004579.tmp could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1637)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:757)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
- Thank you
Created 08-24-2014 09:57 PM
How large is your cluster and what are you doing that triggers this message? Does this happen consistently?
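If it helps, the dfsadmin report is a quick way to get that: it prints overall capacity plus the state and free space of every DataNode (standard HDFS command):

```bash
# Summarise cluster capacity and list each DataNode with its status
hadoop dfsadmin -report
```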
Gautam Gopalakrishnan
Created 08-24-2014 10:24 PM
Below is my cluster configuration:
Cluster Summary
208423 files and directories, 200715 blocks = 409138 total. Heap Size is 369.31 MB / 2.67 GB (13%)
Configured Capacity: 3.7 TB
DFS Used: 1.93 TB
Non DFS Used: 309.38 GB
DFS Remaining: 1.47 TB
DFS Used%: 52.19%
DFS Remaining%: 39.64%
Live Nodes: 3
Dead Nodes: 0
Decommissioning Nodes: 1
Number of Under-Replicated Blocks: 11025
It happens consistently when I run my Flume agent to pull the logs into HDFS.
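The under-replicated count above can also be cross-checked with fsck, which prints a block-health summary for the namespace (standard Hadoop tool; the tail just trims the long per-file output):

```bash
# Report overall block health; the summary at the end includes
# under-replicated and corrupt block counts
hadoop fsck / | tail -n 25
```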
Thank you
Created 08-28-2014 09:22 AM
The directory your Flume sink writes to in HDFS needs to be writable by the Flume agent's user; give it 777 permissions so the agent can write the logs from the web server.
Created 08-29-2014 02:09 AM
Thanks for the reply.
My problem is finally fixed. As you said, the directory the Flume sink writes to must have 777 permissions so the data can be written to HDFS from the web server.
Now my Flume NG agent is running without any issues.
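For anyone who lands here with the same error, the fix boiled down to something like this (a sketch: /logs/prod/apache is the sink path from my error log, and it assumes a user with HDFS superuser rights):

```bash
# Open up the Flume sink's target directory so the agent can write to it
# (777, as recommended in the solution above)
hadoop fs -chmod -R 777 /logs/prod/apache

# Verify the permission change
hadoop fs -ls /logs/prod
```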
- Thank you
