
Not able to place enough replicas, still in need of 1


Does anyone know exactly what the error below means?


2014-08-21 17:37:11,578 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1
2014-08-21 17:37:11,578 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:root cause:java.io.IOException: File /logs/prod/apache/2014/08/19/web07.prod.hs18.lan.1408622705069.tmp could only be replicated to 0 nodes, instead of 1


Presently I have the replication factor set to 1.
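For reference, this is how the replication settings can be checked from the shell (a sketch assuming the Hadoop 1.x CLI that matches the stack traces below; <path> stands in for an actual HDFS file):

# Default replication for new files comes from dfs.replication in hdfs-site.xml.
# Print the replication factor of an existing file:
hadoop fs -stat %r <path>
# Change the replication factor of an existing file; -w waits until it takes effect:
hadoop fs -setrep -w 1 <path>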


Here is the relevant portion of the error log:


at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1637)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:757)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)
2014-08-21 17:42:11,106 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Not able to place enough replicas, still in need of 1
2014-08-21 17:42:11,106 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:root cause:java.io.IOException: File /logs/prod/apache/2014/08/19/web07.prod.hs18.lan.1408623004579.tmp could only be replicated to 0 nodes, instead of 1
2014-08-21 17:42:11,106 INFO org.apache.hadoop.ipc.Server: IPC Server handler 0 on 9000, call addBlock(/logs/prod/apache/2014/08/19/web07.prod.hs18.lan.1408623004579.tmp, DFSClient_NONMAPREDUCE_2060617957_26, [Lorg.apache.hadoop.hdfs.protocol.DatanodeInfo;@26ac92f0) from 172.16.10.25:58118: error: java.io.IOException: File /logs/prod/apache/2014/08/19/web07.prod.hs18.lan.1408623004579.tmp could only be replicated to 0 nodes, instead of 1
java.io.IOException: File /logs/prod/apache/2014/08/19/web07.prod.hs18.lan.1408623004579.tmp could only be replicated to 0 nodes, instead of 1
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1637)
at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:757)
at sun.reflect.GeneratedMethodAccessor19.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:396)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)


-Thank you


1 ACCEPTED SOLUTION

Explorer
This looks like a permissions problem. Check who has permission to create files under the directory /logs/prod/apache/2014/08/19/. If the owner is not root:root, or something similar that allows root to write to it, then you may need to grant permissions.
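A quick way to check this, sketched with the standard HDFS shell commands (the right owner and mode depend on which user the client runs as):

# List the parent directory so the owner and permissions of 19/ itself are shown:
hadoop fs -ls /logs/prod/apache/2014/08/
# If the writing user lacks access, adjust ownership or permissions, for example:
hadoop fs -chown root:root /logs/prod/apache/2014/08/19/
hadoop fs -chmod 775 /logs/prod/apache/2014/08/19/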


4 REPLIES


How large is your cluster and what are you doing that triggers this message? Does this happen consistently?

Regards,
Gautam Gopalakrishnan


Below is my cluster configuration:


Cluster Summary

208423 files and directories, 200715 blocks = 409138 total. Heap Size is 369.31 MB / 2.67 GB (13%) 

Configured Capacity: 3.7 TB
DFS Used: 1.93 TB
Non DFS Used: 309.38 GB
DFS Remaining: 1.47 TB
DFS Used%: 52.19%
DFS Remaining%: 39.64%
Live Nodes: 3
Dead Nodes: 0
Decommissioning Nodes: 1
Number of Under-Replicated Blocks: 11025
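For reference, the summary above comes from the NameNode web UI; roughly the same numbers can be pulled from the shell (assuming the Hadoop 1.x CLI seen in the stack traces):

# Per-datanode capacity, usage, and liveness:
hadoop dfsadmin -report
# Filesystem health check, including under-replicated and missing blocks:
hadoop fsck /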


It happens consistently when I run my Flume agent to pull the logs into HDFS.
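For context, the agent writes through an HDFS sink along these lines (a sketch with hypothetical agent, source, and channel names; only the hdfs.path layout matches the paths in the errors above):

# Hypothetical Flume NG agent named "agent1"; the source command and channel type are illustrative.
agent1.sources = apacheSrc
agent1.channels = memCh
agent1.sinks = hdfsSink

agent1.sources.apacheSrc.type = exec
agent1.sources.apacheSrc.command = tail -F /var/log/httpd/access_log
agent1.sources.apacheSrc.channels = memCh

agent1.channels.memCh.type = memory

agent1.sinks.hdfsSink.type = hdfs
# The date escapes produce the /YYYY/MM/DD layout seen in the error messages;
# they need a timestamp, supplied here via hdfs.useLocalTimeStamp:
agent1.sinks.hdfsSink.hdfs.path = hdfs://namenode:9000/logs/prod/apache/%Y/%m/%d
agent1.sinks.hdfsSink.hdfs.useLocalTimeStamp = true
agent1.sinks.hdfsSink.channel = memCh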


Thank you


avatar

Thanks for the reply.


My problem is finally fixed. As you said, the Flume sink must have permission to write the data to HDFS from the web server; I granted 777 on the target directory and the writes now succeed.
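For anyone who hits the same thing, this is roughly what that fix looks like from the shell (777 is the broad mode I used; a tighter alternative would be to chown the tree to whatever user the Flume agent runs as, e.g. a dedicated flume user):

# Broad fix: allow any user to write under the log tree:
hadoop fs -chmod -R 777 /logs/prod/apache
# Tighter alternative, assuming the agent runs as a "flume" user:
hadoop fs -chown -R flume:flume /logs/prod/apache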


Now my Flume NG agent is working without any issues.


-Thank you