Support Questions
Find answers, ask questions, and share your expertise

Flume error:could only be replicated to 0 nodes, instead of 1


I get the error below when I start flume-ng:




2014-08-04 17:54:50,417 WARN hdfs.HDFSEventSink: HDFS IO error
org.apache.hadoop.ipc.RemoteException: File /logs/prod/jboss/2014/08/04/ could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(
        at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(
        at java.lang.reflect.Method.invoke(
        at org.apache.hadoop.ipc.RPC$
        at org.apache.hadoop.ipc.Server$Handler$
        at org.apache.hadoop.ipc.Server$Handler$
        at Method)
        at org.apache.hadoop.ipc.Server$



I made the changes below, following the suggestions given in the Google Groups thread (!topic/cdh-user/4saUW5MW53M):



I tried setting Flume-NG's HDFS sink parameter maxOpenFiles to 10.
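For reference, the maxOpenFiles change described above would look roughly like this in the Flume agent's properties file. The agent and sink names (a1, k1) and the NameNode host are hypothetical; the HDFS path is derived from the path in the error above, with date escape sequences:

```properties
# Hypothetical agent/sink names; hdfs.maxOpenFiles limits concurrently open bucket files.
a1.sinks.k1.type = hdfs
a1.sinks.k1.hdfs.path = hdfs://namenode:8020/logs/prod/jboss/%Y/%m/%d/
a1.sinks.k1.hdfs.maxOpenFiles = 10
```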




The forum thread above gives this suggestion:

"Reducing the block size from 64 MB to 5 MB fixed this problem."
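If I understand correctly, that suggestion amounts to changing the block size in hdfs-site.xml, something like the sketch below (the value shown here is my current setting, not the suggested 5 MB; older Hadoop/CDH releases call this property dfs.block.size instead of dfs.blocksize):

```xml
<!-- hdfs-site.xml fragment -->
<property>
  <name>dfs.blocksize</name>
  <!-- 134217728 bytes = 128 MB (current value) -->
  <value>134217728</value>
</property>
```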




Could you please suggest how I can fix this problem?

My current block size is 134217728.

Will it really work if I reduce my block size (e.g., to 5 MB as suggested)?
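For clarity, the current block size value quoted above works out to the stock HDFS default. A quick conversion:

```python
# Convert the reported dfs.blocksize value from bytes to megabytes.
block_size_bytes = 134217728  # current setting quoted above
block_size_mb = block_size_bytes // (1024 * 1024)
print(block_size_mb)  # 128 -> the standard HDFS default of 128 MB
```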






