
hdfs block size reducing


Hi,

When I start flume-ng, I get the error below:
at java.lang.Thread.run(Thread.java:662)
2014-08-04 17:54:50,417 WARN hdfs.HDFSEventSink: HDFS IO error
org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /logs/prod/jboss/2014/08/04/web07.prod.hs18.lan.1407154543459.tmp could only be replicated to 0 nodes, instead of 1
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1637)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:757)
        at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
        at java.lang.reflect.Method.invoke(Method.java:597)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:578)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1393)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1389)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:396)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1136)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1387)

 

 

I made the changes below, as per the suggestions given in this Google user group thread:

https://groups.google.com/a/cloudera.org/forum/#!topic/cdh-user/4saUW5MW53M

 

Step 1:

I tried setting the Flume-NG HDFS sink parameter maxOpenFiles to 10 (a sketch of the relevant sink properties follows these steps).

Step 2:

If you check the forum thread above, they also gave the suggestion below:

"Reducing the block size from 64 MB to 5 MB fixed this problem"
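For reference, here is a minimal sketch of what such an HDFS sink configuration could look like in the Flume agent's properties file. The agent and sink names (agent1, hdfs-sink) and every value other than maxOpenFiles are assumptions for illustration only, not taken from this post:

# hypothetical agent/sink names; hdfs.maxOpenFiles is the parameter discussed in Step 1
agent1.sinks.hdfs-sink.type = hdfs
agent1.sinks.hdfs-sink.hdfs.path = /logs/prod/jboss/%Y/%m/%d
agent1.sinks.hdfs-sink.hdfs.useLocalTimeStamp = true
agent1.sinks.hdfs-sink.hdfs.maxOpenFiles = 10

Note that maxOpenFiles only limits how many HDFS files the sink keeps open at once; it does not by itself address a "replicated to 0 nodes" error, which originates on the NameNode/DataNode side.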

 

 

 

Could you please suggest how I can fix this problem?

My current block size is 134217728 (128 MB).

Does it really work if I reduce my block size to 5 MB?
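For context, the block size used for new files comes from the client-side HDFS configuration. A minimal sketch of the relevant hdfs-site.xml property is below; dfs.block.size is the older Hadoop 1.x / CDH3-era property name (dfs.blocksize in newer releases), and the 5 MB figure is only what the linked thread mentions, not a recommendation:

<!-- hdfs-site.xml on the client writing the files (e.g. the Flume host) -->
<property>
  <name>dfs.block.size</name>
  <!-- 134217728 bytes = 128 MB (current); 5 MB would be 5242880 bytes -->
  <value>134217728</value>
</property>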

 

 

-Thank you

1 ACCEPTED SOLUTION


I fixed the problem.

I found out that it is not a Flume issue but purely an HDFS issue, so I did the steps below (a rough sketch of the commands follows these steps).

Step 1:

Stopped all the services.

Step 2:

Started the NameNode. Then, when I tried to start the DataNodes on the 3 servers, one of the servers threw these error messages:

/var/log/  -- No such file/directory
/var/run   -- No such file/directory

But those directories do exist, so I checked their permissions and found that they differed from the second server to the third server.

So I set the permissions on those directories to be in sync, then started all the services, and Flume is working fine.

That's it.
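A rough sketch of the kind of commands involved, assuming a CDH package install with init scripts; the directory names, ownership, mode, and service name are assumptions, so compare against a healthy DataNode host for the correct values:

# compare permissions/ownership with a working DataNode host
ls -ld /var/log /var/run /var/log/hadoop-hdfs /var/run/hadoop-hdfs

# bring the HDFS-specific directories in line with the working hosts (example values)
chown -R hdfs:hadoop /var/log/hadoop-hdfs /var/run/hadoop-hdfs
chmod 755 /var/log/hadoop-hdfs /var/run/hadoop-hdfs

# then restart the DataNode (service name varies by release,
# e.g. hadoop-0.20-datanode on CDH3, hadoop-hdfs-datanode on CDH4)
service hadoop-hdfs-datanode restart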

 

 

-Thank you


4 REPLIES

Mentor
Your DNs do not have adequate space to store files. Can you please
post your output of "sudo -u hdfs hdfs dfsadmin -report"?


Here is the report:

Configured Capacity: 4066320277504 (3.7 TB)
Present Capacity: 3690038079488 (3.36 TB)
DFS Remaining: 1233865269248 (1.12 TB)
DFS Used: 2456172810240 (2.23 TB)
DFS Used%: 66.56%
Under replicated blocks: 66
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)

Name: 172.16.0.97:50010
Decommission Status : Normal
Configured Capacity: 1159672115200 (1.05 TB)
DFS Used: 807840399360 (752.36 GB)
Non DFS Used: 177503387648 (165.31 GB)
DFS Remaining: 174328328192(162.36 GB)
DFS Used%: 69.66%
DFS Remaining%: 15.03%
Last contact: Mon Aug 11 12:05:22 IST 2014


Name: 172.16.0.106:50010
Decommission Status : Normal
Configured Capacity: 1749056290816 (1.59 TB)
DFS Used: 833225805824 (776 GB)
Non DFS Used: 82293673984 (76.64 GB)
DFS Remaining: 833536811008(776.29 GB)
DFS Used%: 47.64%
DFS Remaining%: 47.66%
Last contact: Mon Aug 11 12:05:21 IST 2014


Name: 172.16.0.63:50010
Decommission Status : Normal
Configured Capacity: 1157591871488 (1.05 TB)
DFS Used: 815106605056 (759.13 GB)
Non DFS Used: 116485136384 (108.49 GB)
DFS Remaining: 226000130048(210.48 GB)
DFS Used%: 70.41%
DFS Remaining%: 19.52%
Last contact: Mon Aug 11 12:05:21 IST 2014


Harsh J,

Please check my dfsadmin report and help me fix the problem.

Here is the report:

Configured Capacity: 4066320277504 (3.7 TB)
Present Capacity: 3690038079488 (3.36 TB)
DFS Remaining: 1233865269248 (1.12 TB)
DFS Used: 2456172810240 (2.23 TB)
DFS Used%: 66.56%
Under replicated blocks: 66
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 3 (3 total, 0 dead)

Name: 172.16.0.97:50010
Decommission Status : Normal
Configured Capacity: 1159672115200 (1.05 TB)
DFS Used: 807840399360 (752.36 GB)
Non DFS Used: 177503387648 (165.31 GB)
DFS Remaining: 174328328192(162.36 GB)
DFS Used%: 69.66%
DFS Remaining%: 15.03%
Last contact: Mon Aug 11 12:05:22 IST 2014


Name: 172.16.0.106:50010
Decommission Status : Normal
Configured Capacity: 1749056290816 (1.59 TB)
DFS Used: 833225805824 (776 GB)
Non DFS Used: 82293673984 (76.64 GB)
DFS Remaining: 833536811008(776.29 GB)
DFS Used%: 47.64%
DFS Remaining%: 47.66%
Last contact: Mon Aug 11 12:05:21 IST 2014


Name: 172.16.0.63:50010
Decommission Status : Normal
Configured Capacity: 1157591871488 (1.05 TB)
DFS Used: 815106605056 (759.13 GB)
Non DFS Used: 116485136384 (108.49 GB)
DFS Remaining: 226000130048(210.48 GB)
DFS Used%: 70.41%
DFS Remaining%: 19.52%
Last contact: Mon Aug 11 12:05:21 IST 2014
