
Failed to replace a bad datanode


Hi folks,


We have a 16-node cluster with 3 Flume VMs handling ingestion. All nodes are in good condition, but we're seeing the error below in each of the Flume logs. The only cause I could find documented is having a 1-node cluster with replication set to 3, which doesn't apply here. Any ideas? Thanks for the help.



2015-07-17 07:25:09,432 WARN org.apache.flume.sink.hdfs.BucketWriter: Closing file: hdfs://nameservice1:8020/db/live/wifi_info/year=2015/month=07/day=10/_FlumeData.1436584835196.tmp failed. Will retry again in 180 seconds. Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[,], original=[,]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(
        at org.apache.hadoop.hdfs.DFSOutputStream$
2015-07-17 07:25:13,143 INFO org.apache.flume.sink.hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
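For reference, the error message itself names the knob involved: the HDFS client's datanode-replacement policy, which applies to the Flume HDFS sink because the sink is an HDFS client. A hedged sketch of the relevant client-side hdfs-site.xml properties is below; the values shown (NEVER in particular) are a commonly suggested workaround for small write pipelines, not something verified against this cluster, and changing them trades pipeline recovery for writing the rest of the block with fewer live replicas:

```xml
<!-- Sketch of client-side overrides in hdfs-site.xml (on the Flume hosts'
     classpath). These are the properties the WARN message refers to. -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <!-- DEFAULT is the shipped value; NEVER skips replacing a failed datanode -->
  <value>NEVER</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <!-- true by default; false disables the replacement feature entirely -->
  <value>true</value>
</property>
```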
