
Failed to replace a bad datanode


Hi folks,

We have a 16-node cluster with 3 Flume VMs handling ingestion. All nodes are in good condition, but each of the Flume logs is showing the error below. The only cause I could find for this error is running a 1-node cluster with replication set to 3, which doesn't apply here. Any ideas? Thanks for the help.

2015-07-17 07:25:09,432 WARN org.apache.flume.sink.hdfs.BucketWriter: Closing file: hdfs://nameservice1:8020/db/live/wifi_info/year=2015/month=07/day=10/_FlumeData.1436584835196.tmp failed. Will retry again in 180 seconds.
java.io.IOException: Failed to replace a bad datanode on the existing pipeline due to no more good datanodes being available to try. (Nodes: current=[10.100.55.65:50010, 10.100.55.62:50010], original=[10.100.55.65:50010, 10.100.55.62:50010]). The current failed datanode replacement policy is DEFAULT, and a client may configure this via 'dfs.client.block.write.replace-datanode-on-failure.policy' in its configuration.
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.findNewDatanode(DFSOutputStream.java:960)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.addDatanode2ExistingPipeline(DFSOutputStream.java:1026)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.setupPipelineForAppendOrRecovery(DFSOutputStream.java:1175)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.processDatanodeError(DFSOutputStream.java:924)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:486)
2015-07-17 07:25:13,143 INFO org.apache.flume.sink.hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
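
From the message it looks like this behavior is governed by the client-side property 'dfs.client.block.write.replace-datanode-on-failure.policy'. For what it's worth, below is the client-side hdfs-site.xml change I was considering as a workaround; the NEVER value is just my reading of the docs, not a confirmed fix, and I understand it trades pipeline recovery for availability:

<!-- Client-side (Flume agent) hdfs-site.xml: keep datanode failure
     handling enabled, but never try to replace a failed datanode in
     an already-open write pipeline -->
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
  <value>NEVER</value>
</property>

Does that sound reasonable for a 16-node cluster, or would it just be masking a datanode problem?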
