
Flume multi-agent configuration -- unable to send text files from multiple servers to one Cloudera node


Hi Team,

 

Please help on the below.

 

I have a three-node Cloudera cluster:

 

Server - 10.0.0.1 - NameNode

Server - 10.0.0.2 - DataNode

Server - 10.0.0.3 - DataNode

 

I want to send the DataNode logs from the 10.0.0.2 and 10.0.0.3 servers to the 10.0.0.1 server using Flume agents.

 

Below are my configuration files. After starting the agents on all three servers, DataNode logs are being copied to 10.0.0.1 from only one server. Please check the files below and give a suggestion.

 

I want to send logs from both servers (10.0.0.2 & 10.0.0.3) to 10.0.0.1.

 

 

# Agent a1 - runs on 10.0.0.3, tails the local DataNode log and forwards it over Avro
a1.sources = execSource
a1.channels = fileChannel
a1.sinks = k1

a1.sources.execSource.type = exec
a1.sources.execSource.command = tail -F /var/log/hadoop-hdfs/hadoop-cmf-hdfs-DATANODE-10.0.0.3.log.out
a1.sources.execSource.channels = fileChannel

# Despite its name, this channel is a memory channel
a1.channels.fileChannel.type = memory

a1.sinks.k1.type = avro
a1.sinks.k1.channel = fileChannel
a1.sinks.k1.hostname = 10.0.0.1
a1.sinks.k1.port = 44556

**************************************************************************************


# Agent a2 - runs on 10.0.0.2, tails the local DataNode log and forwards it over Avro
a2.sources = execSource
a2.channels = fileChannel
a2.sinks = k2

a2.sources.execSource.type = exec
a2.sources.execSource.command = tail -F /var/log/hadoop-hdfs/hadoop-cmf-hdfs-DATANODE-10.0.0.2.log.out
a2.sources.execSource.channels = fileChannel

# Despite its name, this channel is a memory channel
a2.channels.fileChannel.type = memory

a2.sinks.k2.type = avro
a2.sinks.k2.channel = fileChannel
a2.sinks.k2.hostname = 10.0.0.1
a2.sinks.k2.port = 41414

 

*******************************************************************************************

# Collector - runs on 10.0.0.1, receives from both agents and writes to HDFS.
# Note: the original file defined av1 twice (once with "channels=filechannel",
# a channel that does not exist); the duplicate has been consolidated and each
# source now points at its own defined channel.
collector.sources = av1 av2
collector.channels = c1 c2
collector.sinks = k1 k2

# Avro source receiving from agent a1 (10.0.0.3)
collector.sources.av1.type = avro
collector.sources.av1.bind = 10.0.0.1
collector.sources.av1.port = 44556
collector.sources.av1.channels = c1

collector.channels.c1.type = memory

collector.sinks.k1.type = hdfs
collector.sinks.k1.hdfs.path = hdfs://10.0.0.1:8020/flume
collector.sinks.k1.hdfs.fileType = DataStream
# The valid property is hdfs.writeFormat (not hdfs.writeType)
collector.sinks.k1.hdfs.writeFormat = Text
collector.sinks.k1.hdfs.filePrefix = DataNode
collector.sinks.k1.hdfs.fileSuffix = .txt
collector.sinks.k1.hdfs.rollInterval = 120
collector.sinks.k1.hdfs.rollSize = 0
collector.sinks.k1.hdfs.rollCount = 0
collector.sinks.k1.channel = c1

# Avro source receiving from agent a2 (10.0.0.2)
collector.sources.av2.type = avro
collector.sources.av2.bind = 10.0.0.1
collector.sources.av2.port = 41414
collector.sources.av2.channels = c2

collector.channels.c2.type = memory

collector.sinks.k2.type = hdfs
collector.sinks.k2.hdfs.path = hdfs://10.0.0.1:8020/flume
collector.sinks.k2.hdfs.fileType = DataStream
collector.sinks.k2.hdfs.writeFormat = Text
collector.sinks.k2.hdfs.filePrefix = DataNode3
collector.sinks.k2.hdfs.fileSuffix = .txt
collector.sinks.k2.hdfs.rollInterval = 120
collector.sinks.k2.hdfs.rollSize = 0
collector.sinks.k2.hdfs.rollCount = 0
collector.sinks.k2.channel = c2
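For reference, the three agents can be started as sketched below. The config-file paths are assumptions from my setup; adjust them to yours. One common cause of only one stream arriving is the `--name` flag not matching the agent-name prefix inside the properties file (a1, a2, collector) — an agent started under the wrong name comes up with no components and silently forwards nothing:

```shell
# On 10.0.0.3 - start agent a1 (--name must match the "a1." prefix in the file)
flume-ng agent --conf /etc/flume-ng/conf \
  --conf-file /etc/flume-ng/conf/agent1.properties \
  --name a1 -Dflume.root.logger=INFO,console

# On 10.0.0.2 - start agent a2
flume-ng agent --conf /etc/flume-ng/conf \
  --conf-file /etc/flume-ng/conf/agent2.properties \
  --name a2 -Dflume.root.logger=INFO,console

# On 10.0.0.1 - start the collector (should be up before the other two connect)
flume-ng agent --conf /etc/flume-ng/conf \
  --conf-file /etc/flume-ng/conf/collector.properties \
  --name collector -Dflume.root.logger=INFO,console
```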

 


18/12/20 12:35:04 INFO ipc.NettyServer: [id: 0x691073e4, /10.10.0.3:41210 => /10.160.0.2:44556] OPEN
18/12/20 12:35:04 INFO ipc.NettyServer: [id: 0x691073e4, /10.10.0.3:41210 => /10.160.0.2:44556] BOUND: /10.0.0.2:44556
18/12/20 12:35:04 INFO ipc.NettyServer: [id: 0x691073e4, /10.10.0.3:41210 => /10.160.0.2:44556] CONNECTED: /10.0.0.3:41210
18/12/20 12:35:07 INFO ipc.NettyServer: [id: 0xe682b8b8, /10.10.0.2:58074 => /10.160.0.2:41414] OPEN
18/12/20 12:35:07 INFO ipc.NettyServer: [id: 0xe682b8b8, /10.10.0.2:58074 => /10.160.0.2:41414] BOUND: /10.0.0.2:41414
18/12/20 12:35:07 INFO ipc.NettyServer: [id: 0xe682b8b8, /10.10.0.2:58074 => /10.160.0.2:41414] CONNECTED: /10.0.0.3:58074

18/12/20 12:35:13 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
18/12/20 12:35:13 INFO hdfs.BucketWriter: Creating hdfs://10.0.0.1:8020/flume/DataNode.1545309313129.txt.tmp
18/12/20 12:35:22 WARN hdfs.BucketWriter: Block Under-replication detected. Rotating file.
18/12/20 12:35:22 INFO hdfs.BucketWriter: Closing hdfs://10.0.0.1:8020/flume/DataNode.1545309313129.txt.tmp
18/12/20 12:35:23 INFO hdfs.BucketWriter: Renaming hdfs://10.0.0.1:8020/flume/DataNode.1545309313129.txt.tmp to hdfs://10.0.0.1:8020/flume/DataNode.1545309313129.txt
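To check which streams actually land in HDFS, the target directory configured above can be listed; if only files with one prefix appear, only one agent's events are reaching its sink:

```shell
# List the rolled files written by the collector's HDFS sinks
# (files from both sinks should appear, e.g. DataNode.1545309313129.txt)
hdfs dfs -ls /flume
```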

 

 
