<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>question Re: While running flume agent facing some error in Archives of Support Questions (Read Only)</title>
    <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/40905#M27139</link>
    <description>Hi, Thanks a lot for your reply&lt;BR /&gt;&lt;BR /&gt;As you said i want to define two more sinks like sink 1 and sink 2 and sink 3&lt;BR /&gt;&lt;BR /&gt;for example this is my config file for sink:&lt;BR /&gt;&lt;BR /&gt;# Please paste flume.conf here. Example:&lt;BR /&gt;# Sources, channels, and sinks are defined per&lt;BR /&gt;# agent name, in this case 'tier1'.&lt;BR /&gt;tier1.sources = source1&lt;BR /&gt;tier1.channels = channel1&lt;BR /&gt;tier1.sinks = sink1&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;tier1.sources.source1.type = avro&lt;BR /&gt;tier1.sources.source1.bind= 192.168.4.110&lt;BR /&gt;tier1.sources.source1.port= 8021&lt;BR /&gt;tier1.sources.source1.channels = channel1&lt;BR /&gt;tier1.channels.channel1.type= memory&lt;BR /&gt;tier1.sinks.sink1.type = hdfs&lt;BR /&gt;tier1.sinks.sink1.channel = channel1&lt;BR /&gt;tier1.sinks.sink1.hdfs.path = hdfs://192.168.4.110:8021/user/hadoop/flumelogs/&lt;BR /&gt;tier1.sinks.sink1.hdfs.fileType = DataStream&lt;BR /&gt;tier1.sinks.sink1.hdfs.writeFormat= Text&lt;BR /&gt;tier1.sinks.sink1.hdfs.batchSize = 100&lt;BR /&gt;tier1.sinks.sink1.hdfs.rollCount = 0&lt;BR /&gt;tier1.sinks.sink1.hdfs.rollSize = 73060835&lt;BR /&gt;tier1.sinks.sink1.hdfs.rollInterval = 0&lt;BR /&gt;&lt;BR /&gt;# Other properties are specific to each type of&lt;BR /&gt;# source, channel, or sink. In this case, we&lt;BR /&gt;# specify the capacity of the memory channel.&lt;BR /&gt;tier1.channels.channel1.capacity = 10000&lt;BR /&gt;tier1.channels.channel1.transactionCapacity = 1000&lt;BR /&gt;&lt;BR /&gt;so i want to add two more like this&lt;BR /&gt;&lt;BR /&gt;tier1.sinks.sink2.type = hdfs&lt;BR /&gt;tier1.sinks.sink2.channel = channel1&lt;BR /&gt;tier1.sinks.sink2.hdfs.path = hdfs://192.168.4.110:8021/user/hadoop/flumelogs/&lt;BR /&gt;tier1.sinks.sink2.hdfs.fileType = DataStream&lt;BR /&gt;tier1.sinks.sink2.hdfs.writeFormat= Text&lt;BR /&gt;tier1.sinks.sink2.hdfs.batchSize = 100&lt;BR /&gt;tier1.sinks.sink2.hdfs.rollCount = 0&lt;BR /&gt;tier1.sinks.sink2.hdfs.rollSize = 73060835&lt;BR /&gt;tier1.sinks.sink2.hdfs.rollInterval = 0&lt;BR /&gt;&lt;BR /&gt;tier1.sinks.sink3.type = hdfs&lt;BR /&gt;tier1.sinks.sink3.channel = channel1&lt;BR /&gt;tier1.sinks.sink3.hdfs.path = hdfs://192.168.4.110:8021/user/hadoop/flumelogs/&lt;BR /&gt;tier1.sinks.sink3.hdfs.fileType = DataStream&lt;BR /&gt;tier1.sinks.sink3.hdfs.writeFormat= Text&lt;BR /&gt;tier1.sinks.sink3.hdfs.batchSize = 100&lt;BR /&gt;tier1.sinks.sink3.hdfs.rollCount = 0&lt;BR /&gt;tier1.sinks.sink3.hdfs.rollSize = 73060835&lt;BR /&gt;tier1.sinks.sink3.hdfs.rollInterval = 0&lt;BR /&gt;&lt;BR /&gt;Thanks in advance.</description>
    <pubDate>Tue, 17 May 2016 06:21:28 GMT</pubDate>
    <dc:creator>Tejaponnaluru</dc:creator>
    <dc:date>2016-05-17T06:21:28Z</dc:date>
    <item>
      <title>While running flume agent facing some error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/40443#M27134</link>
      <description>&lt;P&gt;Hi, Guys&lt;/P&gt;&lt;P&gt;while running my flume agents getting some error&lt;/P&gt;&lt;P&gt;Here My error:&lt;BR /&gt;&lt;STRONG&gt;Avro source source1: Unable to process event batch. Exception follows.&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;org.apache.flume.ChannelException: Unable to put batch on required channel: org.apache.flume.channel.MemoryChannel{name: channel1}&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:200)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.flume.source.AvroSource.appendBatch(AvroSource.java:386)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at java.lang.reflect.Method.invoke(Method.java:601)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.avro.ipc.specific.SpecificResponder.respond(SpecificResponder.java:91)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.avro.ipc.Responder.respond(Responder.java:151)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.messageReceived(NettyServer.java:188)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:173)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:787)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:310)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:555)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at 
org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at java.lang.Thread.run(Thread.java:722)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;Caused by: org.apache.flume.ChannelFullException: Space for commit to queue couldn't be acquired. Sinks are likely not keeping up with sources, or the buffer size is too tight&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.flume.channel.MemoryChannel$MemoryTransaction.doCommit(MemoryChannel.java:130)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:151)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:192)&lt;/STRONG&gt;&lt;BR /&gt;&lt;STRONG&gt;... 30 more&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;and here my config files&lt;/P&gt;&lt;P&gt;my local config:&lt;/P&gt;&lt;P&gt;agent.sources = localsource&lt;BR /&gt;agent.channels = memoryChannel&lt;BR /&gt;agent.sinks = avro_Sink&lt;/P&gt;&lt;P&gt;agent.sources.localsource.type = spooldir&lt;BR /&gt;#agent.sources.localsource.shell = /bin/bash -c&lt;BR /&gt;agent.sources.localsource.spoolDir = /home/dwh/teja/Flumedata/&lt;BR /&gt;agent.sources.localsource.fileHeader = true&lt;/P&gt;&lt;P&gt;# The channel can be defined as follows.&lt;BR /&gt;agent.sources.localsource.channels = memoryChannel&lt;/P&gt;&lt;P&gt;# Each sink's type must be defined&lt;BR /&gt;agent.sinks.avro_Sink.type = avro&lt;BR /&gt;agent.sinks.avro_Sink.hostname=192.168.4.110&lt;BR /&gt;agent.sinks.avro_Sink.port= 8021&lt;/P&gt;&lt;P&gt;agent.sinks.avro_Sink.avro.batchSize = 100&lt;BR /&gt;agent.sinks.avro_Sink.avro.rollCount = 0&lt;BR /&gt;agent.sinks.avro_Sink.avro.rollSize = 73060831&lt;BR /&gt;agent.sinks.avro_Sink.avro.rollInterval = 0&lt;/P&gt;&lt;P&gt;agent.sources.localsource.interceptors = search-replace&lt;BR /&gt;agent.sources.localsource.interceptors.search-replace.type = search_replace&lt;/P&gt;&lt;P&gt;# Remove leading alphanumeric characters in an event body.&lt;BR /&gt;agent.sources.localsource.interceptors.search-replace.searchPattern = ###|##&lt;BR /&gt;agent.sources.localsource.interceptors.search-replace.replaceString = |&lt;/P&gt;&lt;P&gt;#Specify the channel the sink should use&lt;BR /&gt;agent.sinks.avro_Sink.channel = memoryChannel&lt;/P&gt;&lt;P&gt;# Each channel's type is defined.&lt;BR /&gt;agent.channels.memoryChannel.type = memory&lt;/P&gt;&lt;P&gt;agent.channels.memoryChannel.capacity = 10000&lt;BR /&gt;agent.channels.memoryChannel.transactionCapacity = 1000&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;my server X config file&lt;/P&gt;&lt;P&gt;tier1.sources = source1&lt;BR /&gt;tier1.channels = channel1&lt;BR /&gt;tier1.sinks = sink1&lt;/P&gt;&lt;P&gt;tier1.sources.source1.type = avro&lt;BR /&gt;tier1.sources.source1.bind= 192.168.4.110&lt;BR /&gt;tier1.sources.source1.port= 8021&lt;BR 
/&gt;tier1.sources.source1.channels = channel1&lt;BR /&gt;tier1.channels.channel1.type= memory&lt;BR /&gt;tier1.sinks.sink1.type = hdfs&lt;BR /&gt;tier1.sinks.sink1.channel = channel1&lt;BR /&gt;tier1.sinks.sink1.hdfs.path = hdfs://192.168.4.110:8021/user/hadoop/flumelogs/&lt;BR /&gt;tier1.sinks.sink1.hdfs.fileType = DataStream&lt;BR /&gt;tier1.sinks.sink1.hdfs.writeFormat= Text&lt;BR /&gt;tier1.sinks.sink1.hdfs.batchSize = 100&lt;BR /&gt;tier1.sinks.sink1.hdfs.rollCount = 0&lt;BR /&gt;tier1.sinks.sink1.hdfs.rollSize = 73060831&lt;BR /&gt;tier1.sinks.sink1.hdfs.rollInterval = 0&lt;/P&gt;&lt;P&gt;tier1.channels.channel1.capacity = 10000&lt;BR /&gt;tier1.channels.channel1.transactionCapacity = 1000&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Please help if any one familiar with this.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Fri, 16 Sep 2022 10:16:58 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/40443#M27134</guid>
      <dc:creator>Tejaponnaluru</dc:creator>
      <dc:date>2022-09-16T10:16:58Z</dc:date>
    </item>
    <item>
      <title>Re: While running flume agent facing some error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/40538#M27135</link>
      <description>&lt;P&gt;As noted in your other similar post, one of two things happened:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;1. &amp;nbsp;Your single sink is not keeping up with the source, so you need to add more sinks pulling from the same channel&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;or&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;2. &amp;nbsp;You had an error in your sink that caused it to stop delivering to HDFS. &amp;nbsp;You should review the logs for the first error (prior to the channel-full exceptions). &amp;nbsp;Often, restarting will resolve the issue. &amp;nbsp;Adding additional sinks will help as well, because the failure of one sink won't prevent the other sinks from pulling events off the channel.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;-pd&lt;/P&gt;</description>
      <pubDate>Fri, 06 May 2016 15:55:05 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/40538#M27135</guid>
      <dc:creator>pdvorak</dc:creator>
      <dc:date>2016-05-06T15:55:05Z</dc:date>
    </item>
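    <!-- Illustrative sketch for the reply above (not part of the original thread): several sinks can
         drain the same channel in parallel. The agent, channel, and sink names come from the thread;
         the flume.conf property values below are hypothetical examples, not the poster's actual settings.

         # two HDFS sinks reading from the one memory channel
         tier1.sinks = sink1 sink2
         tier1.sinks.sink1.type = hdfs
         tier1.sinks.sink1.channel = channel1
         tier1.sinks.sink2.type = hdfs
         tier1.sinks.sink2.channel = channel1
         # a roomier channel also buys time before the "channel full" exception appears
         tier1.channels.channel1.capacity = 100000
         tier1.channels.channel1.transactionCapacity = 1000
    -->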
    <item>
      <title>Re: While running flume agent facing some error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/40591#M27136</link>
      <description>Hi, thanks a lot for your reply.&lt;BR /&gt;&lt;BR /&gt;As you said, I want to add more sinks for that channel. Is that possible, and if so, how? My problem is that I want to write all logs into one file in HDFS, so if I use multiple sinks, is it still possible to write all logs into one single file?&lt;BR /&gt;&lt;BR /&gt;Thanks in advance&lt;BR /&gt;</description>
      <pubDate>Mon, 09 May 2016 05:24:43 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/40591#M27136</guid>
      <dc:creator>Tejaponnaluru</dc:creator>
      <dc:date>2016-05-09T05:24:43Z</dc:date>
    </item>
    <item>
      <title>Re: While running flume agent facing some error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/40594#M27137</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; As you said i checked log first error&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; This is that error&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Avro source source1: Unable to process event batch. Exception follows.&lt;BR /&gt;org.apache.flume.ChannelException: Unable to put batch on required channel: org.apache.flume.channel.MemoryChannel{name: channel1}&lt;BR /&gt;at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:200)&lt;BR /&gt;at org.apache.flume.source.AvroSource.appendBatch(AvroSource.java:386)&lt;BR /&gt;at sun.reflect.GeneratedMethodAccessor1.invoke(Unknown Source)&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:601)&lt;BR /&gt;at org.apache.avro.ipc.specific.SpecificResponder.respond(SpecificResponder.java:91)&lt;BR /&gt;at org.apache.avro.ipc.Responder.respond(Responder.java:151)&lt;BR /&gt;at org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.messageReceived(NettyServer.java:188)&lt;BR /&gt;at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)&lt;BR /&gt;at org.apache.avro.ipc.NettyServer$NettyServerAvroHandler.handleUpstream(NettyServer.java:173)&lt;BR /&gt;at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)&lt;BR /&gt;at org.jboss.netty.channel.DefaultChannelPipeline$DefaultChannelHandlerContext.sendUpstream(DefaultChannelPipeline.java:787)&lt;BR /&gt;at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:296)&lt;BR /&gt;at org.jboss.netty.handler.codec.frame.FrameDecoder.unfoldAndFireMessageReceived(FrameDecoder.java:462)&lt;BR /&gt;at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:443)&lt;BR /&gt;at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)&lt;BR /&gt;at org.jboss.netty.channel.SimpleChannelUpstreamHandler.handleUpstream(SimpleChannelUpstreamHandler.java:70)&lt;BR /&gt;at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:560)&lt;BR /&gt;at org.jboss.netty.channel.DefaultChannelPipeline.sendUpstream(DefaultChannelPipeline.java:555)&lt;BR /&gt;at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)&lt;BR /&gt;at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)&lt;BR /&gt;at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)&lt;BR /&gt;at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)&lt;BR /&gt;at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)&lt;BR /&gt;at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)&lt;BR /&gt;at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)&lt;BR /&gt;at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108)&lt;BR /&gt;at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:722)&lt;BR /&gt;Caused by: org.apache.flume.ChannelFullException: 
Space for commit to queue couldn't be acquired. Sinks are likely not keeping up with sources, or the buffer size is too tight&lt;BR /&gt;at org.apache.flume.channel.MemoryChannel$MemoryTransaction.doCommit(MemoryChannel.java:130)&lt;BR /&gt;at org.apache.flume.channel.BasicTransactionSemantics.commit(BasicTransactionSemantics.java:151)&lt;BR /&gt;at org.apache.flume.channel.ChannelProcessor.processEventBatch(ChannelProcessor.java:192)&lt;BR /&gt;... 30 more&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;This is the full error. I also checked the charts and the channel is reaching 100%, so what should I do to clear this?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Thanks in advance.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Mon, 09 May 2016 07:02:15 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/40594#M27137</guid>
      <dc:creator>Tejaponnaluru</dc:creator>
      <dc:date>2016-05-09T07:02:15Z</dc:date>
    </item>
    <item>
      <title>Re: While running flume agent facing some error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/40775#M27138</link>
      <description>Add 3 more hdfs sinks, all using the same channel. Be sure to set hdfs.filePrefix to a unique value per sink to avoid filename collisions. Hopefully that will deliver the events fast enough to keep up.&lt;BR /&gt;&lt;BR /&gt;-pd</description>
      <pubDate>Thu, 12 May 2016 14:06:16 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/40775#M27138</guid>
      <dc:creator>pdvorak</dc:creator>
      <dc:date>2016-05-12T14:06:16Z</dc:date>
    </item>
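    <!-- Illustrative sketch of the hdfs.filePrefix advice above: each sink writes its own files into the
         shared directory, so a unique prefix per sink avoids filename collisions. The prefix values are
         hypothetical examples, not taken from the thread.

         tier1.sinks.sink1.hdfs.filePrefix = FlumeData_sink1
         tier1.sinks.sink2.hdfs.filePrefix = FlumeData_sink2
         tier1.sinks.sink3.hdfs.filePrefix = FlumeData_sink3
    -->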
    <item>
      <title>Re: While running flume agent facing some error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/40905#M27139</link>
      <description>Hi, Thanks a lot for your reply&lt;BR /&gt;&lt;BR /&gt;As you said i want to define two more sinks like sink 1 and sink 2 and sink 3&lt;BR /&gt;&lt;BR /&gt;for example this is my config file for sink:&lt;BR /&gt;&lt;BR /&gt;# Please paste flume.conf here. Example:&lt;BR /&gt;# Sources, channels, and sinks are defined per&lt;BR /&gt;# agent name, in this case 'tier1'.&lt;BR /&gt;tier1.sources = source1&lt;BR /&gt;tier1.channels = channel1&lt;BR /&gt;tier1.sinks = sink1&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;tier1.sources.source1.type = avro&lt;BR /&gt;tier1.sources.source1.bind= 192.168.4.110&lt;BR /&gt;tier1.sources.source1.port= 8021&lt;BR /&gt;tier1.sources.source1.channels = channel1&lt;BR /&gt;tier1.channels.channel1.type= memory&lt;BR /&gt;tier1.sinks.sink1.type = hdfs&lt;BR /&gt;tier1.sinks.sink1.channel = channel1&lt;BR /&gt;tier1.sinks.sink1.hdfs.path = hdfs://192.168.4.110:8021/user/hadoop/flumelogs/&lt;BR /&gt;tier1.sinks.sink1.hdfs.fileType = DataStream&lt;BR /&gt;tier1.sinks.sink1.hdfs.writeFormat= Text&lt;BR /&gt;tier1.sinks.sink1.hdfs.batchSize = 100&lt;BR /&gt;tier1.sinks.sink1.hdfs.rollCount = 0&lt;BR /&gt;tier1.sinks.sink1.hdfs.rollSize = 73060835&lt;BR /&gt;tier1.sinks.sink1.hdfs.rollInterval = 0&lt;BR /&gt;&lt;BR /&gt;# Other properties are specific to each type of&lt;BR /&gt;# source, channel, or sink. In this case, we&lt;BR /&gt;# specify the capacity of the memory channel.&lt;BR /&gt;tier1.channels.channel1.capacity = 10000&lt;BR /&gt;tier1.channels.channel1.transactionCapacity = 1000&lt;BR /&gt;&lt;BR /&gt;so i want to add two more like this&lt;BR /&gt;&lt;BR /&gt;tier1.sinks.sink2.type = hdfs&lt;BR /&gt;tier1.sinks.sink2.channel = channel1&lt;BR /&gt;tier1.sinks.sink2.hdfs.path = hdfs://192.168.4.110:8021/user/hadoop/flumelogs/&lt;BR /&gt;tier1.sinks.sink2.hdfs.fileType = DataStream&lt;BR /&gt;tier1.sinks.sink2.hdfs.writeFormat= Text&lt;BR /&gt;tier1.sinks.sink2.hdfs.batchSize = 100&lt;BR /&gt;tier1.sinks.sink2.hdfs.rollCount = 0&lt;BR /&gt;tier1.sinks.sink2.hdfs.rollSize = 73060835&lt;BR /&gt;tier1.sinks.sink2.hdfs.rollInterval = 0&lt;BR /&gt;&lt;BR /&gt;tier1.sinks.sink3.type = hdfs&lt;BR /&gt;tier1.sinks.sink3.channel = channel1&lt;BR /&gt;tier1.sinks.sink3.hdfs.path = hdfs://192.168.4.110:8021/user/hadoop/flumelogs/&lt;BR /&gt;tier1.sinks.sink3.hdfs.fileType = DataStream&lt;BR /&gt;tier1.sinks.sink3.hdfs.writeFormat= Text&lt;BR /&gt;tier1.sinks.sink3.hdfs.batchSize = 100&lt;BR /&gt;tier1.sinks.sink3.hdfs.rollCount = 0&lt;BR /&gt;tier1.sinks.sink3.hdfs.rollSize = 73060835&lt;BR /&gt;tier1.sinks.sink3.hdfs.rollInterval = 0&lt;BR /&gt;&lt;BR /&gt;Thanks in advance.</description>
      <pubDate>Tue, 17 May 2016 06:21:28 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/40905#M27139</guid>
      <dc:creator>Tejaponnaluru</dc:creator>
      <dc:date>2016-05-17T06:21:28Z</dc:date>
    </item>
    <item>
      <title>Re: While running flume agent facing some error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/40907#M27140</link>
      <description>&lt;P&gt;Hi , As you said i tried like this&lt;BR /&gt;&lt;BR /&gt;# Please paste flume.conf here. Example:&lt;BR /&gt;# Sources, channels, and sinks are defined per&lt;BR /&gt;# agent name, in this case 'tier1'.&lt;BR /&gt;tier1.sources = source1&lt;BR /&gt;tier1.channels = channel1&lt;BR /&gt;tier1.sinks = sink1 sink2 sink3&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;tier1.sources.source1.type = avro&lt;BR /&gt;tier1.sources.source1.bind= 192.168.4.110&lt;BR /&gt;tier1.sources.source1.port= 8021&lt;BR /&gt;tier1.sources.source1.channels = channel1&lt;BR /&gt;tier1.channels.channel1.type= memory&lt;BR /&gt;tier1.sinks.sink1.type = hdfs&lt;BR /&gt;tier1.sinks.sink1.channel = channel1&lt;BR /&gt;tier1.sinks.sink1.hdfs.path = hdfs://192.168.4.110:8021/user/hadoop/flumelogs/&lt;BR /&gt;tier1.sinks.sink1.hdfs.filePrefix = Flumedata&lt;BR /&gt;tier1.sinks.sink1.hdfs.fileType = DataStream&lt;BR /&gt;tier1.sinks.sink1.hdfs.writeFormat= Text&lt;BR /&gt;tier1.sinks.sink1.hdfs.batchSize = 100&lt;BR /&gt;tier1.sinks.sink1.hdfs.rollCount = 0&lt;BR /&gt;tier1.sinks.sink1.hdfs.rollSize = 73060835&lt;BR /&gt;tier1.sinks.sink1.hdfs.rollInterval = 0&lt;BR /&gt;&lt;BR /&gt;tier1.sinks.sink2.type = hdfs&lt;BR /&gt;tier1.sinks.sink2.channel = channel1&lt;BR /&gt;tier1.sinks.sink2.hdfs.path = hdfs://192.168.4.110:8021/user/hadoop/flumelogs/&lt;BR /&gt;tier1.sinks.sink2.hdfs.filePrefix = Flumedata1&lt;BR /&gt;tier1.sinks.sink2.hdfs.fileType = DataStream&lt;BR /&gt;tier1.sinks.sink2.hdfs.writeFormat= Text&lt;BR /&gt;tier1.sinks.sink2.hdfs.batchSize = 100&lt;BR /&gt;tier1.sinks.sink2.hdfs.rollCount = 0&lt;BR /&gt;tier1.sinks.sink2.hdfs.rollSize = 73060835&lt;BR /&gt;tier1.sinks.sink2.hdfs.rollInterval = 0&lt;BR /&gt;&lt;BR /&gt;tier1.sinks.sink3.type = hdfs&lt;BR /&gt;tier1.sinks.sink3.channel = channel1&lt;BR /&gt;tier1.sinks.sink3.hdfs.path = hdfs://192.168.4.110:8021/user/hadoop/flumelogs/&lt;BR /&gt;tier1.sinks.sink3.hdfs.filePrefix = Flumedata2&lt;BR /&gt;tier1.sinks.sink3.hdfs.fileType = DataStream&lt;BR /&gt;tier1.sinks.sink3.hdfs.writeFormat= Text&lt;BR /&gt;tier1.sinks.sink3.hdfs.batchSize = 100&lt;BR /&gt;tier1.sinks.sink3.hdfs.rollCount = 0&lt;BR /&gt;tier1.sinks.sink3.hdfs.rollSize = 73060835&lt;BR /&gt;tier1.sinks.sink3.hdfs.rollInterval = 0&lt;BR /&gt;&lt;BR /&gt;# Other properties are specific to each type of&lt;BR /&gt;# source, channel, or sink. 
In this case, we&lt;BR /&gt;# specify the capacity of the memory channel.&lt;BR /&gt;tier1.channels.channel1.capacity = 10000&lt;BR /&gt;tier1.channels.channel1.transactionCapacity = 1000&lt;BR /&gt;&lt;BR /&gt;Still facing same error.&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.node.PollingPropertiesFileConfigurationProvider&lt;BR /&gt;&lt;BR /&gt;Configuration provider starting&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.node.PollingPropertiesFileConfigurationProvider&lt;BR /&gt;&lt;BR /&gt;Reloading configuration file:/data/var/run/cloudera-scm-agent/process/1660-flume-AGENT/flume.conf&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink1&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink3&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink2&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink3&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink2&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink2&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink1&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink2&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink1&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfiguration&lt;BR /&gt;&lt;BR /&gt;Added sinks: sink1 sink2 sink3 Agent: tier1&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink1&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink2&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink2&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink3&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink3&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink3&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink2&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink1&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink1&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink1&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink2&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink3&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink2&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink3&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink3&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink1&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink3&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink3&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink2&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink1&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfigurationProcessing:sink1&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.conf.FlumeConfiguration&lt;BR /&gt;&lt;BR /&gt;Post-validation flume configuration contains configuration for agents: [tier1]&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.node.AbstractConfigurationProviderCreating channels&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.channel.DefaultChannelFactoryCreating instance of channel channel1 type memory&lt;BR /&gt;&lt;BR 
/&gt;org.apache.flume.node.AbstractConfigurationProviderCreated channel channel1&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.source.DefaultSourceFactoryCreating instance of source source1, type avro&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.sink.DefaultSinkFactoryCreating instance of sink: sink1, type: hdfs&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.sink.DefaultSinkFactoryCreating instance of sink: sink2, type: hdfs&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.sink.DefaultSinkFactoryCreating instance of sink: sink3, type: hdfs&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.node.AbstractConfigurationProvideChannel channel1 connected to [source1, sink1, sink2, sink3]&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.node.Application&lt;BR /&gt;&lt;BR /&gt;Starting new configuration:{ sourceRunners:{source1=EventDrivenSourceRunner: { source:Avro source source1: { bindAddress: 192.168.4.110, port: 8021 } }} sinkRunners:{sink1=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@672f151d counterGroup:{ name:null counters:{} } }, sink2=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@441357d7 counterGroup:{ name:null counters:{} } }, sink3=SinkRunner: { policy:org.apache.flume.sink.DefaultSinkProcessor@51ec072b counterGroup:{ name:null counters:{} } }} channels:{channel1=org.apache.flume.channel.MemoryChannel{name: channel1}} }&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.node.ApplicationStarting Channel channel1&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.instrumentation.MonitoredCounterGroupMonitored counter group for type: CHANNEL, name: channel1: Successfully registered new MBean.&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.instrumentation.MonitoredCounterGroup Component type: CHANNEL, name: channel1 started&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.node.ApplicationStarting Sink sink1&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.node.ApplicationStarting Sink sink2&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.node.Application&lt;BR /&gt;&lt;BR /&gt;Starting Sink sink3&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.node.Application&lt;BR /&gt;&lt;BR /&gt;Starting Source source1&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.source.AvroSource&lt;BR /&gt;&lt;BR /&gt;Starting Avro source source1: { bindAddress: 192.168.4.110, port: 8021 }...&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.instrumentation.MonitoredCounterGroup&lt;BR /&gt;&lt;BR /&gt;Monitored counter group for type: SINK, name: sink2: Successfully registered new MBean.&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.instrumentation.MonitoredCounterGroup&lt;BR /&gt;&lt;BR /&gt;Monitored counter group for type: SINK, name: sink1: Successfully registered new MBean.&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.instrumentation.MonitoredCounterGroup&lt;BR /&gt;&lt;BR /&gt;Component type: SINK, name: sink1 started&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.instrumentation.MonitoredCounterGroup&lt;BR /&gt;&lt;BR /&gt;Component type: SINK, name: sink2 started&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.instrumentation.MonitoredCounterGroup&lt;BR /&gt;&lt;BR /&gt;Monitored counter group for type: SINK, name: sink3: Successfully registered new MBean.&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.instrumentation.MonitoredCounterGroup&lt;BR /&gt;&lt;BR /&gt;Component type: SINK, name: sink3 started&lt;BR /&gt;&lt;BR /&gt;org.mortbay.log&lt;BR /&gt;&lt;BR /&gt;Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog&lt;BR /&gt;&lt;BR /&gt;org.mortbay.log&lt;BR /&gt;&lt;BR /&gt;jetty-6.1.26.cloudera.4&lt;BR /&gt;&lt;BR /&gt;org.mortbay.log&lt;BR /&gt;&lt;BR /&gt;Started 
SelectChannelConnector@0.0.0.0:41414&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.instrumentation.MonitoredCounterGroup&lt;BR /&gt;&lt;BR /&gt;Monitored counter group for type: SOURCE, name: source1: Successfully registered new MBean.&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.instrumentation.MonitoredCounterGroup&lt;BR /&gt;&lt;BR /&gt;Component type: SOURCE, name: source1 started&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.source.AvroSource&lt;BR /&gt;&lt;BR /&gt;Avro source source1 started.&lt;BR /&gt;&lt;BR /&gt;org.apache.avro.ipc.NettyServer&lt;BR /&gt;&lt;BR /&gt;[id: 0x077365c8, /192.168.6.118:60703 =&amp;gt; /192.168.4.110:8021] OPEN&lt;BR /&gt;&lt;BR /&gt;org.apache.avro.ipc.NettyServer [id: 0x077365c8, /192.168.6.118:60703 =&amp;gt; /192.168.4.110:8021] BOUND: /192.168.4.110:8021&lt;BR /&gt;&lt;BR /&gt;org.apache.avro.ipc.NettyServer [id: 0x077365c8, /192.168.6.118:60703 =&amp;gt; /192.168.4.110:8021] CONNECTED: /192.168.6.118:60703&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.sink.hdfs.HDFSDataStreamSerializer = TEXT, UseRawLocalFileSystem = false&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.sink.hdfs.HDFSDataStreamSerializer = TEXT, UseRawLocalFileSystem = false&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.sink.hdfs.HDFSDataStreamSerializer = TEXT, UseRawLocalFileSystem = false&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.sink.hdfs.BucketWriter Creating hdfs://192.168.4.110:8021/user/hadoop/flumelogs//Flumedata.1463466791008.tmporg.apache.avro.ipc.NettyServer[id: 0x7a41116a, /192.168.4.110:38630 =&amp;gt; /192.168.4.110:8021] OPEN&lt;BR /&gt;&lt;BR /&gt;org.apache.avro.ipc.NettyServer [id: 0x7a41116a, /192.168.4.110:38630 =&amp;gt; /192.168.4.110:8021] BOUND: /192.168.4.110:8021&lt;BR /&gt;&lt;BR /&gt;org.apache.avro.ipc.NettyServer [id: 0x7a41116a, /192.168.4.110:38630 =&amp;gt; /192.168.4.110:8021] CONNECTED: /192.168.4.110:38630&lt;BR /&gt;&lt;BR /&gt;org.apache.avro.ipc.NettyServer [id: 0x7a41116a, /192.168.4.110:38630 :&amp;gt; /192.168.4.110:8021] DISCONNECTED&lt;BR /&gt;&lt;BR /&gt;org.apache.avro.ipc.NettyServer&lt;BR /&gt;&lt;BR /&gt;Unexpected exception from downstream.&lt;BR /&gt;org.apache.avro.AvroRuntimeException: Excessively large list allocation request detected: 134352896 items! 
Connection closed.&lt;BR /&gt;at org.apache.avro.ipc.NettyTransportCodec$NettyFrameDecoder.decodePackHeader(NettyTransportCodec.java:167)&lt;BR /&gt;at org.apache.avro.ipc.NettyTransportCodec$NettyFrameDecoder.decode(NettyTransportCodec.java:139)&lt;BR /&gt;at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)&lt;BR /&gt;at org.jboss.netty.handler.codec.frame.FrameDecoder.cleanup(FrameDecoder.java:482)&lt;BR /&gt;at org.jboss.netty.handler.codec.frame.FrameDecoder.channelDisconnected(FrameDecoder.java:365)&lt;BR /&gt;at org.jboss.netty.channel.Channels.fireChannelDisconnected(Channels.java:396)&lt;BR /&gt;at org.jboss.netty.channel.socket.nio.AbstractNioWorker.close(AbstractNioWorker.java:336)&lt;BR /&gt;at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.handleAcceptedSocket(NioServerSocketPipelineSink.java:81)&lt;BR /&gt;at org.jboss.netty.channel.socket.nio.NioServerSocketPipelineSink.eventSunk(NioServerSocketPipelineSink.java:36)&lt;BR /&gt;at org.jboss.netty.handler.codec.oneone.OneToOneEncoder.handleDownstream(OneToOneEncoder.java:54)&lt;BR /&gt;at org.jboss.netty.channel.Channels.close(Channels.java:812)&lt;BR /&gt;at org.jboss.netty.channel.AbstractChannel.close(AbstractChannel.java:197)&lt;BR /&gt;at org.apache.avro.ipc.NettyTransportCodec$NettyFrameDecoder.decodePackHeader(NettyTransportCodec.java:166)&lt;BR /&gt;at org.apache.avro.ipc.NettyTransportCodec$NettyFrameDecoder.decode(NettyTransportCodec.java:139)&lt;BR /&gt;at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)&lt;BR /&gt;at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:310)&lt;BR /&gt;at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)&lt;BR /&gt;at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)&lt;BR /&gt;at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)&lt;BR /&gt;at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)&lt;BR /&gt;at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)&lt;BR /&gt;at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)&lt;BR /&gt;at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:722)&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.sink.hdfs.HDFSEventSink&lt;BR /&gt;&lt;BR /&gt;HDFS IO error&lt;BR /&gt;java.io.EOFException: End of File Exception between local host is: "HadoopF02.hadoopslave1.com/192.168.4.110"; destination host is: "HadoopF02.hadoopslave1.com":8021; : java.io.EOFException; For more details see: &lt;A href="http://wiki.apache.org/hadoop/EOFException" target="_blank"&gt;http://wiki.apache.org/hadoop/EOFException&lt;/A&gt;&lt;BR /&gt;at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)&lt;BR /&gt;at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)&lt;BR /&gt;at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)&lt;BR /&gt;at java.lang.reflect.Constructor.newInstance(Constructor.java:525)&lt;BR /&gt;at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)&lt;BR /&gt;at 
org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:764)&lt;BR /&gt;at org.apache.hadoop.ipc.Client.call(Client.java:1472)&lt;BR /&gt;at org.apache.hadoop.ipc.Client.call(Client.java:1399)&lt;BR /&gt;at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)&lt;BR /&gt;at $Proxy21.create(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.create(ClientNamenodeProtocolTranslatorPB.java:295)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)&lt;BR /&gt;at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)&lt;BR /&gt;at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)&lt;BR /&gt;at java.lang.reflect.Method.invoke(Method.java:601)&lt;BR /&gt;at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)&lt;BR /&gt;at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)&lt;BR /&gt;at $Proxy22.create(Unknown Source)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1738)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1662)&lt;BR /&gt;at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1587)&lt;BR /&gt;at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:397)&lt;BR /&gt;at org.apache.hadoop.hdfs.DistributedFileSystem$6.doCall(DistributedFileSystem.java:393)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)&lt;BR /&gt;at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:393)&lt;BR /&gt;at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:337)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:908)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:889)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:786)&lt;BR /&gt;at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:775)&lt;BR /&gt;at org.apache.flume.sink.hdfs.HDFSDataStream.doOpen(HDFSDataStream.java:86)&lt;BR /&gt;at org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:113)&lt;BR /&gt;at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:246)&lt;BR /&gt;at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:235)&lt;BR /&gt;at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679)&lt;BR /&gt;at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)&lt;BR /&gt;at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676)&lt;BR /&gt;at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)&lt;BR /&gt;at java.util.concurrent.FutureTask.run(FutureTask.java:166)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:722)&lt;BR /&gt;Caused by: java.io.EOFException&lt;BR /&gt;at java.io.DataInputStream.readInt(DataInputStream.java:392)&lt;BR /&gt;at org.apache.hadoop.ipc.Client$Connection.receiveRpcResponse(Client.java:1071)&lt;BR /&gt;at org.apache.hadoop.ipc.Client$Connection.run(Client.java:966)&lt;BR /&gt;&lt;BR /&gt;org.apache.flume.sink.hdfs.BucketWriter Creating hdfs://192.168.4.110:8021/user/hadoop/flumelogs//Flumedata1.1463466791008.tmp&lt;BR /&gt;&lt;BR 
/&gt;org.apache.avro.ipc.NettyServer [id: 0x7a41116a, /192.168.4.110:38630 :&amp;gt; /192.168.4.110:8021] UNBOUND&lt;BR /&gt;&lt;BR /&gt;org.apache.avro.ipc.NettyServer [id: 0x7a41116a, /192.168.4.110:38630 :&amp;gt; /192.168.4.110:8021] CLOSED&lt;BR /&gt;&lt;BR /&gt;org.apache.avro.ipc.NettyServer&lt;BR /&gt;Connection to /192.168.4.110:38630 disconnected.&lt;BR /&gt;&lt;BR /&gt;org.apache.avro.ipc.NettyServer&lt;BR /&gt;&lt;BR /&gt;Unexpected exception from downstream.&lt;BR /&gt;org.apache.avro.AvroRuntimeException: Excessively large list allocation request detected: 150994944 items! Connection closed.&lt;BR /&gt;at org.apache.avro.ipc.NettyTransportCodec$NettyFrameDecoder.decodePackHeader(NettyTransportCodec.java:167)&lt;BR /&gt;at org.apache.avro.ipc.NettyTransportCodec$NettyFrameDecoder.decode(NettyTransportCodec.java:139)&lt;BR /&gt;at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)&lt;BR /&gt;at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:310)&lt;BR /&gt;at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)&lt;BR /&gt;at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)&lt;BR /&gt;at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)&lt;BR /&gt;at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:107)&lt;BR /&gt;at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)&lt;BR /&gt;at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:88)&lt;BR /&gt;at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)&lt;BR /&gt;at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)&lt;BR /&gt;at java.lang.Thread.run(Thread.java:722)&lt;BR /&gt;&lt;BR /&gt;It continues like this.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;Also, this Flume agent is running on a data node; is that a problem?&lt;BR /&gt;&lt;BR /&gt;Please help. Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Tue, 17 May 2016 07:12:55 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/40907#M27140</guid>
      <dc:creator>Tejaponnaluru</dc:creator>
      <dc:date>2016-05-17T07:12:55Z</dc:date>
    </item>
    <item>
      <title>Re: While running flume agent facing some error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/41062#M27141</link>
      <description>&lt;P&gt;This error:&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;org.apache.avro.AvroRuntimeException: Excessively large list allocation request detected: 150994944 items! Connection closed.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Is usually caused when something upstream is trying to send non-avro data to the avro source. &amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;In your source config, you are specifying the avro source with the same port as the hdfs namenode port:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;SPAN&gt;tier1.sources.source1.type = avro&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;tier1.sources.source1.bind= 192.168.4.110&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;tier1.sources.source1.port= 8021&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;tier1.sources.source1.channels = channel1&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;tier1.channels.channel1.type= memory&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;tier1.sinks.sink1.type = hdfs&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;tier1.sinks.sink1.channel = channel1&lt;/SPAN&gt;&lt;BR /&gt;&lt;SPAN&gt;tier1.sinks.sink1.hdfs.path = hdfs://192.168.4.110:8021/user/hadoop/flumelogs/&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;SPAN&gt;I believe that will cause issues in your configuration, as the sink will try to connect to the avro source port as it thinks thats the namenode port. &amp;nbsp;If your namenode port is indeed 8021, then you need to change your avro source port to be something different.&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;&lt;SPAN&gt;-pd&lt;/SPAN&gt;&lt;/SPAN&gt;&lt;/P&gt;</description>
      <pubDate>Thu, 19 May 2016 18:16:39 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/41062#M27141</guid>
      <dc:creator>pdvorak</dc:creator>
      <dc:date>2016-05-19T18:16:39Z</dc:date>
    </item>
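    <!-- Illustrative sketch of the separation pdvorak describes above: the Avro source listens on its own
         free port, while hdfs.path points at the NameNode's RPC address. Port 4141 for the source and
         8020 for the NameNode are assumed example values, not confirmed for this cluster, and
         <namenode-host> is a placeholder.

         tier1.sources.source1.type = avro
         tier1.sources.source1.bind = 192.168.4.110
         tier1.sources.source1.port = 4141
         tier1.sinks.sink1.type = hdfs
         tier1.sinks.sink1.hdfs.path = hdfs://<namenode-host>:8020/user/hadoop/flumelogs/
    -->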
    <item>
      <title>Re: While running flume agent facing some error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/41080#M27142</link>
      <description>&lt;P&gt;Hi, I tried what you suggested.&lt;BR /&gt;&lt;BR /&gt;I changed the port number in the hdfs path to&lt;BR /&gt;hdfs://192.168.4.110:8020/user/hadoop/flumelogs/&lt;BR /&gt;but I am facing the same issue.&lt;BR /&gt;I'm thinking the Flume sink is not writing fast enough due to a network issue or low bandwidth; is that correct?&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;And I have one more doubt:&amp;nbsp;I installed Flume on a data node; is that causing any problem?&lt;/P&gt;&lt;P&gt;&lt;BR /&gt;Thanks in advance.&lt;/P&gt;</description>
      <pubDate>Fri, 20 May 2016 07:11:00 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/41080#M27142</guid>
      <dc:creator>Tejaponnaluru</dc:creator>
      <dc:date>2016-05-20T07:11:00Z</dc:date>
    </item>
    <item>
      <title>Re: While running flume agent facing some error</title>
      <link>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/41086#M27143</link>
      <description>&lt;P&gt;Hi, guys&lt;BR /&gt;&lt;BR /&gt;It's working fine now. I changed the IP address in the sink path and it's writing now.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;I had hdfs://192.168.4.110:8020/user/hadoop/flumelogs/&amp;nbsp; (this IP is the data node IP) and I changed it to the master node IP:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;&lt;P&gt;hdfs://192.168.4.112:8020/user/hadoop/flumelogs/&amp;nbsp; and now it is working fine. My thinking is that Flume can't write directly to a data node.&lt;/P&gt;&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 20 May 2016 10:37:44 GMT</pubDate>
      <guid>https://community.cloudera.com/t5/Archives-of-Support-Questions/While-running-flume-agent-facing-some-error/m-p/41086#M27143</guid>
      <dc:creator>Tejaponnaluru</dc:creator>
      <dc:date>2016-05-20T10:37:44Z</dc:date>
    </item>
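    <!-- The resolution above, expressed as the flume.conf line it implies: hdfs.path must name the
         NameNode (master) host and its RPC port rather than the DataNode where the agent runs. The
         address below is the one quoted in the final post.

         tier1.sinks.sink1.hdfs.path = hdfs://192.168.4.112:8020/user/hadoop/flumelogs/
    -->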
  </channel>
</rss>

