Flume Agent not working in AWS

Hi guys,

I'm trying to run Flume in AWS because I want to collect log files into an S3 bucket. I configured Flume in AWS, but when I run it, it throws the following error:

2016-06-01 14:23:58,675 (main) [DEBUG - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:136)] Channels:channel1
2016-06-01 14:23:58,675 (main) [DEBUG - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:137)] Sinks sink1
2016-06-01 14:23:58,675 (main) [DEBUG - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:138)] Sources source1
2016-06-01 14:23:58,675 (main) [INFO - org.apache.flume.conf.FlumeConfiguration.validateConfiguration(FlumeConfiguration.java:141)] Post-validation flume configuration contains configuration for agents: [tier1]
2016-06-01 14:23:58,677 (main) [WARN - org.apache.flume.node.AbstractConfigurationProvider.getConfiguration(AbstractConfigurationProvider.java:133)] No configuration found for this host:agent
2016-06-01 14:23:58,677 (main) [INFO - org.apache.flume.node.Application.startAllComponents(Application.java:138)] Starting new configuration:{ sourceRunners:{} sinkRunners:{} channels:{} }
2016-06-01 14:23:58,697 (agent-shutdown-hook) [INFO - org.apache.flume.lifecycle.LifecycleSupervisor.stop(LifecycleSupervisor.java:79)] Stopping lifecycle supervisor 10
 
If anyone is familiar with this, please help.

Thanks in advance.
1 ACCEPTED SOLUTION

Mentor
Your configuration is written for an agent named "tier1", but your agent is
instead starting under the name "agent". As a result it does not pick up any
of the configs, because they belong to a different agent name.

Are you using CM to manage Flume agents? If so, you can specify per-agent
names on the CM -> Flume -> Configuration page; otherwise, alter the agent's
config to use "agent" in place of "tier1".

If you're not using CM, then please read:
http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_flume_files.html#topic_12_8,
i.e. edit the file /etc/default/flume-ng-agent to change the agent's name
(currently "agent") to match the config "tier1", or conversely edit the
configuration under /etc/flume-ng/conf/flume.conf to use the agent's name
"agent" in place of "tier1".

3 REPLIES

Hi Harsh,

It's working fine now; I just forgot to post an update. As you said, the agent name in the start command was the problem. I changed it to "tier1" and it's working well.

Thanks a lot for your reply.

Hi Harsh,

I configured Flume on AWS to write to an S3 bucket, but when I run the Flume agent it throws an error.

My Flume sink config:

tier1.sinks.sink1.type = hdfs
tier1.sinks.sink1.channel = channel1
tier1.sinks.sink1.hdfs.path = s3n://ACCESS_KEY_ID:SECRET_ACCESS_KEY@bucketname/
tier1.sinks.sink1.hdfs.filePrefix = Flumedata
tier1.sinks.sink1.hdfs.fileType = DataStream
tier1.sinks.sink1.hdfs.writeFormat = Text
tier1.sinks.sink1.hdfs.batchSize = 100
tier1.sinks.sink1.hdfs.rollCount = 0
tier1.sinks.sink1.hdfs.rollSize = 73060835
tier1.sinks.sink1.hdfs.rollInterval = 0
#tier1.sinks.sink1.hdfs.idleTimeout = 180
#tier1.sinks.sink1.hdfs.closeTries = 0
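
As an aside, a quick way to smoke-test the bucket access outside Flume,
assuming the Hadoop client is on the box (placeholder credential values
shown; fs.s3n.awsAccessKeyId / fs.s3n.awsSecretAccessKey are the stock
s3n credential properties):

# Hypothetical smoke test: list the bucket with explicit s3n credentials.
hadoop fs -D fs.s3n.awsAccessKeyId=YOUR_ACCESS_KEY_ID \
          -D fs.s3n.awsSecretAccessKey=YOUR_SECRET_ACCESS_KEY \
          -ls s3n://bucketname/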

And the error:
2016-06-01 18:17:53,737 (SinkRunner-PollingRunner-DefaultSinkProcessor) [ERROR - org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:459)] process failed
java.lang.NoSuchMethodError: org.apache.http.impl.client.DefaultHttpClient.execute(Lorg/apache/http/client/methods/HttpUriRequest;)Lorg/apache/http/client/methods/CloseableHttpResponse;
at amazon.emr.metrics.ClientUtil.getInstanceId(ClientUtil.java:115)
at amazon.emr.metrics.MetricsConfig.getInstanceId(MetricsConfig.java:294)
at amazon.emr.metrics.MetricsConfig.<init>(MetricsConfig.java:195)
at amazon.emr.metrics.MetricsConfig.<init>(MetricsConfig.java:182)
at amazon.emr.metrics.MetricsConfig.<init>(MetricsConfig.java:177)
at amazon.emr.metrics.MetricsSaver.ensureSingleton(MetricsSaver.java:652)
at amazon.emr.metrics.MetricsSaver.addInternal(MetricsSaver.java:332)
at amazon.emr.metrics.MetricsSaver.addValue(MetricsSaver.java:178)
at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1667)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1692)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1627)
at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:444)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:791)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:780)
at org.apache.flume.sink.hdfs.HDFSDataStream.doOpen(HDFSDataStream.java:86)
at org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:113)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:246)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:235)
at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679)
at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Exception in thread "SinkRunner-PollingRunner-DefaultSinkProcessor" java.lang.NoSuchMethodError: org.apache.http.impl.client.DefaultHttpClient.execute(Lorg/apache/http/client/methods/HttpUriRequest;)Lorg/apache/http/client/methods/CloseableHttpResponse;
at amazon.emr.metrics.ClientUtil.getInstanceId(ClientUtil.java:115)
at amazon.emr.metrics.MetricsConfig.getInstanceId(MetricsConfig.java:294)
at amazon.emr.metrics.MetricsConfig.<init>(MetricsConfig.java:195)
at amazon.emr.metrics.MetricsConfig.<init>(MetricsConfig.java:182)
at amazon.emr.metrics.MetricsConfig.<init>(MetricsConfig.java:177)
at amazon.emr.metrics.MetricsSaver.ensureSingleton(MetricsSaver.java:652)
at amazon.emr.metrics.MetricsSaver.addInternal(MetricsSaver.java:332)
at amazon.emr.metrics.MetricsSaver.addValue(MetricsSaver.java:178)
at org.apache.hadoop.hdfs.DFSOutputStream.newStreamForCreate(DFSOutputStream.java:1667)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1692)
at org.apache.hadoop.hdfs.DFSClient.create(DFSClient.java:1627)
at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:448)
at org.apache.hadoop.hdfs.DistributedFileSystem$7.doCall(DistributedFileSystem.java:444)
at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:444)
at org.apache.hadoop.hdfs.DistributedFileSystem.create(DistributedFileSystem.java:387)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:913)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:894)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:791)
at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:780)
at org.apache.flume.sink.hdfs.HDFSDataStream.doOpen(HDFSDataStream.java:86)
at org.apache.flume.sink.hdfs.HDFSDataStream.open(HDFSDataStream.java:113)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:246)
at org.apache.flume.sink.hdfs.BucketWriter$1.call(BucketWriter.java:235)
at org.apache.flume.sink.hdfs.BucketWriter$9$1.run(BucketWriter.java:679)
at org.apache.flume.auth.SimpleAuthenticator.execute(SimpleAuthenticator.java:50)
at org.apache.flume.sink.hdfs.BucketWriter$9.call(BucketWriter.java:676)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
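
From the failing signature, this looks like a jar version conflict:
DefaultHttpClient.execute() returning CloseableHttpResponse only exists in
httpclient 4.3 and later, so an older httpclient jar on the Flume classpath
would explain the NoSuchMethodError. A check I can run (the install paths
are assumptions for a typical setup):

# Look for duplicate or old httpclient/httpcore jars that Flume might be loading.
find /usr/lib/flume-ng /usr/lib/hadoop* -name 'httpclient-*.jar' 2>/dev/null
find /usr/lib/flume-ng /usr/lib/hadoop* -name 'httpcore-*.jar' 2>/dev/null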

Please help if you are familiar with this.

Thanks in advance.