Member since: 01-09-2014
Posts: 283
Kudos Received: 70
Solutions: 50
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1703 | 06-19-2019 07:50 AM
 | 2726 | 05-01-2019 08:07 AM
 | 2775 | 04-10-2019 08:49 AM
 | 2677 | 03-20-2019 09:30 AM
 | 2359 | 01-23-2019 10:58 AM
05-26-2016
05:15 AM
Thank you for the detailed reply. I have set up Flume as a service on the edge node and it is working as expected.
05-25-2016
10:38 AM
This documentation covers stopping and starting Flume when not using Cloudera Manager. It assumes you are running packages, not parcels, on this edge node: http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_ig_flume_run.html
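For package-based installs, the linked page describes managing the agent via its init script. A sketch of the typical commands, assuming the CDH flume-ng packages (and their `flume-ng-agent` service) are installed as described in those docs:

```shell
# Requires the CDH flume-ng package install (not parcels).
sudo service flume-ng-agent start
sudo service flume-ng-agent status
sudo service flume-ng-agent stop
```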
05-20-2016
03:11 AM
Hi guys, it's working fine now. I changed the IP address in the sink path and it's writing. The original path, hdfs://192.168.4.110:8020/user/hadoop/flumelogs/, used a DataNode IP; I changed it to the master (NameNode) IP, hdfs://192.168.4.112:8020/user/hadoop/flumelogs/, and now it works fine. As I understand it, Flume can't write directly to a DataNode.
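The fix above works because HDFS clients (including Flume's HDFS sink) must contact the NameNode first; the NameNode then directs the block writes to DataNodes. A minimal sketch of the relevant sink setting, with the agent and sink names being assumptions for illustration (only the path comes from the post):

```properties
# Hypothetical agent/sink names. hdfs.path must point at the
# NameNode's RPC address (fs.defaultFS), not at a DataNode.
agent.sinks.hdfs_sink.type = hdfs
agent.sinks.hdfs_sink.hdfs.path = hdfs://192.168.4.112:8020/user/hadoop/flumelogs/
```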
05-06-2016
08:58 AM
Thank you, that worked and helped me.
04-12-2016
09:08 PM
Hi, thanks for your reply. It's working fine; I actually got it working earlier but forgot to update my answer. As you said, I needed to remove the single quotes and slashes, and then it worked. In the end I matched directly: instead of the pattern ^\s*\#+|\#+$ I matched ## and replaced it with the pipe symbol.
04-12-2016
08:12 AM
I'm not sure that I can change all the sources in order to post to all my Flume agents, but this is an interesting solution. Thank you!
04-03-2016
10:33 PM
Hi, as you said, I'm using the spooldir source and it's working fine. But one problem: Flume is generating many files with few records each, whereas I want just one or two files. As I said before, I have a log file with 500 records that I want written out as a single file. This is just a test case; in the real scenario I have lakhs (hundreds of thousands) of records in one log file. Please help. My config file is the same as the one I shared above with the spooldir source.
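The many-small-files behavior above comes from the HDFS sink's default roll settings (it rolls every 30 seconds and every 10 events by default). A hedged sketch of how the rolls could be tuned for the 500-record test case; the agent and sink names here are assumptions, not from the poster's config:

```properties
# Hypothetical agent/sink names. Disable time- and size-based rolls,
# then roll only after a fixed number of events.
agent.sinks.hdfs_sink.hdfs.rollInterval = 0
agent.sinks.hdfs_sink.hdfs.rollSize = 0
agent.sinks.hdfs_sink.hdfs.rollCount = 500
```

With rollCount = 500 and the other rolls disabled, a 500-record input would land in a single output file.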
03-31-2016
12:41 PM
The default maxLineLength for the LINE deserializer is 2048: http://archive.cloudera.com/cdh5/cdh/5/flume-ng/FlumeUserGuide.html#line You can set the following to accommodate your large events: agent.sources.axon_source.deserializer.maxLineLength=10000
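For context, a sketch of where that setting lives in a full source definition. Only the source name (axon_source) and the maxLineLength value come from the post above; the source type, spool directory, and channel name are assumptions for illustration:

```properties
agent.sources.axon_source.type = spooldir                 # assumed source type
agent.sources.axon_source.spoolDir = /var/log/spool       # assumed path
agent.sources.axon_source.deserializer = LINE             # the default, shown explicitly
agent.sources.axon_source.deserializer.maxLineLength = 10000
agent.sources.axon_source.channels = mem_channel          # assumed channel name
```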
02-11-2016
04:02 PM
1 Kudo
The Exec source is invoked via ProcessBuilder: https://docs.oracle.com/javase/7/docs/api/java/lang/ProcessBuilder.html It inherits the environment of the currently running Flume process.
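The inheritance mentioned above can be seen directly in the ProcessBuilder API: a new builder's environment map starts as a copy of the parent JVM's environment, which is why a command launched by the Exec source sees the same variables as the Flume process. A small self-contained demo (class and variable names are illustrative):

```java
import java.util.Map;

public class ExecEnvDemo {
    public static void main(String[] args) {
        // A fresh ProcessBuilder's environment is initialized from the
        // parent process's environment (System.getenv()), so the child
        // command inherits the launching JVM's variables by default.
        ProcessBuilder pb = new ProcessBuilder("echo", "hello");
        Map<String, String> childEnv = pb.environment();
        System.out.println(childEnv.equals(System.getenv())); // prints: true
    }
}
```

Modifying `pb.environment()` before calling `start()` changes only that child's environment, not the parent's.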
01-05-2016
09:40 AM
1 Kudo
Using CloudSolrServer instead of HttpSolrServer allows the solrj client to load-balance across the available Solr servers, and is recommended in a Cloudera Search environment. The "No live SolrServers available to handle this request" error indicates a problem with the collection you are trying to update. I would suggest reviewing the currently online replicas via http://solr.server:8983/#/~cloud; there you should be able to see whether all the replicas for your collection are online. Each shard needs at least one live replica to act as leader (updates go to leaders and are then distributed to the associated replicas). -PD