New Contributor
Posts: 7
Registered: 12-14-2018

Some questions with Flume

Hi,
I want to use Flume to send a large number of files to Hadoop, and I had the idea of using the spooling directory source, but I have some questions:
1. When sending files to Hadoop, the files in the spool directory are not moved anywhere, which makes me wonder: if a new file arrives in the spool directory, how does Flume distinguish old files from new ones?
2. After Flume uploads a file to Hadoop, will the file in the spool directory be moved to another folder? Or does Flume have a mechanism to back up files?
3. I know that Flume has some properties that work with regexes, but does Flume support sending files to Hadoop and sorting them into regex-based directories? If so, how do I do it?
4. Does Flume support sending files to Hadoop and categorizing them into directories based on the date sent? (I have read about this in the HDFS sink documentation, but when I tried it, it failed.)
5. While using Flume to send files to Hadoop, can I modify the file contents, such as adding the file name into the data stream, or changing ";" into "|"?
6. Is there any API or tool I can use to monitor Flume file transfers to Hadoop? For example, during a transfer, can I see how many files have been transferred to Hadoop, how many succeeded, and how many failed?
7. Does Flume record transaction logs for Hadoop transfers? For example, how many files have been uploaded to Hadoop, and so on.
I know I have asked a lot, but I am really confused by Flume and I really need your help. I look forward to your help. Thanks.

Cloudera Employee
Posts: 56
Registered: 04-24-2017

Re: Some questions with Flume

Hi,

Please find the answers below:

1. When sending files to Hadoop, the files in the spool directory are not moved anywhere, which makes me wonder: if a new file arrives in the spool directory, how does Flume distinguish old files from new ones?

Ans: Completely ingested files are renamed in place with a suffix; see the following spooling directory source property:

fileSuffix (default: .COMPLETED): suffix to append to completely ingested files
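
For instance, a minimal sketch of a spooling directory source using this property (the agent/component names and the spool path here are placeholders):

# Minimal spooling-directory source; 'agent1' and 'src1' are placeholder names
agent1.sources = src1
agent1.sources.src1.type = spooldir
agent1.sources.src1.spoolDir = /var/spool/flume
# Once a file is fully ingested, it is renamed in place to <name>.COMPLETED
agent1.sources.src1.fileSuffix = .COMPLETED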

 

2. After Flume uploads a file to Hadoop, will the file in the spool directory be moved to another folder? Or does Flume have a mechanism to back up files?

Ans: Same as above: it is renamed in place with the suffix.

3. I know that Flume has some properties that work with regexes, but does Flume support sending files to Hadoop and sorting them into regex-based directories? If so, how do I do it?
Ans: You can use an HDFS directory path containing formatting escape sequences, which the HDFS sink replaces to generate the directory/file name in which to store the events.

For example, to store files in different directories based on the date:

hdfs.path = /flume/%Y-%m-%d

For more detail on the escape sequences, see the following link:

https://flume.apache.org/FlumeUserGuide.html#hdfs-sink
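
As a slightly fuller sketch with the same placeholder names (and noting the hdfs.useLocalTimeStamp property, which lets the sink use the agent's clock when events carry no timestamp header):

agent1.sinks = k1
agent1.sinks.k1.type = hdfs
agent1.sinks.k1.hdfs.path = /flume/%Y-%m-%d
# %Y-%m-%d is normally resolved from the event's 'timestamp' header;
# with useLocalTimeStamp the sink falls back to the agent's local time
agent1.sinks.k1.hdfs.useLocalTimeStamp = true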

 

4. Does Flume support sending files to Hadoop and categorizing them into directories based on the date sent? (I have read about this in the HDFS sink documentation, but when I tried it, it failed.)
Ans: If you post the configuration you are using, I can try to fix the issues with it.

 

5. While using Flume to send files to Hadoop, can I modify the file contents, such as adding the file name into the data stream, or changing ";" into "|"?
Ans:

If you just want to add the file name to the data, try the following configuration for the spooldir source type:

basenameHeader (default: false): whether to add a header storing the basename of the file
basenameHeaderKey (default: basename): header key to use when appending the basename of the file to the event header

If you want to do a regex replace, you will have to use the Search and Replace Interceptor, where you can specify the search regex and the replacement string. See the following link:

https://flume.apache.org/FlumeUserGuide.html#search-and-replace-interceptor
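
Putting both together, a sketch with the same placeholder names (the search_replace type and its searchPattern/replaceString properties are documented at the link above):

agent1.sources.src1.basenameHeader = true
agent1.sources.src1.basenameHeaderKey = basename
agent1.sources.src1.interceptors = i1
# Replace every ';' in the event body with '|'
agent1.sources.src1.interceptors.i1.type = search_replace
agent1.sources.src1.interceptors.i1.searchPattern = ;
agent1.sources.src1.interceptors.i1.replaceString = |

Note that basenameHeader only attaches the file name as an event header; writing it into the event body itself would need a custom interceptor or serializer.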

 

6. Is there any API or tool I can use to monitor Flume file transfers to Hadoop? For example, during a transfer, can I see how many files have been transferred to Hadoop, how many succeeded, and how many failed?
Ans: I am not sure whether anything is available specifically for spooldir, but you should look at the Monitoring section and see if you can use something there:
https://flume.apache.org/FlumeUserGuide.html#monitoring
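
For example, the JSON reporting described there is enabled with Java system properties on the flume-ng command line (the port here is just an example):

flume-ng agent -n tier1 -c conf -f flume.conf \
  -Dflume.monitoring.type=http \
  -Dflume.monitoring.port=34545

Source, channel, and sink counters (events accepted, drained, and so on) are then served as JSON at http://<agent-host>:34545/metrics; note that they count events, not files.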


7. Does Flume record transaction logs for Hadoop transfers? For example, how many files have been uploaded to Hadoop, and so on.
Ans: I don't think so, but it would need more research to see whether you can track exactly which files have been written. You can check your spool directory for the files already sent (they carry the .COMPLETED suffix).

Thanks

Bimal

New Contributor
Posts: 7
Registered: 12-14-2018

Re: Some questions with Flume

Thank you very much for helping me, but I have a few follow-up questions:
1. If the files are not moved to another folder (as in questions 1 and 2 above), what do I do when the folder accumulates too many files, for example 1 billion files, and the server is full? Maybe I have to reconfigure with another spool folder?
2. This is the configuration file I wanted to mention in question 4:

# Sources, channels, and sinks are defined per
# agent name, in this case 'tier1'.
tier1.sources  = source1
tier1.channels = channel1
tier1.sinks    = sink1

# For each source, channel, and sink, set
# standard properties.
# source details
tier1.sources.source1.type     = spooldir
tier1.sources.source1.spoolDir = /data/diem
tier1.sources.source1.fileHeader = false
tier1.sources.source1.fileSuffix  = .COMPLETED
tier1.sources.source1.channels = channel1
tier1.sources.source1.interceptors = i1
tier1.sources.source1.interceptors.i1.type = regex_extractor
tier1.sources.source1.interceptors.i1.regex = \\[(.*?)\\]
tier1.sources.source1.interceptors.i1.serializers = s1
tier1.sources.source1.interceptors.i1.serializers.s1.type = org.apache.flume.interceptor.RegexExtractorInterceptorMillisSerializer
tier1.sources.source1.interceptor.serializers.s1.name = timestamp
tier1.sources.source1.serializers.s1.pattern = yyyy-MM-dd HH:mm:ss

# channel details
tier1.channels.channel1.type   = file
tier1.channels.channel1.capacity = 200000
tier1.channels.channel1.transactionCapacity = 1000

# sink details
tier1.sinks.sink1.type         = HDFS
tier1.sinks.sink1.fileType = DataStream
tier1.sinks.sink1.hdfs.writeFormat  = Text
tier1.sinks.sink1.channel      = channel1
tier1.sinks.sink1.hdfs.path = hdfs://localhost:8020/user/cloudera/testFolder/%y-%m-%d/%H%M/%S
tier1.sinks.sink1.round = true
tier1.sinks.sink1.roundValue = 10
tier1.sinks.sink1.roundUnit = minute
tier1.sinks.sink1.hdfs.rollSize = 268435456
tier1.sinks.sink1.rollInterval = 0
tier1.sinks.sink1.hdfs.batchSize = 10000

And this is the error in the log file:

2018-12-24 11:56:03,065 ERROR org.apache.flume.sink.hdfs.HDFSEventSink: process failed
java.lang.NullPointerException: Expected timestamp in the Flume event headers, but it was null
    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
    at org.apache.flume.formatter.output.BucketPath.replaceShorthand(BucketPath.java:251)
    at org.apache.flume.formatter.output.BucketPath.escapeString(BucketPath.java:460)
    at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:368)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
    at java.lang.Thread.run(Thread.java:745)
2018-12-24 11:56:03,069 ERROR org.apache.flume.SinkRunner: Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: java.lang.NullPointerException: Expected timestamp in the Flume event headers, but it was null
    at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:451)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException: Expected timestamp in the Flume event headers, but it was null
    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
    at org.apache.flume.formatter.output.BucketPath.replaceShorthand(BucketPath.java:251)
    at org.apache.flume.formatter.output.BucketPath.escapeString(BucketPath.java:460)
    at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:368)
    ... 3 more

Once again, thank you for helping me answer these questions.

Cloudera Employee
Posts: 56
Registered: 04-24-2017

Re: Some questions with Flume

Hi,

1. If the files are not moved to another folder (as in questions 1 and 2 above), what do I do when the folder accumulates too many files, for example 1 billion files, and the server is full? Maybe I have to reconfigure with another spool folder?

Ans: You can configure Flume to delete completed files so that they do not keep accumulating in your directory:


deletePolicy (default: never): when to delete completed files, never or immediate
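
For example, with the component names from your configuration:

tier1.sources.source1.deletePolicy = immediate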

 

2. The error you are getting is because the regex and the pattern do not work together correctly. This combination works:

 

tier1.sources.source1.interceptors.i1.regex = ^(?:\\[)(\\d\\d\\d\\d-\\d\\d-\\d\\d\\s\\d\\d:\\d\\d:\\d\\d)
tier1.sources.source1.interceptors.i1.serializers.s1.pattern = yyyy-MM-dd HH:mm:ss
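
For completeness, here is a sketch of the full interceptor stanza with corrected property paths. Two keys in your original configuration (tier1.sources.source1.interceptor.serializers.s1.name and tier1.sources.source1.serializers.s1.pattern) drop the interceptors.i1 part, so those settings would not take effect:

tier1.sources.source1.interceptors = i1
tier1.sources.source1.interceptors.i1.type = regex_extractor
tier1.sources.source1.interceptors.i1.regex = ^(?:\\[)(\\d\\d\\d\\d-\\d\\d-\\d\\d\\s\\d\\d:\\d\\d:\\d\\d)
tier1.sources.source1.interceptors.i1.serializers = s1
tier1.sources.source1.interceptors.i1.serializers.s1.type = org.apache.flume.interceptor.RegexExtractorInterceptorMillisSerializer
# The header must be named 'timestamp' for the HDFS sink's date escapes to find it
tier1.sources.source1.interceptors.i1.serializers.s1.name = timestamp
tier1.sources.source1.interceptors.i1.serializers.s1.pattern = yyyy-MM-dd HH:mm:ss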

 

With the above regex we match anything starting with [dddd-dd-dd dd:dd:dd, discard the leading [, and capture the rest of the pattern. The captured group matches the pattern yyyy-MM-dd HH:mm:ss, so it is correctly translated into a timestamp.

 

So [2012-10-18 18:47:57] ... will be interpreted properly and converted into a timestamp.

 

If the regex and the pattern do not match up, you will not get a timestamp in the header. With your regex, the captured group does not match the pattern yyyy-MM-dd HH:mm:ss, so the timestamp header comes out null and you get the exception.

 

Please let me know if you have any questions.

 

Regards
Bimal

Cloudera Employee
Posts: 255
Registered: 01-09-2014

Re: Some questions with Flume

A word of caution:
Flume isn't really designed for transferring large files. It would be better to use Oozie or an NFS gateway with cron to transfer files on a regular schedule, especially if you want each file preserved in its entirety. One thing you will observe is that if Flume hits any temporary transmission errors, it will attempt to resend parts of those files, resulting in duplicates (a standard and expected scenario when using Flume), so the resulting files in HDFS would contain those duplicates. Additionally, when you do have interruptions, existing HDFS files are closed and new ones are opened.

-pd
New Contributor
Posts: 7
Registered: 12-14-2018

Re: Some questions with Flume

I got it, thank you very much :D