Some questions with Flume
- Labels: Apache Flume, HDFS
Created on 12-23-2018 11:29 AM - edited 09-16-2022 07:00 AM
Hi,
I want to use Flume to send a large number of files to Hadoop, and I had the idea of using the spooling directory source, but I have some questions:
1. When sending files to Hadoop, the files in the spool directory are not moved anywhere, which makes me wonder: if a new file arrives in the spool directory, how does Flume tell old files apart from new ones?
2. After Flume uploads a file to Hadoop, is the file in the spool directory moved to another folder? Or does Flume have a mechanism for backing up files?
3. I know that Flume has some properties for working with regexes, but does Flume support sending files to Hadoop and sorting them into regex-based directories? If so, how do I do it?
4. Does Flume support sending files to Hadoop and categorizing them into directories based on the date they were sent? (I have read that part of the HDFS sink documentation, but when I tried it, it failed.)
5. While using Flume to send files to Hadoop, can I modify the file contents, for example adding the file name to the data stream or changing ";" to "|"?
6. Is there any API or tool I can use to monitor Flume file transfers to Hadoop? For example, during a transfer, can I see how many files have been transferred to Hadoop, how many were submitted successfully, and how many failed?
7. Does Flume keep transaction logs for Hadoop, for example how many files have been uploaded to Hadoop, ...?
I know I am asking a lot, but I am really confused about Flume and would appreciate your help. Thanks.
Created 12-23-2018 01:06 PM
Hi,
Please find the answers below:
1. When sending files to Hadoop, the files in the spool directory are not moved anywhere, which makes me wonder: if a new file arrives in the spool directory, how does Flume tell old files apart from new ones?
Ans: The files get renamed: a suffix is added to each completely ingested file in the spool directory. See the following spooldir source property (name, default, description):
fileSuffix .COMPLETED Suffix to append to completely ingested files
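For example, a minimal sketch of a spooldir source relying on this behavior (the agent/component names and directory here are placeholders, not your actual setup):
# Sketch only: a1, r1, c1 and the spool directory are placeholder names.
a1.sources = r1
a1.channels = c1
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /var/spool/flume
# Default value shown explicitly: each fully ingested file is renamed in
# place with this suffix rather than being moved to another folder.
a1.sources.r1.fileSuffix = .COMPLETED
a1.sources.r1.channels = c1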
2. After Flume uploads a file to Hadoop, is the file in the spool directory moved to another folder? Or does Flume have a mechanism for backing up files?
Ans: Same as above: the file is renamed with the suffix, not moved to another folder.
3. I know that Flume has some properties for working with regexes, but does Flume support sending files to Hadoop and sorting them into regex-based directories? If so, how do I do it?
Ans: You can use an HDFS directory path containing formatting escape sequences that will be replaced by the HDFS sink to generate the directory/file name used to store the events.
For example, to store the files in different directories based on the date:
hdfs.path = /flume/%Y-%m-%d
For more detail on the escape sequences, see the following link:
https://flume.apache.org/FlumeUserGuide.html#hdfs-sink
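As a rough sketch (placeholder names again), a date-bucketed HDFS sink could look like this; note the events must carry a timestamp header for the %Y-%m-%d escapes to resolve, otherwise you can let the sink fall back to the local time:
# Sketch only: a1, k1, c1 are placeholder names.
a1.sinks = k1
a1.sinks.k1.type = hdfs
a1.sinks.k1.channel = c1
# The escape sequences are filled in from each event's timestamp header.
a1.sinks.k1.hdfs.path = /flume/%Y-%m-%d
# If events have no timestamp header, use the agent's local time instead.
a1.sinks.k1.hdfs.useLocalTimeStamp = true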
4. Does Flume support sending files to Hadoop and categorizing them into directories based on the date they were sent? (I have read that part of the HDFS sink documentation, but when I tried it, it failed.)
Ans: If you share the configuration you are using, I can try to fix the issues with it.
5. While using Flume to send files to Hadoop, can I modify the file contents, for example adding the file name to the data stream or changing ";" to "|"?
Ans:
If you just want to add the file name to the data, you should try the following configuration for the spooldir source type (property, default, description):
basenameHeader false Whether to add a header storing the basename of the file.
basenameHeaderKey basename Header Key to use when appending basename of file to event header.
If you want to do a regex replace, you will have to use the Search and Replace Interceptor.
You can specify the search regex and replace string. See the following link:
https://flume.apache.org/FlumeUserGuide.html#search-and-replace-interceptor
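Putting both parts together, a rough sketch might look like the following (placeholder names; the ';' to '|' replacement is just the example from your question, and note that basenameHeader puts the file name into the event headers rather than into the body itself):
# Sketch only: a1, r1 are placeholder names.
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /var/spool/flume
# Attach the source file's basename to every event as a header.
a1.sources.r1.basenameHeader = true
a1.sources.r1.basenameHeaderKey = basename
# Rewrite the event body: replace every ';' with '|'.
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = search_replace
a1.sources.r1.interceptors.i1.searchPattern = ;
a1.sources.r1.interceptors.i1.replaceString = |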
6. Is there any API or tool I can use to monitor Flume file transfers to Hadoop? For example, during a transfer, can I see how many files have been transferred to Hadoop, how many were submitted successfully, and how many failed?
Ans: I am not sure if anything is available specifically for spooldir, but you should look at the Monitoring section and see if there is something you can use:
https://flume.apache.org/FlumeUserGuide.html#monitoring
7. Does Flume keep transaction logs for Hadoop, for example how many files have been uploaded to Hadoop, ...?
Ans: I don't think so, but it might need more research to see whether you can track which files have been written. You can check your spool directory for the files already sent.
Thanks
Bimal
Created 12-24-2018 12:00 PM
Thank you very much for helping me, but I have a few follow-up questions:
1. If the files are not moved to another folder (as in questions 1 and 2 above), what do I do when the folder contains too many files, for example 1 billion files, and the server is full? Do I have to reconfigure Flume with another spool folder?
2. This is the configuration file I wanted to mention in question 5:
# Sources, channels, and sinks are defined per
# agent name, in this case 'tier1'.
tier1.sources = source1
tier1.channels = channel1
tier1.sinks = sink1
# For each source, channel, and sink, set
# standard properties.
# source details
tier1.sources.source1.type = spooldir
tier1.sources.source1.spoolDir = /data/diem
tier1.sources.source1.fileHeader = false
tier1.sources.source1.fileSuffix = .COMPLETED
tier1.sources.source1.channels = channel1
tier1.sources.source1.interceptors = i1
tier1.sources.source1.interceptors.i1.type = regex_extractor
tier1.sources.source1.interceptors.i1.regex = \\[(.*?)\\]
tier1.sources.source1.interceptors.i1.serializers = s1
tier1.sources.source1.interceptors.i1.serializers.s1.type = org.apache.flume.interceptor.RegexExtractorInterceptorMillisSerializer
tier1.sources.source1.interceptor.serializers.s1.name = timestamp
tier1.sources.source1.serializers.s1.pattern = yyyy-MM-dd HH:mm:ss
# channel details
tier1.channels.channel1.type = file
tier1.channels.channel1.capacity = 200000
tier1.channels.channel1.transactionCapacity = 1000
# sink details
tier1.sinks.sink1.type = HDFS
tier1.sinks.sink1.fileType = DataStream
tier1.sinks.sink1.hdfs.writeFormat = Text
tier1.sinks.sink1.channel = channel1
tier1.sinks.sink1.hdfs.path = hdfs://localhost:8020/user/cloudera/testFolder/%y-%m-%d/%H%M/%S
tier1.sinks.sink1.round = true
tier1.sinks.sink1.roundValue = 10
tier1.sinks.sink1.roundUnit = minute
tier1.sinks.sink1.hdfs.rollSize = 268435456
tier1.sinks.sink1.rollInterval = 0
tier1.sinks.sink1.hdfs.batchSize = 10000
And this is the error in the log file:
2018-12-24 11:56:03,065 ERROR org.apache.flume.sink.hdfs.HDFSEventSink: process failed
java.lang.NullPointerException: Expected timestamp in the Flume event headers, but it was null
    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
    at org.apache.flume.formatter.output.BucketPath.replaceShorthand(BucketPath.java:251)
    at org.apache.flume.formatter.output.BucketPath.escapeString(BucketPath.java:460)
    at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:368)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
    at java.lang.Thread.run(Thread.java:745)
2018-12-24 11:56:03,069 ERROR org.apache.flume.SinkRunner: Unable to deliver event. Exception follows.
org.apache.flume.EventDeliveryException: java.lang.NullPointerException: Expected timestamp in the Flume event headers, but it was null
    at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:451)
    at org.apache.flume.sink.DefaultSinkProcessor.process(DefaultSinkProcessor.java:67)
    at org.apache.flume.SinkRunner$PollingRunner.run(SinkRunner.java:145)
    at java.lang.Thread.run(Thread.java:745)
Caused by: java.lang.NullPointerException: Expected timestamp in the Flume event headers, but it was null
    at com.google.common.base.Preconditions.checkNotNull(Preconditions.java:204)
    at org.apache.flume.formatter.output.BucketPath.replaceShorthand(BucketPath.java:251)
    at org.apache.flume.formatter.output.BucketPath.escapeString(BucketPath.java:460)
    at org.apache.flume.sink.hdfs.HDFSEventSink.process(HDFSEventSink.java:368)
    ... 3 more
And once again, thank you for helping me answer these questions.
Created 12-26-2018 01:36 PM
Hi,
1. If the files are not moved to another folder (as in questions 1 and 2 above), what do I do when the folder contains too many files, for example 1 billion files, and the server is full? Do I have to reconfigure Flume with another spool folder?
Ans: You can configure Flume to delete completed files so that they do not keep accumulating in your directory (property, default, description):
deletePolicy never When to delete completed files: never or immediate
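For example, a sketch of a spooldir source that removes each file as soon as it has been fully ingested (component names follow your configuration; the rest of it stays unchanged):
# Sketch only: relevant source lines.
tier1.sources.source1.type = spooldir
tier1.sources.source1.spoolDir = /data/diem
# Delete each fully ingested file instead of renaming it with .COMPLETED.
tier1.sources.source1.deletePolicy = immediate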
2. The error you are getting is due to the regex and pattern being incorrect.
This combination works:
tier1.sources.source1.interceptors.i1.regex = ^(?:\\[)(\\d\\d\\d\\d-\\d\\d-\\d\\d\\s\\d\\d:\\d\\d:\\d\\d)
tier1.sources.source1.interceptors.i1.serializers.s1.pattern = yyyy-MM-dd HH:mm:ss
With the above regex we match anything starting with [dddd-dd-dd dd:dd:dd, discarding the leading [ and capturing the rest of the pattern. The captured text matches the pattern yyyy-MM-dd HH:mm:ss, so it is correctly translated into a timestamp.
So [2012-10-18 18:47:57] ... will be interpreted properly and converted into a timestamp.
If the regex and pattern do not match up, you will not get a timestamp in the header. With your regex the captured group does not match the pattern yyyy-MM-dd HH:mm:ss, so the timestamp header ends up null and you get the exception.
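For reference, a sketch of the full interceptor block with this fix (it also looks like the last two serializer lines in your original config are missing the full interceptors.i1 prefix, so they are spelled out here; everything else in your config stays the same):
# Sketch of the corrected interceptor section only.
tier1.sources.source1.interceptors = i1
tier1.sources.source1.interceptors.i1.type = regex_extractor
tier1.sources.source1.interceptors.i1.regex = ^(?:\\[)(\\d\\d\\d\\d-\\d\\d-\\d\\d\\s\\d\\d:\\d\\d:\\d\\d)
tier1.sources.source1.interceptors.i1.serializers = s1
tier1.sources.source1.interceptors.i1.serializers.s1.type = org.apache.flume.interceptor.RegexExtractorInterceptorMillisSerializer
tier1.sources.source1.interceptors.i1.serializers.s1.name = timestamp
tier1.sources.source1.interceptors.i1.serializers.s1.pattern = yyyy-MM-dd HH:mm:ss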
Please let me know if you have any questions.
Regards
Bimal
Created 12-26-2018 03:50 PM
Flume isn't really designed for transferring large files. It would be better to use Oozie, or an NFS gateway with cron, to transfer files on a regular basis, especially if you want each file preserved in its entirety. One thing you will observe is that if Flume hits any temporary transmission errors, it will attempt to resend parts of those files, which results in duplicates (a standard and expected scenario when using Flume), so the resulting files in HDFS would contain those duplicates. Additionally, when you do have interruptions, existing HDFS files are closed and new ones are opened.
-pd
Created 03-13-2019 12:03 AM
Hi, I have files moving from source to destination.
After sending the files from the source directory, Flume marks them as completed in the source directory, for example:
test1.COMPLETED
test2.COMPLETED
test3.COMPLETED
Now my question is: if I get one more file named test1 in the source folder, it throws the error below (how can I overwrite it?) and processing is halted.
2019-03-13 06:46:42,620 (pool-13-thread-1) [ERROR - org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:280)] FATAL: Spool Directory source flow: { spoolDir: /opt/mount1/FlowTest/factoryFlowPath }: Uncaught exception in SpoolDirectorySource thread. Restart or reconfigure Flume to continue processing.
java.lang.IllegalStateException: File name has been re-used with different files. Spooling assumptions violated for /opt/mount1/FlowTest/factoryFlowPath/IIPGSITIWS.InventecINVSH_TXT_FLOW_NOV_21_2018_14H_26m_32s.txt.COMPLETED
at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.rollCurrentFile(ReliableSpoolingFileEventReader.java:463)
at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.retireCurrentFile(ReliableSpoolingFileEventReader.java:414)
at org.apache.flume.client.avro.ReliableSpoolingFileEventReader.readEvents(ReliableSpoolingFileEventReader.java:326)
at org.apache.flume.source.SpoolDirectorySource$SpoolDirectoryRunnable.run(SpoolDirectorySource.java:250)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Is there any way to overwrite an existing .COMPLETED file?
Please help with this.
Created 03-13-2019 06:52 PM
Note: this is an older, resolved thread.
As to your question, the error is clear, as is the documentation, quoted below:
"""
Spooling Directory Source
This source lets you ingest data by placing files to be ingested into a
“spooling” directory on disk. This source will watch the specified
directory for new files, and will parse events out of new files as they
appear. The event parsing logic is pluggable. After a given file has been
fully read into the channel, it is renamed to indicate completion (or
optionally deleted).
Unlike the Exec source, this source is reliable and will not miss data,
even if Flume is restarted or killed. In exchange for this reliability,
only immutable, uniquely-named files must be dropped into the spooling
directory. Flume tries to detect these problem conditions and will fail
loudly if they are violated:
If a file is written to after being placed into the spooling directory,
Flume will print an error to its log file and stop processing.
If a file name is reused at a later time, Flume will print an error to its
log file and stop processing.
""" -
https://archive.cloudera.com/cdh5/cdh/5/flume-ng/FlumeUserGuide.html#spooling-directory-source
It appears that you can get around this by using ExecSource with a script or command that reads the files, but you'll have to sacrifice reliability. It may instead be worth investing in an approach that makes filenames unique (`uuidgen`-named softlinks in another folder, etc.).
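If you do go the ExecSource route, a very rough sketch of what that source section could look like (the agent/component names, command, and path are assumptions for illustration, not a recommended setup):
# Hypothetical sketch only: an exec source reading data via an external command.
tier1.sources = source1
tier1.sources.source1.type = exec
tier1.sources.source1.command = tail -F /data/diem/current.log
tier1.sources.source1.channels = channel1
# As quoted above, exec gives up the spooldir source's reliability
# guarantees: events can be lost if the agent stops or the command fails.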
