06-03-2016 03:13 AM
What should the strategy be for loading very large files (more than 1 TB per file) into HDFS in a reliable, fail-safe manner?
Flume provides fail-safety and reliability, but it is really meant for regularly generated files. My understanding is that it works well for ingesting a large number of files into HDFS and is best suited to scenarios where data arrives in mini-batches, but it might not be efficient for transferring a single large file into HDFS. Please let me know if I am wrong here.
Also, the hadoop fs -put command cannot provide fail-safety: if the transfer fails, it will not restart the process.
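For illustration, a plain put offers no resume and no built-in verification; if it fails midway, a partial or temporary file may be left behind, and the best one can do is check the result after the fact. A minimal sketch, with hypothetical paths:

# A plain put gives no resume or verification.
hadoop fs -put /data/big_file.dat /ingest/big_file.dat

# Crude after-the-fact check: compare byte counts on both sides.
stat -c %s /data/big_file.dat           # local size in bytes
hdfs dfs -stat %b /ingest/big_file.dat  # HDFS size in bytes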
05-06-2019 05:27 AM
As mentioned before, if you need to operate on the whole file, a flow with a few retries and notifications, something like Oozie driving hadoop fs -put, makes sense.
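For example, a minimal retry wrapper around hadoop fs -put could look like the sketch below. The paths, retry count, and script name are all hypothetical; in practice this would be invoked from an Oozie shell action, with a notification step on final failure:

#!/usr/bin/env bash
# retry_put.sh -- sketch of a fail-safe wrapper around hadoop fs -put.
SRC=/data/big_file.dat       # hypothetical local source
DEST=/ingest/big_file.dat    # hypothetical HDFS target
MAX_RETRIES=3

for attempt in $(seq 1 "$MAX_RETRIES"); do
    # Remove any partial target left by a previous failed attempt;
    # -f avoids an error when the target does not exist yet.
    hdfs dfs -rm -f -skipTrash "$DEST"

    if hadoop fs -put "$SRC" "$DEST"; then
        # Sanity check: compare byte counts on both sides.
        if [ "$(stat -c %s "$SRC")" -eq "$(hdfs dfs -stat %b "$DEST")" ]; then
            echo "Transfer succeeded on attempt $attempt"
            exit 0
        fi
    fi
    echo "Attempt $attempt failed, retrying..." >&2
done

echo "Transfer failed after $MAX_RETRIES attempts" >&2
exit 1

Oozie can also retry the action itself via its retry-max and retry-interval attributes, which is where the "few retries/notifications" would normally live.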
If you have more flexibility, you could look into a NiFi-based solution where you grab the file piece by piece with TailFile as it is being written. (NiFi can scale to any volume of files, but it shines most with files or pieces somewhat smaller than 1 TB.)