
File load strategy for large files (per file volume greater than 1 TB)


Rising Star

Hi,

What should be the strategy for loading large files (volume per file is more than 1 TB) into HDFS in a reliable, fail-safe manner?

 

Flume provides fail-safety and reliability, but it is ideally meant for regularly generated files. My understanding is that it works well for ingesting a large number of files into HDFS, i.e. for scenarios where data is generated in mini-batches, but it might not be efficient for transferring a single large file into HDFS. Please let me know if I am wrong here.

Also, the hadoop fs -put command does not provide fail-safety; if the transfer fails, it will not restart the process.

 

Regards,

Rajib

3 REPLIES

Re: File load strategy for large files (per file volume greater than 1 TB)

Super Collaborator
I would suggest using Oozie with an ssh or shell action (depending on where these files are). You can create a script that pushes these files into HDFS with an 'hdfs dfs -put' command, and if that fails, you can set up the Oozie workflow to retry or send notifications.
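For example, a minimal sketch of such a script (the paths, retry count, and sleep interval below are placeholders you would adapt to your environment):

#!/bin/bash
# Push one large file into HDFS with a few retries.
SRC=/data/incoming/bigfile.dat   # placeholder local path
DEST=/landing/                   # placeholder HDFS directory
MAX_RETRIES=3

for i in $(seq 1 $MAX_RETRIES); do
  # -f overwrites any partial file left behind by a failed attempt
  if hdfs dfs -put -f "$SRC" "$DEST"; then
    echo "Upload succeeded on attempt $i"
    exit 0
  fi
  echo "Attempt $i failed, retrying..." >&2
  sleep 60
done

# A non-zero exit makes the Oozie ssh/shell action fail,
# which the workflow can route to a notification or kill node.
exit 1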

Alternatively, you could mount HDFS via NFS and have a cron job that copies the files, with all of your retry logic in there.
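For example (the gateway hostname, mount point, script path, and schedule below are placeholders, and the HDFS NFS Gateway must already be running):

# mount HDFS through the NFS gateway
mount -t nfs -o vers=3,proto=tcp,nolock namenode-host:/ /mnt/hdfs

# crontab entry: run a copy script every hour; copy_large_files.sh is a
# hypothetical script that would hold your cp and retry logic
0 * * * * /opt/scripts/copy_large_files.sh >> /var/log/copy_large_files.log 2>&1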

-pd

Re: File load strategy for large files (per file volume greater than 1 TB)

Rising Star

Thanks for your reply. Can we use Apache NiFi for data loading? Do you foresee any performance issues if we use Apache NiFi?

Re: File load strategy for large files (per file volume greater than 1 TB)

Rising Star

As mentioned before, if you need to operate on the whole file, a flow with a few retries/notifications, something like Oozie wrapping hadoop fs -put, makes sense.

 

If you have more flexibility, you could look into a NiFi-based solution where you grab the file piece by piece with TailFile as it is written. (NiFi can scale to any volume of files, but shines most with files/pieces that are somewhat smaller than 1 TB.)
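For example, a rough sketch of such a flow using standard NiFi processors (exact processor properties would need tuning for your cluster):

TailFile      (reads the file incrementally as it is being written)
  -> MergeContent  (bundles the tailed pieces into larger chunks)
  -> PutHDFS       (lands the chunks in HDFS)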