Support Questions


How to read data from a file on a remote FTP server and load it into Hadoop using NiFi?


Hi All,

I want to load real-time data (a text file) containing incremental data from an FTP server into Hadoop. I tried Flume but got a FileNotFoundException, so I am now planning to use NiFi to load the data from the FTP server into Hadoop. Has anyone tried loading data from a single file on an FTP server into Hadoop? Any help would be appreciated.

1 ACCEPTED SOLUTION

Guru

This is very straightforward with NiFi; it is a very common use case.

If the new data arrives as entire files, use the GetFTP (or GetSFTP) processor and configure the FTP host and port, the remote path, a regex for the filename(s), the polling frequency, whether to delete the original (you can always archive it by forking the flow to another processor), and so on. It is easy to configure, implement, and monitor.

https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.GetSFTP/
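As a rough illustration, a GetSFTP configuration for this use case might look like the following (the hostname, username, path, filename pattern, and schedule below are placeholders, not values from the original post):

    Hostname:            ftp.example.com
    Port:                22
    Username:            ingest
    Password:            (sensitive property, set in the processor)
    Remote Path:         /data/incoming
    File Filter Regex:   incremental_.*\.txt
    Delete Original:     false
    Polling Interval:    60 sec

GetFTP exposes essentially the same properties for plain FTP (typically port 21).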

If the new data consists of new lines appended to existing files (as with log files), the approach is similar, but use TailFile, which picks up only the lines added since the last poll.

https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.TailFile/
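One caveat worth noting: TailFile reads from the filesystem local to the NiFi node, so for a remote FTP source you would still fetch the file first. Purely as a sketch, assuming a locally visible log file (the path below is a placeholder), the key TailFile properties might be set like this:

    Tailing mode:            Single file
    File(s) to Tail:         /var/log/myapp/ingest.log
    Initial Start Position:  Beginning of File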

On the put side, use the PutHDFS processor. Download core-site.xml and hdfs-site.xml from your cluster, place them at a file path on your NiFi cluster, and reference that path in the processor configuration. With that in place, configure the HDFS path to put the file to (the XML files hold all the connection details). You may also want to append a unique timestamp or UUID to the filename to distinguish repeated ingests of identically named files.

https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.hadoop.PutHDFS/
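A minimal PutHDFS sketch, assuming the site XMLs were copied to /etc/nifi/hdfs-conf on the NiFi host (that path and the target directory are placeholders):

    Hadoop Configuration Resources:  /etc/nifi/hdfs-conf/core-site.xml,/etc/nifi/hdfs-conf/hdfs-site.xml
    Directory:                       /data/ftp-ingest
    Conflict Resolution Strategy:    fail

For the unique filename, one common pattern is an UpdateAttribute processor just before PutHDFS that rewrites the filename attribute with NiFi Expression Language, for example:

    filename = ${filename}-${now():format('yyyyMMddHHmmss')}-${UUID()}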

