How to read data from a file on a remote FTP server and load the data into Hadoop using NiFi?
- Labels: Apache NiFi
Created 12-07-2016 04:56 AM
Hi All,
I want to load real-time data (a text file) containing incremental data from an FTP server into Hadoop. I tried Flume but got a FileNotFoundException, so I am planning to use NiFi to load the data from the FTP server into Hadoop instead. Has anyone tried loading data from a single file on an FTP server into Hadoop? Any help would be appreciated.
Created 12-07-2016 11:56 AM
This is very straightforward with NiFi; it is a very common use case.
If the new data arrives as entire files, use the GetFTP (or GetSFTP) processor and configure the FTP host and port, remote path, filename regex, polling frequency, whether to delete the original (you can always archive it by forking to another processor), and so on. It is easy to configure, implement, and monitor.
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.GetSFTP/
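For reference, a minimal GetSFTP sketch might look like the block below. The host, credentials, path, and regex are placeholders, and exact property names can vary slightly between NiFi versions, so treat this as an illustration rather than a copy-paste configuration.

```
GetSFTP
  Hostname          : ftp.example.com     # placeholder host
  Port              : 22
  Username          : nifi_user           # placeholder credentials
  Password          : ********
  Remote Path       : /data/incoming      # directory to poll on the server
  File Filter Regex : .*\.txt             # only pick up .txt files
  Polling Interval  : 60 sec              # how often to check for new files
  Delete Original   : false               # keep the source file on the server
```

Route the success relationship straight to PutHDFS, or through intermediate processors first if you need to rename or enrich the flow files.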
If the new data is new lines appended to existing files (as with log files), the approach is similar, but use the TailFile processor, which picks up only the lines added since the last poll.
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.TailFile/
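A minimal TailFile sketch, assuming the file is readable from the local file system of the NiFi host (for example via a mount); paths are placeholders and property names may differ slightly by NiFi version:

```
TailFile
  File(s) to Tail          : /var/log/app/incremental.txt   # placeholder path
  Rolling Filename Pattern : incremental.txt.*              # pick up rotated files, if any
  Initial Start Position   : Beginning of File              # read existing content on the first run
```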
On the put side, use the PutHDFS processor. Download core-site.xml and hdfs-site.xml from your cluster, place them at a file path on your NiFi cluster, and reference that path in the processor configuration. Then configure the HDFS path where the file should be written (the XMLs hold all the connection details); you may want to append a unique timestamp or UUID to the filename to distinguish repeated ingests of identically named files.
https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.hadoop.PutHDFS/
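A minimal PutHDFS sketch, again with placeholder paths; the Hadoop Configuration Resources property points at the XML files mentioned above:

```
PutHDFS
  Hadoop Configuration Resources : /etc/nifi/hdfs-conf/core-site.xml,/etc/nifi/hdfs-conf/hdfs-site.xml
  Directory                      : /data/landing/ftp_ingest   # target HDFS directory, placeholder
  Conflict Resolution Strategy   : replace                    # or fail/ignore/append, per your needs
```

To make filenames unique, you can put an UpdateAttribute processor in front of PutHDFS and set the filename attribute with a NiFi Expression Language expression such as ${filename}-${now():format('yyyyMMddHHmmss')}, which appends an ingest timestamp to the incoming name.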
