12-07-2016
11:56 AM
This is very straightforward with NiFi; it is a very common use case.

If the new data arrives as entire files, use the GetFTP (or GetSFTP) processor and configure the FTP host and port, path, filename regex, polling frequency, whether to delete the original (you can always archive it by forking the flow to another processor), and so on. It is easy to configure, implement, and monitor. https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.GetSFTP/

If the new data arrives as new lines appended to existing files (like log files), the approach is similar, but use the TailFile processor, which picks up only the lines added since the last poll. https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.standard.TailFile/

On the put side, use the PutHDFS processor. Download core-site.xml and hdfs-site.xml from your cluster, place them at a file path on your NiFi cluster, and reference that path in the processor configuration. Then configure the HDFS path to write to (the XML files hold all the connection details). You may also want to append a unique timestamp or UUID to the filename to distinguish repeated ingests of identically named files. https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi.processors.hadoop.PutHDFS/
10-28-2016
12:35 PM
@Magesh Kumar I believe this question is identical to another one you asked: https://community.hortonworks.com/questions/63947/incremental-flat-file-data-loading-into-hadoop.html#answer-63978
If there are differences, please elaborate.