Member since: 01-11-2016
Posts: 355
Kudos Received: 230
Solutions: 74
My Accepted Solutions
Title | Views | Posted |
---|---|---|
| 8291 | 06-19-2018 08:52 AM |
| 3213 | 06-13-2018 07:54 AM |
| 3666 | 06-02-2018 06:27 PM |
| 3963 | 05-01-2018 12:28 PM |
| 5505 | 04-24-2018 11:38 AM |
10-18-2017
08:58 AM
Hi @Ben Morris I understand the requirement; I have the same need for a few use cases. Unfortunately, there's no ETA for this feature yet. The community is aware of it, and getting it done depends on priorities as well as on the complexity of the feature.

Regarding migration: data queued on a node can be used again if the node is brought back online. If that is not possible, you can spin up a new node and configure it to use the existing repositories from the old node (they are not specific to a NiFi node). IMO this migration process will depend on your infrastructure. If you are on a bare-metal node with RAID local storage, it will take time, since you need to bring up a new physical node with the old disks (if node recovery is not possible). If you are on virtual infrastructure, the task is easier, since you can create a new VM, install NiFi, and point it at the existing repositories. Here too, time and complexity will depend on your storage type (local or network).

Working on HA/fault tolerance with real-time data is not an easy task; you have a lot of things to consider around data duplication. I am thinking out loud here, but if you can afford an at-least-once strategy, you can maybe design your flow to achieve it (using a state backend). There's no easy standard solution, though. It will depend on your data source, your ability to deduplicate data, and so on. This is something I am currently working on.
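If it helps with planning the migration, the repository locations a node uses are defined in nifi.properties, so a replacement node pointed at the same directories will pick up the existing data. A minimal sketch, assuming default property names and a hypothetical /data mount:

```
# nifi.properties -- repository locations (paths are illustrative)
nifi.flowfile.repository.directory=/data/nifi/flowfile_repository
nifi.content.repository.directory.default=/data/nifi/content_repository
nifi.provenance.repository.directory.default=/data/nifi/provenance_repository
```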
10-18-2017
06:08 AM
1 Kudo
Hi @Kiem Nguyen What will the rest of the flow be after routing? Can you give more details?
10-17-2017
08:58 PM
@Alvin Jin Since you are on a secure cluster, you need to add policies authorizing the cluster to use S2S: one so your nodes can retrieve S2S details, and one so they can receive data via S2S. Look at option 2 of this article to see how to do it: https://community.hortonworks.com/content/kbentry/88473/site-to-site-communication-between-secured-https-a.html Then read this article on the list/fetch design pattern for a better understanding of what you are implementing: https://pierrevillard.com/2017/02/23/listfetch-pattern-and-remote-process-group-in-apache-nifi/
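For reference, a rough sketch of the two authorizations involved, as they appear in the NiFi UI (your node certificate DNs will differ):

```
# Global policy (global menu > Policies):
#   "retrieve site-to-site details"  -> grant to each node's certificate DN
# Component policy on the receiving input port:
#   "receive data via site-to-site"  -> grant to each node's certificate DN
```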
10-17-2017
03:46 PM
Hi @Ben Morris This feature is still on the roadmap and is not available yet: https://cwiki.apache.org/confluence/display/NIFI/High+Availability+Processing What are you trying to achieve? Would RAID disks be an acceptable solution for you?
10-16-2017
01:57 PM
@xav webmaster You can use one UpdateAttribute processor with several rules: if x, then topic = 'A'; if y, then topic = 'B'; and so on. You can add rules and define an action for each rule that sets the correct topic value.
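A rough sketch of what those rules could look like in the processor's advanced tab (the attribute name source.type and the values are made up for illustration):

```
# UpdateAttribute > Advanced > Rules
Rule "set-topic-a":
  Condition: ${source.type:equals('x')}
  Action:    topic = A

Rule "set-topic-b":
  Condition: ${source.type:equals('y')}
  Action:    topic = B
```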
10-16-2017
01:43 PM
@xav webmaster To get an idea of how this works, look at this answer: https://community.hortonworks.com/questions/140060/nifi-how-to-load-a-value-in-memory-one-time-from-c.html It's not the same subject, but it can give you an idea of how to use this feature.
10-16-2017
01:27 PM
Hi @xav webmaster Have you looked at the rules feature of the UpdateAttribute processor? It's available in the advanced configuration section: https://nifi.apache.org/docs/nifi-docs/components/org.apache.nifi/nifi-update-attribute-nar/1.4.0/org.apache.nifi.processors.attributes.UpdateAttribute/additionalDetails.html You can extract the information you want and add it as attributes (using the extract processors). Once the information you want is available as attributes, you can use UpdateAttribute with rules to add/update a 'topic' attribute and set its value according to your conditions. Is this helpful?
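As a hedged sketch of the extraction step (the regex and property name are hypothetical), ExtractText lets you add a dynamic property whose regex capture group becomes a FlowFile attribute:

```
# ExtractText -- dynamic property:
#   Property name : source.type
#   Property value: ^type=(\w+)
# The first capture group lands in the attribute source.type.1,
# which the UpdateAttribute rules can then test, e.g.
#   ${source.type.1:equals('x')}
```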
10-16-2017
12:34 PM
1 Kudo
@msumbul I don't know of any standard error attribute in NiFi. It depends on the processor, and it is usually not provided. You can always use UpdateAttribute to add an error type according to your needs. If a processor has three possible error relationships, you can use three UpdateAttribute processors, each adding an 'error' attribute populated with the type of error.
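A sketch of what that could look like, using InvokeHTTP's failure relationships as an example (the attribute name error.type is made up):

```
# One UpdateAttribute per error relationship:
#   retry    -> UpdateAttribute: error.type = retry
#   no retry -> UpdateAttribute: error.type = no-retry
#   failure  -> UpdateAttribute: error.type = failure
# All three can then merge into a common error-handling flow.
```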
10-15-2017
05:41 PM
@suresh krish What are you trying to use, a keystore on HDFS or on the local filesystem? The documentation says the following:

"The JavaKeyStoreProvider, which is represented by the provider URI jceks://file|hdfs/path-to-keystore, is used to retrieve credentials from a Java keystore. The underlying use of the Hadoop filesystem abstraction allows credentials to be stored on the local filesystem or within HDFS."

and

"The LocalJavaKeyStoreProvider, which is represented by the provider URI localjceks://file/path-to-keystore, is used to access credentials from a Java keystore that must be stored on the local filesystem."

You are using localjceks, so your URI should be localjceks://file/path-to-your-jceks; the file keyword is important. Also, /user/hdfs in this case is a local path, so it must exist in your OS. If you want to use HDFS, then you need jceks and the URI jceks://hdfs/path-to-your-file.
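To sanity-check a provider URI, the hadoop credential CLI can list the aliases it resolves (paths below are placeholders):

```
# Keystore on the local filesystem (note the file keyword; the path is an OS path):
hadoop credential list -provider localjceks://file/home/hdfs/creds.jceks

# Keystore stored in HDFS:
hadoop credential list -provider jceks://hdfs/user/hdfs/creds.jceks
```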
10-15-2017
08:54 AM
@Yair Ogen Have you tried the Hive JDBC driver available on the Hortonworks add-ons download page? https://hortonworks.com/downloads/#addons
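Once the driver is set up, a quick way to check connectivity is Beeline with a standard HiveServer2 JDBC URL (host, port, and user below are placeholders):

```
beeline -u "jdbc:hive2://hs2-host:10000/default" -n your_user
```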