Member since 05-18-2017 · 10 Posts · 1 Kudos Received · 0 Solutions
10-03-2017 07:28 PM
Hello @Andy LoPresto, thank you very much for the detailed answer.

1. Got it. I will try the decrypt part and verify that.
2. I understand that it is not the recommended practice, but yes, this is what I was looking for. I will re-evaluate and raise a JIRA requesting a dynamic property for this.
3. What I meant was that the key is being derived using standard PBE, and that derived key is used for encryption/decryption.
4. Yes, I am able to use EncryptContent successfully. It is just that, since the data is shared, I had to make sure I replicate the same logic across systems.

Thanks & Regards,
Prakash
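To make point 3 concrete, password-based key derivation of the kind described can be sketched with PBKDF2 from the Python standard library. This is a generic illustration, not NiFi's exact KDF: the password, salt, iteration count, and key length below are all hypothetical placeholders, and NiFi's EncryptContent configures its own KDF settings.

```python
import hashlib

# Hypothetical parameters, for illustration only -- NiFi's EncryptContent
# configures its own KDF settings.
PASSWORD = b"example-password"
SALT = b"\x00" * 16      # in practice, use a random salt stored with the data
ITERATIONS = 160_000
KEY_LEN = 32             # 256-bit key, e.g. for AES-256

def derive_key(password: bytes, salt: bytes) -> bytes:
    """Derive an encryption key from a password via PBKDF2-HMAC-SHA256."""
    return hashlib.pbkdf2_hmac("sha256", password, salt, ITERATIONS,
                               dklen=KEY_LEN)

key = derive_key(PASSWORD, SALT)
print(len(key))  # 32
```

Because the derivation is deterministic for a given password, salt, and iteration count, any system that replicates these parameters derives the same key, which is what "replicating the same logic across" systems requires.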
06-15-2017 01:07 PM
@Prakash Ravi Nodes in a NiFi cluster have no idea of the existence of the other nodes in the cluster; they simply send health and status heartbeat messages to the currently elected cluster coordinator. As such, each node runs its own copy of the flow.xml.gz file and works on its own set of FlowFiles.

So if you have 9 NiFi nodes, each node will be running its own copy of the ConsumeKafka processor. With 1 concurrent task set on the processor, each node will establish one consumer connection to the Kafka topic, giving you 9 consumers for 10 partitions. In order to consume from all partitions you will need to configure 2 concurrent tasks, giving you 18 consumers for 10 partitions. Kafka will assign partition connections within this pool of 18 consumers; ideally you would see 1 consumer on 8 of your nodes and 2 on one. The data arriving at your NiFi cluster will not be evenly balanced because of the imbalance between the number of consumers and the number of partitions.

As far as your Kafka broker rebalance goes, Kafka will trigger a rebalance if a consumer disconnects and another consumer connects. Things that can cause a consumer to disconnect include:

1. Shutting down one or more of your NiFi nodes.
2. A connection timeout between a consumer and a Kafka broker, triggered by:
- network issues between a NiFi node and a Kafka broker;
- scheduling the ConsumeKafka run schedule for longer than the configured timeout (for example, a 60-second run schedule with a 30-second timeout);
- backpressure being applied on the connection leading off ConsumeKafka, causing ConsumeKafka to not run until the backpressure is gone. *** This trigger was fixed in NiFi 1.2, but I don't know what version you are running.

If you feel I have addressed your original question, please mark this answer as accepted to close out this thread.

Thank you, Matt
05-19-2017 06:14 PM
@Prakash Ravi There are only three processors which require a DistributedMapCache server: DetectDuplicate, FetchDistributedMapCache, and PutDistributedMapCache. The rest will use ZooKeeper where applicable.

To clear the state of a processor, do the following:
1. Right-click on the processor and select "View state" from the menu.
2. Click "Clear state" and the files will be listed again.
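For anyone who prefers to script the UI steps above, clearing processor state can also be done over NiFi's REST API. The endpoint path below reflects the NiFi 1.x REST API as I understand it (`POST /nifi-api/processors/{id}/state/clear-requests`); the host and processor id are hypothetical, and you should verify the path against your version's REST API docs before relying on it.

```python
import urllib.request

def clear_state_url(base_url: str, processor_id: str) -> str:
    """Build the clear-state endpoint URL for a processor."""
    return (f"{base_url.rstrip('/')}/nifi-api/processors/"
            f"{processor_id}/state/clear-requests")

def clear_processor_state(base_url: str, processor_id: str) -> int:
    """POST a clear-state request; returns the HTTP status code."""
    req = urllib.request.Request(clear_state_url(base_url, processor_id),
                                 method="POST")
    with urllib.request.urlopen(req) as resp:
        return resp.status

# Example (hypothetical host and processor id):
# clear_processor_state("http://nifi-host:8080", "016a1f22-0000-0000")
print(clear_state_url("http://nifi-host:8080/", "abc123"))
```

Note that a secured cluster would additionally require authentication (e.g. a bearer token header), which is omitted from this sketch.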