Member since: 07-30-2019
Posts: 3392
Kudos Received: 1618
Solutions: 1001
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 415 | 11-05-2025 11:01 AM |
| | 291 | 11-05-2025 08:01 AM |
| | 435 | 11-04-2025 10:16 AM |
| | 660 | 10-20-2025 06:29 AM |
| | 800 | 10-10-2025 08:03 AM |
10-14-2016
02:41 PM
@mclark That certainly worked! Thank you so much. One additional question related to this: is there any other way for me to access the FlowFile content within the XSLT file that I would use through TransformXml?
04-03-2019
04:41 PM
1 Kudo
@Matt Clarke, @boyer: I am facing a similar issue. In my case the workflow is HandleHttpRequest -> InvokeHTTP -> output port -> a custom processor in a different process group (which has only one property set up) -> input port -> HandleHttpResponse. I configured the same StandardHttpContextMap in both the HandleHttpRequest and HandleHttpResponse processors. How should I deal with the http.context.identifier error in the HandleHttpResponse processor? Any help is appreciated.
01-19-2017
09:26 PM
I found that authorizations.xml is not being updated after its first generation, and this is what causes the problem. To fix it:

Step 1: Remove authorizations.xml from all the nodes: rm /var/lib/nifi/conf/authorizations.xml

Step 2: Update the node identities in the advanced_nifi_ambari_ssl_configuration section with <property name="Node Identity 1">CN=hostname1, OU=XXXXX</property> ... for all the nodes (make sure you remove the comment tags in the XML).

Step 3: Restart the NiFi service.
10-12-2016
01:51 PM
Hi @mclark, thanks for the reply and for the time you spent on Zoom. It was confusing that when I make a change in the component it gets added to the parent group if I don't override, which felt like a kind of reverse inheritance. The message said only the following, which made me think something was wrong: "Showing effective policy inherited from Process Group Group1. Override this policy." Thanks again for clarifying!
03-02-2017
03:38 PM
Hi, I have 5 separate queues for 5 different processors. Every time, I go to each processor and clear its queue individually, which takes me a lot of time. Is there any way to clear all the queues at the same time? Please help me with this. Thanks, Ravi
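A minimal sketch (not from the thread) of how one might empty every queue in a process group in one go via the NiFi 1.x REST API instead of clicking through each queue in the UI. Assumptions: an unsecured NiFi at `http://localhost:8080` and the standard `/process-groups/{id}/connections` and `/flowfile-queues/{id}/drop-requests` endpoints; adjust the base URL and add authentication for a secured cluster.

```python
import json
import urllib.request

NIFI_API = "http://localhost:8080/nifi-api"  # assumed base URL
PG_ID = "root"                               # "root" targets the root process group

def drop_request_url(base: str, connection_id: str) -> str:
    """Build the drop-requests endpoint for one connection's queue."""
    return f"{base}/flowfile-queues/{connection_id}/drop-requests"

def empty_all_queues(base: str = NIFI_API, pg_id: str = PG_ID) -> None:
    # List every connection (queue) in the process group.
    with urllib.request.urlopen(f"{base}/process-groups/{pg_id}/connections") as r:
        connections = json.load(r)["connections"]
    # Issue one drop request per connection; NiFi empties each queue asynchronously.
    for conn in connections:
        req = urllib.request.Request(drop_request_url(base, conn["id"]), method="POST")
        urllib.request.urlopen(req)

# Pure helper is safe to check without a running NiFi:
assert drop_request_url("http://host/nifi-api", "abc") == \
    "http://host/nifi-api/flowfile-queues/abc/drop-requests"
```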
09-21-2016
05:29 PM
1 Kudo
There is the possibility that the time could differ slightly (by a few milliseconds) between when the two now() functions are called in that Expression Language expression, which could cause the result to push back to 11:58:59. To avoid this you can simply reduce 43260000 by a few milliseconds (to 43259990) to ensure that does not happen, so 11:59:00 is always returned.
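A small Python sketch of the mechanism described above (the original expression isn't shown in the thread, so the exact arithmetic here is an assumption): 43260000 ms sits exactly on a second boundary, so a few milliseconds of drift between the two now() calls can flip the value obtained after truncating to whole seconds, while padding the constant by 10 ms keeps the truncated result stable.

```python
OFFSET_EXACT = 43_260_000   # exactly 43,260 s: right on a second boundary
OFFSET_SAFE  = 43_259_990   # 10 ms inside the boundary

def truncated_seconds(offset_ms: int, drift_ms: int) -> int:
    """Whole seconds left after drift_ms elapse between the two now() calls."""
    return (offset_ms - drift_ms) // 1000  # integer division drops partial seconds

# With the exact constant, a few ms of drift flips the result by one second:
assert truncated_seconds(OFFSET_EXACT, 0) == 43_260
assert truncated_seconds(OFFSET_EXACT, 5) == 43_259   # "pushed back" one second

# With the padded constant, drift of up to 9 ms never changes the result:
assert all(truncated_seconds(OFFSET_SAFE, d) == 43_259 for d in range(10))
```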
09-21-2016
01:24 PM
OMG, stupid me 😄 Thanks @mclark, that's exactly what solved the issue. Sorry for bothering you.
09-19-2016
08:28 PM
1 Kudo
@mclark Wow, thanks. This may be the direction we're looking for, and it will certainly help. I feel some additional Kafka questions coming along, however, particularly on the topic of linking ConsumeKafka with GetKafka and its properties, but this is definitely a big leap toward where we want to be.
10-12-2016
02:24 PM
1 Kudo
@Saikrishna Tarapareddy The FlowFile repo will never get close to 1.2 TB in size; that is a lot of wasted money on hardware. You should ask your vendor about splitting that RAID into multiple logical volumes so you can allocate a large portion of it to other things. Logical volumes are also a safe way to protect the RAID1 where your OS lives: if some error condition results in a lot of logging, the application logs may eat up all your disk space and affect your OS. With logical volumes you can protect your root disk. If that is not possible, I would recommend changing your setup to a bunch of RAID1 arrays. With the 16 x 600 GB hard drives you have allocated above, you could create 8 RAID1 disk arrays:
- 1 for root + software install + database repo + logs (make sure you have some monitoring set up to watch disk usage on this RAID if logical volumes cannot be supported)
- 1 for the FlowFile repo
- 3 for the content repo
- 3 for the provenance repo
Thanks, Matt
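As a quick sanity check on the split proposed above, the arithmetic works out as follows (a sketch using only the numbers from the post; RAID1 mirrors two drives, so each array's usable capacity is one drive):

```python
drives, drive_gb = 16, 600
arrays = drives // 2            # RAID1 pairs two drives per array
usable_gb_per_array = drive_gb  # mirroring halves raw capacity

# The proposed allocation of the 8 arrays:
split = {
    "root + software + database repo + logs": 1,
    "flowfile repo": 1,
    "content repo": 3,
    "provenance repo": 3,
}
assert sum(split.values()) == arrays == 8

# e.g. usable space dedicated to the content repo:
content_gb = split["content repo"] * usable_gb_per_array
assert content_gb == 1800
```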