Member since
04-26-2018
10
Posts
0
Kudos Received
0
Solutions
11-20-2019
04:11 PM
How do we get the file count if we are using the CompressContent processor?
06-25-2019
05:37 PM
I wrote a blog about this setup (https://community.hortonworks.com/content/kbentry/210286/storing-apache-nifi-versioned-flows-in-a-git-repos.html) and it seems like you did all the configuration steps I included. However, I was not using Git VSTS.
05-18-2018
12:43 PM
@Shu Sorry for the late response. The flow worked fine. As mentioned, the UnpackContent processor recursively extracts all the files, and when I make a sample zip file it works fine. Unfortunately, the ZIP file I have to work with contains a data descriptor, which causes the processor to fail with an unsupported-feature exception. As a workaround I am planning to write a script and call it via the ExecuteScript processor. I will keep you updated on my progress.
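A minimal sketch of the scripted workaround idea, using plain Python's `zipfile` module outside of NiFi (Python can often read archives that use data descriptors as long as the input is seekable; the entry names below are made up for the demo):

```python
import io
import zipfile

def extract_entries(zip_bytes: bytes) -> dict:
    """Return {entry_name: content} for every file entry in the archive."""
    entries = {}
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        for info in zf.infolist():
            if not info.is_dir():
                entries[info.filename] = zf.read(info.filename)
    return entries

# Build a small sample archive in memory to demonstrate.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("a.txt", "hello")
    zf.writestr("sub/b.txt", "world")

extracted = extract_entries(buf.getvalue())
print(sorted(extracted))  # ['a.txt', 'sub/b.txt']
print(len(extracted))     # 2
```

The same extraction logic could be adapted to run inside an ExecuteScript processor, reading the flowfile content instead of an in-memory buffer.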
05-07-2018
10:57 AM
1 Kudo
@vinayak krishnan
Yes, it's possible, but we need to prepare unique values (e.g. uuid, which is unique for each flowfile) for the Cache Entry Identifier. The Cache update strategy property determines how the cache is updated when an already cached entry identifier appears again:
Cache update strategy = replace: Replace if present. The new value replaces the value already stored for that cache entry identifier.
Cache update strategy = Keep original: Adds the specified entry to the cache only if the key doesn't exist; the original value is kept.
For reference, see this link on how to set up unique values for Cache Entry Identifiers. In that link I used an UpdateAttribute processor to change the filename to a UUID and then used $(unknown) as the Cache Entry Identifier. In the same way you can prepare your own attribute, or use ${UUID()} as the identifier. Then, in FetchDistributedMapCache, we need to use the same key to fetch the cached value for that identifier.
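The two update strategies above can be sketched with a plain dict standing in for the distributed map cache (the identifier and values below are made up for the demo):

```python
def put(cache: dict, key: str, value: str, strategy: str = "replace") -> str:
    """Mimic the 'Cache update strategy' property of a cache put."""
    if strategy == "replace":
        cache[key] = value            # Replace if present
    elif strategy == "keep original":
        cache.setdefault(key, value)  # Add only if the key doesn't exist
    return cache[key]

cache = {}
put(cache, "uuid-1234", "first value")
print(put(cache, "uuid-1234", "second value", "replace"))       # second value
print(put(cache, "uuid-1234", "third value", "keep original"))  # second value
```

With "replace" the latest flowfile wins; with "keep original" the first cached value for an identifier is preserved, which is why a unique identifier per flowfile (such as a UUID) matters when every value must be kept.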