Member since: 07-08-2016
Posts: 260
Kudos Received: 44
Solutions: 10
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2474 | 05-02-2018 06:03 PM
 | 5019 | 10-18-2017 04:02 PM
 | 1613 | 08-25-2017 08:59 PM
 | 2191 | 07-21-2017 08:13 PM
 | 8817 | 04-06-2017 09:54 PM
12-08-2016
02:18 AM
@Saikrishna Tarapareddy In that case, I'd use another command to list the subfolders first, then pass each subfolder to a zip command. The NiFi flow looks like below.

List Sub Folders (ExecuteProcess): I used the find command here:

find source-files -type d -depth 1

The command produces a flow file containing one subfolder per line. So, split those outputs, then use ExtractText to extract the subfolder path into an attribute 'targetDir'. You need to add a dynamic property by clicking the plus sign, then name the property with the attribute name you want the content extracted to. The value is a regular expression that extracts the desired text from the content.

I used ExecuteStreamCommand this time, so it can work on incoming flow files:
- Command Path: zip
- Command Arguments: -r;${targetDir}-${now():format('yyyyMMdd')}.zip;${targetDir}
- Ignore STDIN: true

Then it will zip each subfolder.

Thanks, Koji
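For reference, here is the plain-shell equivalent of what this flow does end to end — a minimal sketch, assuming the subfolders live under source-files and zip is on the PATH:

```bash
#!/usr/bin/env bash
# List immediate subdirectories, then zip each one, mirroring the
# ExecuteProcess -> split/ExtractText -> ExecuteStreamCommand flow above.
# (On GNU find, use -mindepth 1 -maxdepth 1 instead of -depth 1.)
find source-files -type d -depth 1 | while IFS= read -r targetDir; do
    # Archive name mirrors the NiFi expression ${targetDir}-${now():format('yyyyMMdd')}.zip
    zip -r "${targetDir}-$(date +%Y%m%d).zip" "$targetDir"
done
```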
11-23-2016
04:52 PM
@Andy LoPresto I have generated all the .pem files as you suggested and tried to test from the openssl command line. It looks like it is able to do the handshake, but it shows an alert/warning towards the end. I am attaching the log from openssl.
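For anyone reproducing this, the handshake can be tested from the command line like so — a minimal sketch, with a hypothetical host/port and placeholder .pem file names:

```bash
# Attempt a mutual-TLS handshake against a NiFi node and print the
# certificate chain plus any alerts (host, port, and file names are placeholders).
openssl s_client -connect nifi-host.example.com:9443 \
    -cert client-cert.pem \
    -key client-key.pem \
    -CAfile ca-cert.pem
```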
11-24-2016
12:30 AM
@Saikrishna Tarapareddy I hadn't read this comment when I wrote a reply a few seconds ago. Glad to hear that it worked!
11-14-2016
09:49 PM
I believe the process you have is spot on and keeps the number of processors to a minimum. Matt
10-27-2016
09:44 PM
@mclark @Matt Burgess So in any case, does the file need to be read into memory before it splits, whether by lines or by bytes? I was hoping the next processor would start work once it received the first split. In my case it waited 8 minutes until it split the 10 GB file into 1200+ splits. If my files are about 100 GB each (I have 18 such files), I am scared to run the whole flow for all of them. I may have to run it for each file, one by one.
06-12-2019
10:55 AM
@mkalyanpur do I have to copy the krb5.conf file from my Hive server to the NiFi server?
10-20-2016
07:53 PM
It worked when I changed my ReplaceText to the format below. I think this has to do with the urlencoded MIME type: remember, we cannot send special characters like @ directly, so I had to send it as %40. This is how the ReplaceText processor's replacement value looks:

grant_type=password&client_id=6e880286&client_secret=d12f0f6d41cfe81fcfc122e3fc17a833&username=Saikrishna.Tarapareddy%40purina.nestle.com&password=7heStuhuwa

I also had mime.type = application/x-www-form-urlencoded set in my UpdateAttribute processor. Thank you all.
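For comparison, the same form-encoded request can be made outside NiFi with curl, which percent-encodes each value for you (so @ becomes %40) — a minimal sketch with placeholder credentials and a hypothetical token endpoint:

```bash
# --data-urlencode percent-encodes each value, and curl sends the body
# with Content-Type: application/x-www-form-urlencoded by default.
curl -X POST https://example.com/oauth/token \
    --data-urlencode "grant_type=password" \
    --data-urlencode "client_id=YOUR_CLIENT_ID" \
    --data-urlencode "client_secret=YOUR_CLIENT_SECRET" \
    --data-urlencode "username=first.last@example.com" \
    --data-urlencode "password=YOUR_PASSWORD"
```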
08-12-2019
03:23 AM
Nifi_AutoDeploymentScript/ is really helpful for workflow deployment. However, I am looking for more details on:
1. controller services
2. reading the variables of the source process group and deploying only those variables per environment (see the sketch below)
3. reading JSON attributes
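For item 2, the variables of a process group can be read over the NiFi REST API before deploying — a minimal sketch, assuming a NiFi 1.x instance (where the variable-registry endpoint exists) and hypothetical host and process-group id:

```bash
# Fetch the variable registry of a process group; NIFI_HOST and PG_ID are placeholders.
curl -s "http://NIFI_HOST:8080/nifi-api/process-groups/PG_ID/variable-registry" \
    | python3 -m json.tool
```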
10-12-2016
02:24 PM
1 Kudo
@Saikrishna Tarapareddy The FlowFile repo will never get close to 1.2 TB in size; that is a lot of wasted money on hardware. You should ask your vendor about splitting that RAID into multiple logical volumes, so you can allocate a large portion of it to other things. Logical volumes are also a safe way to protect the RAID1 where your OS lives: if some error condition results in a lot of logging, the application logs may eat up all your disk space and affect your OS. With logical volumes you can protect your root disk.

If that is not possible, I would recommend changing your setup to a bunch of RAID1 arrays. With the 16 x 600 GB hard drives you have allocated above, you could create 8 RAID1 disk arrays:
- 1 for root + software install + database repo + logs (make sure you have monitoring set up to watch disk usage on this RAID if logical volumes cannot be supported)
- 1 for the FlowFile repo
- 3 for the content repo
- 3 for the provenance repo

Thanks, Matt
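To point NiFi at those separate arrays, each repository can be given its own directory (or directories) in nifi.properties — a minimal sketch, assuming hypothetical mount points; the extra content and provenance entries follow NiFi's convention of additional named directory properties:

```properties
# nifi.properties (mount points are placeholders)
nifi.flowfile.repository.directory=/raid1/flowfile_repository

# Content repository spread across three RAID1 arrays
nifi.content.repository.directory.default=/raid2/content_repository
nifi.content.repository.directory.content2=/raid3/content_repository
nifi.content.repository.directory.content3=/raid4/content_repository

# Provenance repository spread across three RAID1 arrays
nifi.provenance.repository.directory.default=/raid5/provenance_repository
nifi.provenance.repository.directory.prov2=/raid6/provenance_repository
nifi.provenance.repository.directory.prov3=/raid7/provenance_repository
```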