Member since: 07-30-2019
Posts: 3090
Kudos Received: 1543
Solutions: 899

My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 94 | 10-31-2024 06:33 AM |
 | 173 | 10-31-2024 06:07 AM |
 | 190 | 10-23-2024 09:50 AM |
 | 201 | 10-23-2024 06:40 AM |
 | 320 | 10-23-2024 06:33 AM |
11-14-2016
09:49 PM
I believe the process you have is spot on and keeps the number of processors to a minimum. Matt
11-14-2016
09:47 PM
1 Kudo
@ambud.sharma Each node in a NiFi cluster runs its own copy of the dataflow and works on its own set of FlowFiles. Node A, for example, is unaware of the existence of Node B. NiFi persists all FlowFiles (attributes and content) into local repositories on each node in the cluster, which is why it is important to make those repositories fault tolerant (for example, by using RAID 10 disks for your repos). Should a node go down, as long as you have access to those repositories and a copy of the flow.xml.gz, you can recover your dataflow where it left off, even if that means spinning up a new NiFi instance and pointing it at the existing repositories. NiFi ships with no automated, built-in process for this. While nodes at this time are not aware of other nodes or the data they currently have queued, this is a roadmap item for a future version of NiFi. To the best of my knowledge, the HA data plane work has not been committed to any particular release. Thanks, Matt
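For reference, the repositories and flow file mentioned above live at the locations configured in nifi.properties. A minimal sketch of the relevant entries (the paths shown are illustrative defaults, not values from this thread; adjust them to your install):

```
# nifi.properties (illustrative default locations)
nifi.flow.configuration.file=./conf/flow.xml.gz
nifi.flowfile.repository.directory=./flowfile_repository
nifi.content.repository.directory.default=./content_repository
nifi.provenance.repository.directory.default=./provenance_repository
```

Pointing a fresh NiFi at existing copies of these directories (plus the flow.xml.gz) is what makes the manual recovery described above possible.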
11-14-2016
09:38 PM
1 Kudo
@Saikrishna Tarapareddy
S2S does not use LDAP for server authentication. S2S uses the keystore and truststore provided in the nifi.properties file to establish a secured, mutually authenticated connection between two secured NiFi instances/clusters. The destination NiFi dictates whether the S2S connection will be secure or not. If you have secured your destination NiFi, then only a source NiFi (the one with the RPG) that has been configured with its own server keystore and truststore will be able to connect, since S2S relies on certificates for mutual authentication. The user authentication method you choose can be different on each NiFi installation: LDAP on one, certs on another, etc... Thanks, Matt
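A minimal sketch of the nifi.properties entries involved in securing S2S (the file paths and passwords here are placeholders, not values from this thread):

```
# nifi.properties (placeholder values)
nifi.remote.input.secure=true
nifi.security.keystore=/opt/nifi/conf/keystore.jks
nifi.security.keystoreType=JKS
nifi.security.keystorePasswd=changeit
nifi.security.truststore=/opt/nifi/conf/truststore.jks
nifi.security.truststoreType=JKS
nifi.security.truststorePasswd=changeit
```

Both the source and destination NiFi need their own keystore/truststore pair configured this way before a secured S2S connection can be established.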
11-14-2016
08:06 PM
I also recommend against putting quotes around your folder names ('MS1' should be just MS1).
11-14-2016
08:04 PM
1 Kudo
@Saikrishna Tarapareddy Sounds like your Conditional EL statements are not resulting in a boolean true in your UpdateAttribute processor.
After some FlowFiles get routed through the UpdateAttribute processor, let them queue on the outbound connection (stop the next processor). Right-click on the connection and select "List queue". Click on the "view details" icon to the far left of a FlowFile and look at the attributes on that FlowFile. Do you see the expected "Folder" attribute? Is it set to the correct value? If it does not exist, does the filename exactly match one of the provided strings in your EL condition statements? Thanks, Matt
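As an illustration of the kind of rule that must evaluate to boolean true, a hypothetical UpdateAttribute (Advanced UI) rule might look like the following (the attribute name "Folder" comes from this thread; the filename value 'MS1' is an assumption based on it):

```
# UpdateAttribute Advanced UI rule (illustrative)
Rule:      route-ms1
Condition: ${filename:equals('MS1')}   # must evaluate to boolean true
Action:    Folder = MS1                # attribute set only when the condition is true
```

If the condition never returns true (for example, because of a case mismatch or stray quotes in the filename), the Folder attribute will simply never be set.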
11-14-2016
01:09 PM
1 Kudo
@Lucas Alvarez The SplitJSON processor splits an incoming JSON into multiple output JSON messages. You should use the EvaluateJSONPath processor to extract the URL from your splits and assign it to a FlowFile attribute you can then use in your InvokeHTTP processor. Thanks, Matt
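A sketch of the processor configuration this describes, assuming each split contains a top-level "url" field (the field name and the attribute name "http.url" are assumptions, not details from the question):

```
EvaluateJSONPath:
  Destination:  flowfile-attribute
  http.url:     $.url          # JSONPath expression into each split

InvokeHTTP:
  Remote URL:   ${http.url}    # EL reference to the extracted attribute
```

The key point is that Destination must be set to flowfile-attribute so the extracted value lands on the FlowFile, where InvokeHTTP's Expression Language can read it.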
11-11-2016
05:06 PM
1 Kudo
@Raj B The toDate NiFi Expression Language function expects its argument to define the current format of the value being passed in. The result is the number of milliseconds since Jan. 1st, 1970. The format function takes a standard date (milliseconds since Jan. 1st, 1970) and converts it into the desired output format as defined in the function. Assuming you have an attribute Abc.DateTimeOfMessage with a value of 20161011075959, the following NiFi EL statement will produce the output '2016/10/11 07:59:59':

${Abc.DateTimeOfMessage:toDate('yyyyMMddHHmmss'):format('yyyy/MM/dd HH:mm:ss')}

The above EL statement first converts the date you have into the standard date format (milliseconds since 1/1/1970) using the toDate function, then passes that result to the format function, which converts a standard date into the desired output string you are looking for.

*** An alternative EL statement that will yield the same result is:

${Abc.DateTimeOfMessage:replaceAll('^([0-9]{4})([0-9]{2})([0-9]{2})([0-9]{2})([0-9]{2})([0-9]{2})','$1/$2/$3 $4:$5:$6')}

The above uses the EL replaceAll function with Java capture groups to break apart the incoming value, then uses the values of those 6 capture groups to reconstruct the output in the format you want. There are even more ways, but I figured this is good enough. Thanks, Matt
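Outside of NiFi, the same two conversions can be sanity-checked with a short Python sketch (this is not NiFi code, just a way to verify that the date pattern and the capture-group regex produce the expected string):

```python
import re
from datetime import datetime

raw = "20161011075959"

# Equivalent of toDate('yyyyMMddHHmmss'):format('yyyy/MM/dd HH:mm:ss'):
# parse with the input pattern, then render with the output pattern.
parsed = datetime.strptime(raw, "%Y%m%d%H%M%S").strftime("%Y/%m/%d %H:%M:%S")

# Equivalent of the replaceAll capture-group approach:
# six groups (year, month, day, hour, minute, second) reassembled.
pattern = r"^([0-9]{4})([0-9]{2})([0-9]{2})([0-9]{2})([0-9]{2})([0-9]{2})"
rewritten = re.sub(pattern, r"\1/\2/\3 \4:\5:\6", raw)

print(parsed)     # 2016/10/11 07:59:59
print(rewritten)  # 2016/10/11 07:59:59
```

Both routes yield the same string, mirroring the two EL statements above.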
11-10-2016
01:21 PM
@vlundberg This has nothing to do with being installed via Ambari. If the core-site.xml file being used by the HDFS processor in NiFi references a class which NiFi does not include, you will get a NoClassDefFound error. Adding a new class to NiFi's HDFS NAR bundle may be a possibility, but as I am not a developer I can't speak to that. You can always file an Apache Jira against NiFi for this change. https://issues.apache.org/jira/secure/Dashboard.jspa Thanks, Matt
11-10-2016
01:01 PM
4 Kudos
@kumar Check out this template, as it will do exactly what you are looking for: Retry_Count_Loop.xml. Just feed your failure relationship into this process group and route the output from this process group back to your processor. Thanks, Matt
11-09-2016
04:40 PM
1 Kudo
@Obaid Salikeen Unfortunately, the answer is no at this time. NiFi has ZooKeeper as a dependency in HDF Ambari, so it is installed when the NiFi service is selected. Once NiFi is deployed, there is nothing stopping you from updating the NiFi configs via Ambari to point at the existing ZooKeeper you already have installed elsewhere. Keep in mind that other services within the HDF Ambari stack also rely on ZooKeeper, so you may need to reconfigure them as well. Matt