09-18-2017 07:57 PM
Hi @sally sally, You can do that with the InvokeHTTP processor: after you make the first service call, connect the Response relationship to trigger the next service. That way, each service is triggered only once we get a response from the previous one. Example: in my flow below, service 1 is triggered by a GenerateFlowFile processor, and I connected the Response relationship to the service 2 InvokeHTTP processor. The service 2 processor triggers only when it gets a response from service 1, and keep in mind that the response from service 1 will be overwritten by the response from service 2.
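Outside NiFi, the same "call service 2 only after service 1 responds" sequencing can be sketched in shell (the URLs are placeholders, not from the original flow):

```sh
# Call service1; && runs the second call only if the first succeeded
# (i.e. returned a response), mirroring the Response relationship.
resp1=$(curl -sf http://service1.example.com/api) && \
resp2=$(curl -sf http://service2.example.com/api)

# As in the NiFi flow, only the last response survives.
echo "$resp2"
```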
09-15-2017 08:46 PM
Hi @sally sally,

1. The ListHDFS processor is designed to store its last state. When you configure ListHDFS, you specify a directory name in its properties; once the processor lists all the files that exist in that directory, it stores as state the maximum time at which a listed file was stored into HDFS. You can view the state info by clicking the View State button; if you want to clear the state, open View State and click Clear State.

2. Once ListHDFS has saved its state, if you run the processor on a cron- or timer-driven schedule, it only checks for new files after the state timestamp.

Note: ListHDFS runs on the primary node only, but the state value is stored across all nodes of the NiFi cluster, so if the primary node changes there won't be any issues with duplicates.

Example:

hadoop fs -ls /user/yashu/test/
Found 1 items
-rw-r--r--   3 yash hdfs          3 2017-09-15 16:16 /user/yashu/test/part1.txt

If I configure ListHDFS to list all the files in the above directory, the state of the processor should match the time part1.txt was stored into HDFS, in our case 2017-09-15 16:16. The state is kept as Unix time in milliseconds; converting the state time to date-time format gives:

Unix time in milliseconds: 1505506613479
Timestamp: 2017-09-15 16:16:53

So the processor has stored the state; when it runs again, it lists only the new files stored into the directory after the state timestamp and updates the state with the new state time (i.e. the maximum file time in the Hadoop directory).
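If you want to double-check what a stored state value means, you can convert the millisecond epoch on the command line (GNU date shown; the value is the one from the example above, and the output depends on your local timezone):

```sh
# Strip the millisecond part (1505506613479 -> 1505506613),
# then format the remaining Unix seconds as a timestamp.
date -d @1505506613 '+%Y-%m-%d %H:%M:%S'
# 2017-09-15 16:16:53
```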
09-16-2017 07:13 AM
Thank you, your answer was very helpful 😄
08-31-2017 12:39 PM
@sally sally The user who is logged in and building the dataflow has no correlation to who that dataflow runs as. All the processors on the canvas are executed by the user who owns the NiFi process itself. So when you set up an SSL Context Service with a specific keystore and truststore, it is the PrivateKeyEntry in that keystore that will be used as the user for authentication and authorization during any established connection. The TrustedCertEntry(s) in the truststore provided to the SSL Context Service will be used to establish trust of the server certificates presented by the endpoint (in your case, the certs presented by your NiFi nodes) during the two-way TLS handshake.

Now this is a little different than logging in to the UI via the browser. Two-way TLS is not enforced by your browser like it is by NiFi's processors. Your browser likely did not trust the cert presented by your NiFi nodes, and the first time you connected you added an exception saying you would like to trust that unknown cert coming from the NiFi node. Within NiFi and the SSL Context Service there is no way to add such an exception, so trust must work in both directions. This means the truststore you use in your SSL Context Service must be able to trust the certificates presented by each of your NiFi nodes.

Thanks, Matt
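One way to verify what the keystore and truststore in your SSL Context Service actually contain is keytool (the file names and password below are placeholders):

```sh
# The keystore should contain your client identity as a PrivateKeyEntry.
keytool -list -v -keystore keystore.jks -storepass changeit | grep -i 'entry type'

# The truststore must contain the NiFi node certs (or the CA that
# signed them) as trustedCertEntry entries.
keytool -list -v -keystore truststore.jks -storepass changeit | grep -i 'entry type'
```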
08-25-2017 06:38 PM
Using your browser's Developer Tools, use the UI to clear a queue while monitoring the Network tab. Everything the Apache NiFi UI does is performed via the REST API, so you will see exactly which requests are sent to the server to clear the connection queue and can recreate them programmatically. The specific API endpoint you want in this case is POST /flowfile-queues/{id}/drop-requests, where {id} is the connection ID.
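A minimal sketch with curl (the host, port, and connection ID are placeholders; on a secured instance you would also need to supply an access token):

```sh
# Ask NiFi to drop everything currently queued on the connection.
# The response body includes a drop-request id.
curl -s -X POST \
  "http://localhost:8080/nifi-api/flowfile-queues/<connection-id>/drop-requests"

# Poll the drop request until it reports finished, then clean it up.
curl -s "http://localhost:8080/nifi-api/flowfile-queues/<connection-id>/drop-requests/<drop-request-id>"
curl -s -X DELETE "http://localhost:8080/nifi-api/flowfile-queues/<connection-id>/drop-requests/<drop-request-id>"
```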
08-24-2017 01:21 PM
This article shows how to make customized logs in NiFi (from source logs): https://community.hortonworks.com/articles/65027/nifi-easy-custom-logging-of-diverse-sources-in-mer.html Use nifi-app as your source log (it logs application operations), not nifi-user (which logs user activity).
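NiFi's logging is driven by conf/logback.xml; as a rough sketch of the kind of appender/logger pair such customization involves (the appender name, file path, and logger class here are illustrative, not taken from the article):

```xml
<!-- Illustrative only: a dedicated file appender for selected log output. -->
<appender name="CUSTOM_FILE" class="ch.qos.logback.core.FileAppender">
    <file>logs/nifi-custom.log</file>
    <encoder>
        <pattern>%date %level [%thread] %logger{40} %msg%n</pattern>
    </encoder>
</appender>

<!-- Route one logger to the custom appender instead of nifi-app.log. -->
<logger name="org.apache.nifi.processors.standard.LogAttribute"
        level="INFO" additivity="false">
    <appender-ref ref="CUSTOM_FILE"/>
</logger>
```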
08-23-2017 07:45 PM
From my answer to this question on Stack Overflow:

To extract the desired value, use the XPath expression //ErrorCode. This will return the String value -7. By selecting Destination flowfile-attribute, you can keep the flowfile content unchanged and put this new value in a flowfile attribute (e.g. one named attribute).

You can then chain the matched relationship to an UpdateAttribute processor and use the expression ${attribute:toNumber()} to convert the value to a numerical representation; e.g. ${attribute:toNumber():plus(10)} would return 3.
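You can sanity-check the XPath outside NiFi with xmllint (the sample document below is invented for illustration):

```sh
# A minimal document containing the error code.
cat > sample.xml <<'EOF'
<response>
  <ErrorCode>-7</ErrorCode>
</response>
EOF

# //ErrorCode matches the element anywhere in the document.
xmllint --xpath '//ErrorCode/text()' sample.xml
# -7
```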
08-18-2017 12:04 PM
@sally sally If the answer helped, please accept it. Thanks