Member since: 04-29-2016
Posts: 192
Kudos Received: 20
Solutions: 2
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 2325 | 07-14-2017 05:01 PM |
| | 4092 | 06-28-2017 05:20 PM |
11-20-2025
11:06 PM
Hi everyone, I hope you’re doing well. I am working on a dataflow in Apache NiFi 1.18, and I need to retrieve the queue size information (flowfile count and content size) directly within NiFi itself, not via an external script or Postman. I know that the NiFi REST API provides this data, and I can access it successfully using external tools. However, my goal is to access queue metrics from inside NiFi, for example through processors like InvokeScriptedProcessor, QueryNiFiReportingTask, or any other built-in mechanism, without sending an external REST API request from outside NiFi. Is there a recommended approach, processor, or reporting task that allows NiFi to read its own queue sizes internally? If not, what would be the best practice to achieve this? Any guidance or examples would be greatly appreciated. Thank you in advance!
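One option worth exploring (a minimal, untested sketch, not an official recipe): ScriptedReportingTask runs inside NiFi and is handed a ReportingContext whose EventAccess exposes the live status of every connection, including queued flowfile count and bytes, so no REST request ever leaves the instance. In Jython (script engine "python") it could look roughly like this:

```python
# Sketch for ScriptedReportingTask (script engine: python/Jython).
# 'context' (ReportingContext) and 'log' (ComponentLog) are bound by the
# task itself -- nothing leaves the NiFi JVM.
def walk(group_status):
    # each ConnectionStatus carries the queue depth of one connection
    for conn in group_status.getConnectionStatus():
        log.info("{0}: {1} flowfiles / {2} bytes queued".format(
            conn.getName(), conn.getQueuedCount(), conn.getQueuedBytes()))
    # recurse into child process groups
    for child in group_status.getProcessGroupStatus():
        walk(child)

# root ProcessGroupStatus for the whole flow, read from inside NiFi
walk(context.getEventAccess().getControllerStatus())
```

If SQL is preferable, QueryNiFiReportingTask (from the nifi-sql-reporting bundle) reportedly exposes the same data as a CONNECTION_STATUS table that can be queried with a SELECT, so that may be worth a look as well.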
05-28-2024
03:18 AM
Could someone please help me with this? Fetch Provenance data using SiteToSiteProvenanceRe... - Cloudera Community - 388418. The site-to-site configuration is not working over HTTP when NiFi is running on HTTPS.
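For anyone landing here, one thing to double-check (an assumption about the setup, not a confirmed fix): when the NiFi instance itself is secured with HTTPS, site-to-site must also be secured, so the reporting task's Destination URL needs to be the https URL and an SSL Context Service needs to be assigned; a plain http destination will not connect. Roughly:

    Destination URL       https://<nifi-host>:<https-port>/nifi
    Input Port Name       <your remote input port>
    Transport Protocol    HTTP   (site-to-site over the HTTPS port)
    SSL Context Service   StandardSSLContextService with your keystore/truststore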
10-04-2021
09:39 PM
Hello @RyanCicak, I'm trying this flow but it doesn't work for me. This is my flow. What should I do? Thanks!
09-18-2020
07:03 AM
Here you go. Open a bash script and put in the info:

url="https://<your environment>:9091/nifi-api/"
url_encoded="application/x-www-form-urlencoded"
accept_header="Accept: application/json"
content_type="Content-Type: application/json"
data="username=<your user name>&password=<your password>"
rm -rf /tmp/a
end_point="access/token"
curl -k -X POST "${url}${end_point}" -H "${url_encoded}" --data "${data}" > /tmp/a
token=$(cat /tmp/a)
bearer="Authorization: Bearer ${token}"
# Now you are all set to run the curl commands
# Get root id
end_point="process-groups/root"
curl -k -X GET "${url}${end_point}" -H "${content_type}" -H "${bearer}"
# Get all the components under root, which includes all process groups, processors,
# controller services, connections and everything.
# NOTE: the identifier id WILL NOT MATCH your NiFi id. They are different,
# but this is a one-stop for full automation.
end_point="process-groups/root/download"
curl -k -X GET "${url}${end_point}" -H "${content_type}" -H "${bearer}"
03-31-2020
07:16 AM
@pvillard How does this work exactly? I'm having issues segmenting large files as well. When I split them, do I split multiple times or just once, and can I then recombine them successively? Thanks for your help!
09-25-2019
10:23 AM
The question posted is not a hypothetical one; it is a real use case. FYI, here is another thread related to partial file consumption: https://stackoverflow.com/questions/45379729/nifi-how-to-avoid-copying-file-that-are-partially-written. That thread does not suggest the OS automatically takes care of this. The solution proposed there is to add a time wait between ListFile and FetchFile, but in our case the requirement is to wait for an indicator file before we start file ingestion.
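For the indicator-file requirement, one hedged sketch (the ".done" suffix is an assumption; the attribute names are ListFile's standard output attributes): an ExecuteScript processor between ListFile and FetchFile that only passes a listing through once its indicator file exists, e.g. in Jython:

```python
# Hypothetical ExecuteScript (engine: python/Jython) placed between ListFile
# and FetchFile; 'session', 'REL_SUCCESS' and 'REL_FAILURE' are bound by the processor.
import os

flowfile = session.get()
if flowfile is not None:
    # ListFile writes 'absolute.path' and 'filename'; ".done" is an assumed convention
    indicator = os.path.join(flowfile.getAttribute('absolute.path'),
                             flowfile.getAttribute('filename') + '.done')
    if os.path.exists(indicator):
        session.transfer(flowfile, REL_SUCCESS)
    else:
        session.transfer(flowfile, REL_FAILURE)  # retry later until the indicator appears
```

Failures could be looped back to the same processor so each listing is retried until its indicator file shows up.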
09-04-2019
05:02 AM
The "writes attributes" list of some processors (SplitRecord, ReplaceText) does not contain the error attribute they write during execution. How and where can the error be identified in that case?
03-14-2018
02:10 AM
@Pranay Vyas The Hive Export/Import worked well for us. Thanks.
10-20-2017
02:54 PM
@Andrew Lim thanks for clarifying further.
05-15-2018
03:59 PM
1 Kudo
Unfortunately "--hive-overwrite" option destroy hive table structure and re-create it after that which is not acceptable way. The only way is: 1. hive> truncate table sample; 2. sqoop import --connect jdbc:mysql://yourhost/test --username test --password test01 --table sample --hcatalog-table sample