Member since: 04-29-2016
Posts: 192
Kudos Received: 20
Solutions: 2

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1641 | 07-14-2017 05:01 PM |
| | 2783 | 06-28-2017 05:20 PM |
05-28-2024
03:18 AM
Could someone please help me with this? Fetch Provenance data using SiteToSiteProvenanceRe... - Cloudera Community - 388418. Site-to-site configuration is not working over HTTP when NiFi is running on HTTPS.
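For what it's worth: when NiFi itself runs on HTTPS, site-to-site must be secured as well, so plain-HTTP site-to-site is expected to fail. A minimal sketch of the nifi.properties keys that usually matter here (the host and port values are placeholders, not taken from the original post):

```
# Site-to-site settings in nifi.properties; on a secured (HTTPS) NiFi,
# nifi.remote.input.secure must be true and peers must connect over TLS.
nifi.remote.input.host=<nifi-host>
nifi.remote.input.secure=true
nifi.remote.input.http.enabled=true
nifi.remote.input.socket.port=10000
```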
10-04-2021
09:39 PM
Hello @RyanCicak, I'm trying this flow but it doesn't work for me. This is my flow. What should I do? Thanks.
09-18-2020
07:03 AM
Here you go. Open a bash script and put in your info:

```bash
#!/bin/bash
# Fill in your environment, username and password below
url="https://<your environment>:9091/nifi-api/"
url_encoded="Content-Type: application/x-www-form-urlencoded"
accept_header="Accept: application/json"
content_type="Content-Type: application/json"
data="username=<your user name>&password=<your password>"

# Request an access token
rm -f /tmp/a
end_point="access/token"
curl -k -X POST "${url}${end_point}" -H "${url_encoded}" --data "${data}" > /tmp/a
token=$(cat /tmp/a)
bearer="Authorization: Bearer ${token}"

# Now you are all set to run the curl commands

# Get the root process group id
end_point="process-groups/root"
curl -k -X GET "${url}${end_point}" -H "${content_type}" -H "${bearer}"

# Get all the components under root, which includes all process groups,
# processors, controller services, connections and everything.
# NOTE: the identifier id WILL NOT MATCH your NiFi id. They are different,
# but this is a one-stop call for full automation.
end_point="process-groups/root/download"
curl -k -X GET "${url}${end_point}" -H "${content_type}" -H "${bearer}"
```
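If you want to capture the root id for reuse in later API calls, a minimal follow-up sketch (assuming jq is installed on the box; the top-level id field is standard in the NiFi REST response):

```bash
# Pull the root process group id out of the JSON response for reuse
# in later API calls (assumes jq is available).
root_id=$(curl -sk -X GET "${url}process-groups/root" -H "${bearer}" | jq -r '.id')
echo "Root process group id: ${root_id}"
```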
03-31-2020
07:16 AM
@pvillard How does this work exactly? I'm having issues segmenting large files as well. When I split them, do I do it multiple times, or just once and then recombine them successively? Thanks for your help!
09-25-2019
10:23 AM
The question posted is not a hypothetical one; it is a real use case. FYI, here is another thread related to partial file consumption: https://stackoverflow.com/questions/45379729/nifi-how-to-avoid-copying-file-that-are-partially-written. That thread does not suggest that the OS automatically takes care of this. The solution proposed there is to add a time wait between ListFile and FetchFile, but in our case the requirement is to wait for an indicator file before we start file ingestion.
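Outside of NiFi, the indicator-file pattern itself is simple. A minimal bash sketch, with hypothetical directory and file names, of what "wait for the marker before ingesting" means:

```bash
#!/bin/bash
# The producer writes data.csv and touches data.csv.done when finished;
# the consumer only picks up files whose .done marker already exists.
# Directory and extensions here are hypothetical placeholders.
data_dir="/landing"
for f in "${data_dir}"/*.csv; do
  [ -e "${f}" ] || continue          # skip when the glob matches nothing
  if [ -f "${f}.done" ]; then
    echo "safe to ingest: ${f}"
    # ... hand the file to the ingestion flow here ...
  fi
done
```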
09-04-2019
05:02 AM
The Writes Attributes of some processors (SplitRecord, ReplaceText) do not contain the error written during execution. How and where can the error be identified in that case?
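One place such errors usually do surface is the bulletin board. A hedged sketch using the standard NiFi REST endpoint, reusing the ${url} and ${bearer} variables from the token script above:

```bash
# Query the bulletin board for recent processor errors/warnings
# (GET /flow/bulletin-board is part of the standard NiFi REST API).
end_point="flow/bulletin-board?limit=25"
curl -k -X GET "${url}${end_point}" -H "${bearer}"
```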
08-20-2019
11:29 AM
As a general best practice, I suggest sending those metrics to an altogether separate monitoring system (something like InfluxDB). You can't effectively monitor a system with the system itself. If that system fails... you risk losing visibility. #JustSayin
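As a concrete illustration (not the only way to do it), pushing a metric into a standalone InfluxDB 1.x instance is a one-liner against its standard line-protocol write endpoint; the host, database, and measurement names here are hypothetical:

```bash
# Write one data point to a separate InfluxDB 1.x instance via the
# standard /write endpoint (placeholder host, db and measurement).
curl -i -X POST "http://influxdb-host:8086/write?db=nifi_metrics" \
  --data-binary "jvm_heap_used,host=nifi-node1 value=123456789"
```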
03-14-2018
02:10 AM
@Pranay Vyas The Hive Export/Import worked well for us. Thanks.
10-20-2017
02:54 PM
@Andrew Lim thanks for clarifying further.
05-15-2018
03:59 PM
1 Kudo
Unfortunately, the "--hive-overwrite" option destroys the Hive table structure and re-creates it afterwards, which is not an acceptable approach. The only way is:
1. hive> truncate table sample;
2. sqoop import --connect jdbc:mysql://yourhost/test --username test --password test01 --table sample --hcatalog-table sample
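Wrapped into a script, the same two steps look like this (a sketch reusing the example connection values from above):

```bash
#!/bin/bash
# Truncate the Hive table first, then reload it in place via HCatalog,
# so the table definition itself is never dropped and re-created.
hive -e "truncate table sample;"
sqoop import --connect jdbc:mysql://yourhost/test \
  --username test --password test01 \
  --table sample --hcatalog-table sample
```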