Member since
02-01-2022
281
Posts
103
Kudos Received
60
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 1120 | 05-15-2025 05:45 AM |
|  | 4951 | 06-12-2024 06:43 AM |
|  | 7923 | 04-12-2024 06:05 AM |
|  | 5837 | 12-07-2023 04:50 AM |
|  | 3205 | 12-05-2023 06:22 AM |
05-31-2023
07:29 AM
Cloudera making hard stuff easy again! Great article Ryan!!
05-31-2023
07:26 AM
@jnifi It sounds like you are missing some headers. If you know which ones they are, you can click the + on the NiFi InvokeHttp processor and add each one as a dynamic property. If you set the log level to DEBUG, you should be able to see the request and response objects in the log files to confirm the format is correct. I think that will get you past the 401.
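To make the header step concrete, here is a minimal Python sketch of what a complete request looks like. The URL, token, and header values are placeholders, not from the original thread; the point is that the headers you add with the + button on InvokeHttp should match what you see in the DEBUG request logged to logs/nifi-app.log.

```python
import urllib.request

# Hypothetical endpoint and token, purely for illustration -- not from
# the original thread.
url = "https://api.example.com/v1/items"
req = urllib.request.Request(url, data=b'{"name": "test"}', method="POST")

# The same headers you would add with the "+" button on InvokeHttp:
req.add_header("Authorization", "Bearer <your-token>")
req.add_header("Content-Type", "application/json")

# Inspect what will actually be sent; compare this against the DEBUG
# request in logs/nifi-app.log to spot the header that is missing.
print(sorted(req.header_items()))
```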
05-31-2023
05:35 AM
2 Kudos
@Fredi The solution you are looking for is to call session.commit() in your script, inside the loop, whenever you want to send out the current flowfile content. This sends a flowfile downstream immediately. It is little known because the commit is implied at the end of the script, which assumes one execution produces one flowfile. Here is an example from my fraud detection demo: a while loop that does some counts, sends multiple flowfiles of good transactions, and in random iterations during 20 loops sends some fraud transaction flowfiles.

```python
# Jython inside NiFi's ExecuteScript. The imports and the
# WriteContentCallback class are the usual boilerplate this snippet
# relies on; create_fintran(), create_fraudtran(), ticks, fraud_tick,
# DELAY, FRAUD_TICK_MIN, and FRAUD_TICK_MAX are defined earlier in
# the demo script.
import json
import random
from time import sleep
from org.apache.nifi.processor.io import OutputStreamCallback

class WriteContentCallback(OutputStreamCallback):
    def __init__(self, content):
        self.content = content
    def process(self, outputStream):
        outputStream.write(self.content.encode("utf-8"))

# All processing code starts at this indent
while ticks < 20:
    ticks += 1
    fintran = create_fintran()
    fintransaction = json.dumps(fintran)
    flowFile = session.create()
    flowFile = session.write(flowFile, WriteContentCallback(fintransaction))
    session.transfer(flowFile, REL_SUCCESS)
    session.commit()  # ship this flowfile now instead of at script end
    sleep(DELAY)
    if ticks > fraud_tick:
        fraudtran = create_fraudtran(fintran)
        fraudfintransaction = json.dumps(fraudtran)
        flowFile2 = session.create()
        flowFile2 = session.write(flowFile2, WriteContentCallback(fraudfintransaction))
        session.transfer(flowFile2, REL_SUCCESS)
        session.commit()
        fraud_tick = random.randint(FRAUD_TICK_MIN, FRAUD_TICK_MAX)
```
05-31-2023
04:38 AM
Pull Request is in!! https://github.com/apache/nifi/pull/7316 This will be fixed in future releases of NiFi. @jisaitua send me a DM with your email if you need to use the new processor right away.
05-30-2023
11:42 AM
1 Kudo
@jisaitua I am not sure how long it's going to take, but I should have a GitHub commit ready with a new PutBigQuery NAR that you can use with 1.21. I will update again tomorrow morning.
05-30-2023
11:05 AM
Some progress: the resource resolves to 'projects/gcp-se-cdp-sandbox-env/datasets/dataset/tables/tablename' with datasets = ${bq.dataset} and tables = ${bq.table.name}.
05-30-2023
05:58 AM
1 Kudo
@jisaitua Thank you for making such a well-written post with all of the right screenshots, etc. I am working on duplicating this issue, and so far I have confirmed the same experience: INVALID_ARGUMENT: Invalid project resource name projects/${test}; Project id: ${test}. This is a known JIRA bug: https://issues.apache.org/jira/projects/NIFI/issues/NIFI-11608?filter=allissues I will work on getting some traction here!!
05-30-2023
05:31 AM
@jnifi If you are able to get a remote API call working with Postman, you can definitely get it working with InvokeHttp. Not having an OAuth2 setup to test with myself, here are some suggestions:

- Fully document the required Postman settings: headers, request body, POST values, etc.
- Duplicate them in NiFi.
- Set the InvokeHttp log level to DEBUG.
- Tail logs/nifi-app.log while testing.

Once you have that operational understanding and full visibility of the errors, start iterating in NiFi, testing each of the settings. Getting the NiFi/InvokeHttp outcome to match Postman is much easier this way. Additionally, in future replies, please show screenshots of the processor configuration, or share flow definition/template files where possible.
05-30-2023
05:25 AM
@banshidhar_saho I am assuming you are not actually using @EXAMPLE.COM. Have you confirmed that your client (macOS) has network and DNS connectivity to the KDC host? There are a few things you must do to configure it properly:

- Ensure the Kerberos client libraries are installed on the client host.
- Copy your on-prem krb5.conf file to the client host. The [realms] and [domain_realm] sections are especially important for solving your issue.
- Ensure the hostname of your KDC can be resolved from the client (test with nslookup and/or ping). This must work correctly for Kerberos to function. If there is no integrated DNS, add entries to /etc/hosts so the resolution is correct.
- Ensure any firewalls are configured to open the required ports between your application and your on-prem environment: for client-to-KDC communication, typically port 88 over both UDP and TCP.
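For reference, here is a minimal krb5.conf sketch showing the two sections called out above. The realm and hostnames are placeholders; copy the real values from your on-prem file.

```ini
[libdefaults]
    default_realm = EXAMPLE.COM

[realms]
    EXAMPLE.COM = {
        kdc = kdc01.example.com:88
        admin_server = kdc01.example.com
    }

[domain_realm]
    .example.com = EXAMPLE.COM
    example.com = EXAMPLE.COM
```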
05-19-2023
04:05 AM
@soc88 Some suggestions so the community can better help you:

- Show screenshots of the processor's configuration tab. We need to see the properties and how you have set up the processor.
- For processors with errors (red boxes), you can click the box to see the full error. We need to see the errors to suggest solutions.
- Look in nifi-app.log if you need more verbose errors than the UI shows. You can also raise the processor log level to see more in the UI; for example, set it to DEBUG and test again.

I would suspect your error could be the configuration of the processor, but I would more suspect OpenSearch permissions on the receiving end, and less the version of NiFi.
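As a sketch of the log-level step above: in NiFi you can also raise verbosity for a single processor class by adding a logger entry to conf/logback.xml. The class name below is illustrative, not taken from the original thread; substitute the fully qualified class of the processor you are using.

```xml
<!-- conf/logback.xml: raise verbosity for one processor class.
     Class name is an example; use your processor's exact class.
     NiFi picks up the change on its logback scan period, or after
     a restart. -->
<logger name="org.apache.nifi.processors.elasticsearch.PutElasticsearchJson" level="DEBUG"/>
```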