Member since: 11-14-2019
Posts: 51
Kudos Received: 6
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 1177 | 05-26-2023 08:40 AM
 | 1297 | 05-22-2023 02:38 AM
 | 1198 | 05-15-2023 11:25 AM
07-04-2024
12:42 PM
1 Kudo
I am facing the same issue. If you have identified the cause, please share how you solved it.
01-26-2024
02:09 AM
2 Kudos
Hi @SandyClouds, I ran into this issue before. After some research I found that when you use ConvertJsonToSQL, NiFi assigns the timestamp data type (value = 93 in the sql.args.[n].type attribute). When PutSQL runs the generated SQL statement, it parses each value according to its assigned type and formats it accordingly. For timestamps, however, it expects the value to be in the format "yyyy-MM-dd HH:mm:ss.SSS", so if the milliseconds are missing from the original datetime value, it fails with the error message you specified.

To resolve the issue, make sure to assign 000 milliseconds to your datetime value before it reaches the PutSQL processor. You can do that in the source JSON itself before the conversion to SQL, or after the conversion using UpdateAttribute; with the latter option you have to know which sql.args.[n].value holds the datetime and use Expression Language to reformat it.

If that helps, please accept the solution. Thanks
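As a minimal sketch of the UpdateAttribute approach (assuming the datetime landed in sql.args.1.value without milliseconds; adjust the index to match your own flow), an Expression Language statement like this re-parses the value and appends the missing milliseconds:

```
sql.args.1.value : ${sql.args.1.value:toDate('yyyy-MM-dd HH:mm:ss'):format('yyyy-MM-dd HH:mm:ss.SSS')}
```

A value like "2023-05-26 08:40:00" would come out as "2023-05-26 08:40:00.000", which PutSQL can then parse as a timestamp.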
09-19-2023
07:00 AM
1 Kudo
@SandyClouds The "Jolt Specification" property in the JoltTransformJson processor supports NiFi Expression Language (it is evaluated using FlowFile attributes and the variable registry). This means that any attribute present on the FlowFile passed to the JoltTransformJson processor can be used in the spec. So if an upstream processor adds an attribute named "customer_id" to your FlowFile, you can simply use NiFi Expression Language in your spec to pull in that attribute's value.

NiFi Parameters are referenced using "#{<parameter name>}", which looks up that parameter in the parameter context assigned to the NiFi Process Group and returns its value. NiFi Expression Language statements start with "${" and end with "}". Between those you can reference an attribute's value, apply NiFi Expression Language functions against an attribute's value, etc. The simplest NiFi Expression Language statement would be "${customer_id}", which looks for that attribute on the FlowFile being processed and returns its value. We refer to "customer_id" as the "subject" of that NiFi EL statement. NiFi EL does have a number of subjectless functions, like the "now()" you are using, but more typical use cases start with a subject (a FlowFile attribute). I strongly encourage you to read through the linked NiFi EL guide to better understand the many ways you can use NiFi EL statements.

If you found any of the suggestions/solutions provided helped you with your issue, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
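As a small sketch of combining the two (the attribute name "customer_id" and the target field are illustrative, not from your flow), a default-operation Jolt spec can inject a FlowFile attribute's value into the output JSON:

```
[{
  "operation": "default",
  "spec": {
    "customer_id": "${customer_id}"
  }
}]
```

Because the spec itself supports Expression Language, "${customer_id}" is resolved against the FlowFile's attributes before the Jolt transform runs.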
07-07-2023
06:31 AM
1 Kudo
@SandyClouds What I am saying is that the rest-api call response you got via the command line is probably not the exact same request made by NiFi. What that response JSON tells you is that the user who made the request only has READ on all three buckets. You did not share the complete rest-api call you made, so I am guessing you did not pass any user authentication token or clientAuth certificate in your rest-api request, and thus that request was treated as being made by an anonymous user. Since your bucket(s) have "make publicly visible" checked, an anonymous user will have READ-only access to the buckets.

In NiFi you are trying to start version control on a Process Group. That means the NiFi node certificate needs to be authorized on all buckets and authorized to proxy the NiFi authenticated user (who also needs read and write on the specific bucket to start version control). If the response received back by NiFi during the request is that the node or user only has READ, then no buckets would be shown in that UI, since the action that UI performs is a WRITE.

You should use the "developer tools" in your web browser to capture the actual full request being made by NiFi when you try to start version control on a process group. Most developer tools even give you an option to "copy as cURL", which you can then paste into a cmd window and execute manually yourself outside of NiFi to see the same response NiFi is getting.

My guess here is that you are having a TLS handshake issue, resulting in your requests to NiFi-Registry being anonymous requests. The NiFi logs are not going to show you details of the authorization done on the NiFi-Registry side (you'll need to check the nifi-registry-app.log for that). All the NiFi log you shared shows is that your "new_admin" user was authorized to execute the NiFi rest-api endpoint, which triggered the connection attempt with NiFi-Registry, which also appeared successful (likely as anonymous).
If you look at the verbose output for the NiFi keystore, NiFi truststore, NiFi-Registry keystore, and NiFi-Registry truststore, you'll be able to determine whether a mutual TLS exchange is even possible between the two services using the keystores and truststores you have in place. These verbose outputs, which you can get using the keytool command, will also show you the EKUs and SANs present on your certificates: `<path to JDK>/keytool -v -list -keystore <keystore or truststore file>` When prompted for a password, you can just hit enter without providing one.

The screenshots you shared showing the authorizations configured in NiFi-Registry are sufficient. I was simply pointing out that you gave too much authorization to your NiFi node in the global policy section. This is why I am leaning more toward a TLS handshake issue, resulting in an anonymous user authentication, rather than an authorization issue.

You'll also want to inspect your nifi-registry-app.log for the authorization request coming from NiFi to see whether it is authorizing "anonymous" or some other string. Perhaps you have some identity mapping pattern configured on the NiFi-Registry side that is trimming the CN from your certificate DNs? In that case you could also see this behavior, because NiFi-Registry would be looking up authorization for a different identity string (instead of "CN=new_admin, OU=Nifi", it may be looking up just "new_admin"). NiFi and NiFi-Registry authorizations are exact, case-sensitive string matches; anything else is treated as a different user.

If you found that the provided solution(s) assisted you with your query, please take a moment to login and click Accept as Solution below each response that helped. Thank you, Matt
07-04-2023
06:53 AM
Thank you @MattWho. Sorry to comment late on this. Just to add more details to the answer and how it worked for me: in my docker compose, I earlier used the environment variable below: - INITIAL_ADMIN_IDENTITY='CN=admin, OU=NiFi' but somehow it was not interpreted properly and kept the extra quotes as-is inside the container. So even though the line below looks odd, it worked: - INITIAL_ADMIN_IDENTITY=CN=admin, OU=NiFi (observe I removed the single quotes around the value)
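For context, a minimal docker-compose sketch of the working form (the service name and image tag are illustrative, not from the original post):

```yaml
services:
  nifi:
    image: apache/nifi:latest
    environment:
      # Quoting the value ('CN=admin, OU=NiFi') passes the literal quotes into
      # the container, so the identity string no longer matches the certificate DN.
      # Leaving the value unquoted works:
      - INITIAL_ADMIN_IDENTITY=CN=admin, OU=NiFi
```

Since NiFi authorization identities are exact string matches, stray quote characters in the value are enough to break the initial admin lookup.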
06-27-2023
07:17 AM
@alim Can you please suggest?
06-15-2023
02:22 AM
1 Kudo
Thanks @MattWho. I created https://issues.apache.org/jira/browse/NIFI-11695 Hoping for the best!
06-08-2023
02:11 AM
1 Kudo
@MattWho Thanks so much for clearing up my confusion about how my jobs were running automatically when I restarted the server. I have restarted a couple of times but never knew about this feature. You are awesome!!
06-07-2023
12:24 PM
2 Kudos
@SandyClouds Some clarity and additions to @cotopaul's Pros and Cons:

Single Node:
PROs:
- easy to manage. <-- Setup and managing configuration is easier since you only need to do that on one node. But in a cluster, all nodes' configuration files will be almost the same (some variation in hostname properties and certificates if you secure your cluster).
- easy to configure. <-- There are more configurations needed in a cluster setup, but once set up, nothing changes from the user experience when it comes to interacting with the UI.
- no https required. <-- Not sure how this is a PRO. I would not recommend using an unsecured NiFi, as doing so allows anyone access to your dataflows and the data being processed. You can also have an unsecured NiFi cluster, though I do not recommend that either.
CONs:
- in case of issues with the node, your NiFi instance is down. <-- Very true, a single point of failure.
- it uses plenty of resources when it needs to process data, as everything is done on a single node.

Cluster:
PROs:
- redundancy and failover --> when a node goes down, the others will take over and process everything, meaning that you will not get affected. <-- Not completely accurate. Each node in a NiFi cluster is only aware of the data (FlowFiles) queued on that specific node, and each node works only on the FlowFiles present on that node. It is the responsibility of the dataflow designer/builder to build their dataflows in such a way as to ensure distribution of FlowFiles across all nodes. When a node goes down, any FlowFiles currently queued on that down node are not going to be processed by the other nodes. However, the other nodes will continue processing their own data and all new data coming into your cluster's dataflows.
- the used resources will be split among all the nodes, meaning that you can cover more use cases than on a single node. <-- Nodes do not share or pool resources across the cluster. If your dataflow(s) are built correctly, the volume of data (FlowFiles) being processed will be distributed across all your nodes, allowing each node to process a smaller subset of the overall FlowFile volume. This means more resources available across your cluster to handle more volume.
NEW -- A NiFi cluster can be accessed via any one of the member nodes. No matter which node's UI you access, you will be presented with stats for all nodes. There is a cluster UI, accessible from the global menu, that shows a breakdown of each node. Any change you make from the UI of one member node is replicated to all nodes.
NEW -- Since all nodes run their own copy of the flow, a catastrophic node failure does not mean loss of all your work, since the same flow.json.gz (which contains everything related to your dataflows) can be retrieved from any of the other nodes in your cluster.
CONs:
- complex setup as it requires a Zookeeper + plenty of other config files. <-- A NiFi cluster requires a multi-node Zookeeper setup. A Zookeeper quorum is required for cluster stability and also stores cluster-wide state needed by your dataflows. Zookeeper is responsible for electing one node in your cluster to the Cluster Coordinator role and one to the Primary Node role. If a node holding one of these roles goes down, Zookeeper will elect one of the remaining nodes to the role.
- complex to manage --> analysis will be done on X nodes instead of a single node. <-- Not clear. Yes, you have multiple nodes, and each node produces its own set of NiFi logs. However, if a component within your dataflow produces bulletins (exceptions), it will report all nodes, or the specific node(s) on which the bulletin was produced. Cloudera offers centralized management of your NiFi cluster deployment via the Cloudera Manager software, which makes deploying and managing a NiFi cluster across multiple nodes easy, sets up and configures Zookeeper for you, and makes securing your NiFi easy as well by generating the needed certificates/keystores for you.

Hope this helps, Matt
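As a rough sketch of the extra per-node configuration a cluster involves (the hostnames and ports here are illustrative; each node gets its own copy of nifi.properties with its own address):

```
# Mark this instance as a cluster node and give it its own address
nifi.cluster.is.node=true
nifi.cluster.node.address=nifi-node1.example.com
nifi.cluster.node.protocol.port=11443

# Zookeeper quorum used for Cluster Coordinator / Primary Node election
# and for cluster-wide state
nifi.zookeeper.connect.string=zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181
```

A standalone node, by contrast, leaves nifi.cluster.is.node=false and needs none of the Zookeeper settings.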
05-26-2023
08:40 AM
Thank you @MattWho @SAMSAL for your wonderful suggestions. I achieved a solution using the JoltTransformJSON processor, where I used the below Jolt specification: [{ "operation": "default", "spec": { "*": { "error": "${putdatabaserecord.error}" } } }] Thanks again for your thought-provoking answers.