Member since: 07-30-2019
Posts: 3406
Kudos Received: 1622
Solutions: 1008
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 181 | 12-17-2025 05:55 AM |
| | 242 | 12-15-2025 01:29 PM |
| | 179 | 12-15-2025 06:50 AM |
| | 277 | 12-05-2025 08:25 AM |
| | 460 | 12-03-2025 10:21 AM |
10-19-2020
08:34 AM
@sarath_rocks55 If you are looking for assistance with an issue, please post a question in the community rather than adding a comment to a community article. Comments on community articles should be about the article content only. Thank you.
10-13-2020
05:59 AM
@nikolayburiak Have you tried defining the keytab and principal directly in the Hbase_2_ClientService configuration, rather than using the KeytabCredentialsService, to see if ticket renewal works correctly? This may get you past the issue for now and also help identify whether the issue lies with the controller services. Thanks, Matt
09-24-2020
11:34 AM
1 Kudo
@Jarinek I am not completely clear on what you mean by "needs to be initialized by data". NiFi processor components transfer FlowFiles between processors via connections. Each connection can contain one or more relationships, and relationships are defined in each processor component's code. There are many stock processors for ingesting data (for example: ListenTCP, ListenHTTP, QueryDatabase*, SelectHive*, etc.). From an ingest processor, a successful FlowFile would be routed to the "success" relationship. That "success" relationship would then be routed via a connection as input to your custom processor. Your custom processor code would need to pull FlowFiles from the inbound connection's queue, process them, and place the resulting FlowFiles on one or more relationships defined by your processor code, based on the outcome of that processing.

There are numerous blogs online with examples of building custom NiFi components. I suggest starting by reading the Apache NiFi Developer's Guide: https://nifi.apache.org/developer-guide.html Then look at some sample blogs like:
https://community.cloudera.com/t5/Community-Articles/Build-Custom-Nifi-Processor/ta-p/244734
https://www.nifi.rocks/developing-a-custom-apache-nifi-processor-json/
https://medium.com/hashmapinc/creating-custom-processors-and-controllers-in-apache-nifi-e14148740ea

*** Note: Regardless of what you read in the above blogs, keep in mind the following:
1. Do NOT add your custom nar to the default NiFi lib directory. It is advisable to define a custom lib directory in the nifi.properties file just for your custom components. Refer to the Apache NiFi Admin Guide for more detail: https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html
2. Avoid building more functionality than needed into a single processor component. It makes reuse in different use cases harder.

Hope this helps, Matt
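The pull-from-the-inbound-queue, route-to-a-relationship pattern described above can be sketched as a toy model. This is plain Python standing in for the Java NiFi API; the queues, attribute names, and "processing" step are illustrative, not real NiFi classes:

```python
from queue import Queue, Empty

# Toy stand-ins for NiFi connections: queues of FlowFiles between processors.
inbound = Queue()    # fed by the upstream processor's "success" relationship
success = Queue()    # this processor's "success" relationship
failure = Queue()    # this processor's "failure" relationship

def on_trigger():
    """One scheduling pass of a hypothetical custom processor: pull a
    FlowFile from the inbound connection's queue, process it, and place
    the result on a relationship based on the outcome."""
    try:
        flowfile = inbound.get_nowait()
    except Empty:
        return  # nothing queued; NiFi would simply yield
    try:
        flowfile["content"] = flowfile["content"].upper()  # the "processing"
        success.put(flowfile)
    except Exception:
        failure.put(flowfile)

# An upstream ingest processor would have queued this FlowFile.
inbound.put({"filename": "example.txt", "content": "hello"})
on_trigger()
```

A real processor extends `AbstractProcessor` from the nifi-api and does the equivalent via `session.get()` and `session.transfer()`, but the queue-and-relationship shape is the same.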
09-21-2020
05:53 AM
1 Kudo
@dvmishra It is always best to start a new thread/question rather than adding a new question to an existing thread that already has an accepted answer.

As far as being able to reset the offset for a specific consumer group from within NiFi itself, this is not something that can be done via the ConsumeKafka processors. The offset is not stored by NiFi; offsets for each consumer group are stored in Kafka. It would not make much sense to build such an option into a NiFi processor even if it were possible: every time the processor executed, it would reset the offset, which is probably not the desired outcome.

There are numerous threads online around resetting the offset in Kafka you may want to explore. Here are a couple:
https://gist.github.com/marwei/cd40657c481f94ebe273ecc16601674b
https://gist.github.com/mduhan/0e0a4b08694f50d8a646d2adf02542fc

If you can figure out how to accomplish the reset via a custom script or external command, NiFi does offer several script execution and command line execution processors. You may be able to use these processors to execute your script to reset the offset in Kafka.

Aside from the above, you can change the "group id" (new consumer group) and change the "offset reset" to "earliest", then restart the processor to start consuming the topic from the beginning again as a different consumer group. Hope this helps, Matt
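Because committed offsets live broker-side per consumer group, switching to a new group id with "offset reset" set to "earliest" re-reads the topic from the start. A toy stdlib sketch of that behavior (illustrative only, not the Kafka client API):

```python
# Toy model: the broker stores one committed offset per consumer group.
topic_log = ["rec-0", "rec-1", "rec-2"]
committed = {}  # {group_id: next offset to read}

def consume(group_id, auto_offset_reset="latest"):
    """Return unread records for a group, mimicking Kafka offset handling:
    a group with no committed offset starts at the beginning only when
    auto.offset.reset=earliest; otherwise it starts at the log end."""
    if group_id not in committed:
        committed[group_id] = 0 if auto_offset_reset == "earliest" else len(topic_log)
    records = topic_log[committed[group_id]:]
    committed[group_id] = len(topic_log)  # commit after consuming
    return records
```

This is why a brand-new group id plus "earliest" replays everything, while the existing group id keeps its committed position regardless of the processor's reset setting.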
09-14-2020
09:20 AM
2 Kudos
@Umakanth From your shared log lines we can see two things:

1. "LOG 1" shows "StandardFlowFileRecord[uuid=345d9b6d-e9f7-4dd8-ad9a-a9d66fdfd902" and "LOG 2" shows "Successfully sent [StandardFlowFileRecord[uuid=f74eb941-a233-4f9e-86ff-07723940f012". This tells us these are two different FlowFiles, each named "RandomFile1154.txt". So it does not look like the RPG sent the same FlowFile twice, but rather sent two FlowFiles that each reference the same content. I am not sure how you have your LogAttribute processor configured, but you should look for the log output produced by these two uuids to learn more about these two FlowFiles. I suspect from your comments you will only find one of them passed through your LogAttribute processor.

2. We can see from both logs that the above two FlowFiles point at the exact same content in the content_repository:
"LOG 1" --> claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1599937413340-1, container=default, section=1], offset=1073154, length=237],offset=0,name=RandomFile1154.txt,size=237]
"LOG 2" --> claim=StandardContentClaim [resourceClaim=StandardResourceClaim[id=1599937413340-1, container=default, section=1], offset=1109014, length=237],offset=0,name=RandomFile1154.txt,size=237]

This typically happens when a FlowFile becomes cloned somewhere in your dataflow, for example when a relationship from a processor is assigned to two connections. Since you saw that GetFile only ingested the file once, that rules out GetFile as the source of the duplication; had GetFile ingested it twice, you would not have seen identical claim information. LogAttribute only has a single "success" relationship, so if you had drawn two connections with "success" defined in both, you would have seen duplicates of every ingested file, which makes that unlikely as well. That leaves your PutFile processor, which has both "success" and "failure" relationships. I suspect the "success" relationship is assigned to the connection going to your Remote Process Group, and the "failure" relationship to a connection that loops back on the PutFile itself(?). If you had accidentally drawn the "failure" connection twice (one may be stacked on top of the other), any time a FlowFile failed in PutFile it would have been routed to one failure connection and cloned to the other. Then, once both processed successfully through PutFile, you would end up with the original and the clone both sent to your RPG. Hope this helps, Matt
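The cloning behavior described above can be demonstrated with a small sketch (plain Python, not NiFi internals): when one relationship is assigned to two connections, the first connection gets the original FlowFile and each additional connection gets a clone with a fresh uuid but the same content claim:

```python
import uuid

def transfer(flowfile, connections):
    """Route a FlowFile to every connection its relationship is assigned to:
    the first connection receives the original; each additional connection
    receives a clone with a new uuid pointing at the same content claim."""
    routed = [flowfile]
    connections[0].append(flowfile)
    for conn in connections[1:]:
        clone = dict(flowfile, uuid=str(uuid.uuid4()))
        conn.append(clone)
        routed.append(clone)
    return routed

# Two "failure" connections accidentally drawn from the same processor:
failure_a, failure_b = [], []
original = {"uuid": str(uuid.uuid4()),
            "name": "RandomFile1154.txt",
            "claim": "1599937413340-1"}  # illustrative claim id from the logs
transfer(original, [failure_a, failure_b])
```

Both downstream queues end up with a FlowFile of the same name and claim but different uuids, which matches the two log lines above.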
09-14-2020
08:06 AM
1 Kudo
@dhhan74 The attached screenshot shows your running version as Apache NiFi 1.9.0. There is a known issue in that release that results in the ERROR condition you have encountered: https://issues.apache.org/jira/browse/NIFI-6285 This issue was addressed as of Apache NiFi 1.10. Hope this helps, Matt
08-14-2020
06:37 AM
@DivyaKaki The exception implies that the complete trust chain needed for a successful mutual TLS handshake between this NiFi and the target NiFi-Registry does not exist. NiFi uses the keystore and truststore configured in its nifi.properties file, and NiFi-Registry uses the keystore and truststore configured in its nifi-registry.properties file. Openssl can be used to retrieve the public certificates for the complete trust chain:

openssl s_client -connect <nifi-registry-hostname>:<port> -showcerts
openssl s_client -connect <nifi-hostname>:<port> -showcerts

For each public cert you will see:
-----BEGIN CERTIFICATE-----
MIIESjCCAzKgAwIBAgINAeO0mqGNiqmBJWlQuDANBgkqhkiG9w0BAQsFADBMMSAw
HgYDVQQLExdHbG9iYWxTaWduIFJvb3QgQ0EgLSBSMjETMBEGA1UEChMKR2xvYmFs
U2lnbjETMBEGA1UEAxMKR2xvYmFsU2lnbjAeFw0xNzA2MTUwMDAwNDJaFw0yMTEy
MTUwMDAwNDJaMEIxCzAJBgNVBAYTAlVTMR4wHAYDVQQKExVHb29nbGUgVHJ1c3Qg
U2VydmljZXMxEzARBgNVBAMTCkdUUyBDQSAxTzEwggEiMA0GCSqGSIb3DQEBAQUA
A4IBDwAwggEKAoIBAQDQGM9F1IvN05zkQO9+tN1pIRvJzzyOTHW5DzEZhD2ePCnv
UA0Qk28FgICfKqC9EksC4T2fWBYk/jCfC3R3VZMdS/dN4ZKCEPZRrAzDsiKUDzRr
mBBJ5wudgzndIMYcLe/RGGFl5yODIKgjEv/SJH/UL+dEaltN11BmsK+eQmMF++Ac
xGNhr59qM/9il71I2dN8FGfcddwuaej4bXhp0LcQBbjxMcI7JP0aM3T4I+DsaxmK
FsbjzaTNC9uzpFlgOIg7rR25xoynUxv8vNmkq7zdPGHXkxWY7oG9j+JkRyBABk7X
rJfoucBZEqFJJSPk7XA0LKW0Y3z5oz2D0c1tJKwHAgMBAAGjggEzMIIBLzAOBgNV
HQ8BAf8EBAMCAYYwHQYDVR0lBBYwFAYIKwYBBQUHAwEGCCsGAQUFBwMCMBIGA1Ud
EwEB/wQIMAYBAf8CAQAwHQYDVR0OBBYEFJjR+G4Q68+b7GCfGJAboOt9Cf0rMB8G
A1UdIwQYMBaAFJviB1dnHB7AagbeWbSaLd/cGYYuMDUGCCsGAQUFBwEBBCkwJzAl
BggrBgEFBQcwAYYZaHR0cDovL29jc3AucGtpLmdvb2cvZ3NyMjAyBgNVHR8EKzAp
MCegJaAjhiFodHRwOi8vY3JsLnBraS5nb29nL2dzcjIvZ3NyMi5jcmwwPwYDVR0g
BDgwNjA0BgZngQwBAgIwKjAoBggrBgEFBQcCARYcaHR0cHM6Ly9wa2kuZ29vZy9y
ZXBvc2l0b3J5LzANBgkqhkiG9w0BAQsFAAOCAQEAGoA+Nnn78y6pRjd9XlQWNa7H
TgiZ/r3RNGkmUmYHPQq6Scti9PEajvwRT2iWTHQr02fesqOqBY2ETUwgZQ+lltoN
FvhsO9tvBCOIazpswWC9aJ9xju4tWDQH8NVU6YZZ/XteDSGU9YzJqPjY8q3MDxrz
mqepBCf5o8mw/wJ4a2G6xzUr6Fb6T8McDO22PLRL6u3M4Tzs3A2M1j6bykJYi8wW
IRdAvKLWZu/axBVbzYmqmwkm5zLSDW5nIAJbELCQCZwMH56t2Dvqofxs6BBcCFIZ
USpxu6x6td0V7SvJCCosirSmIatj/9dSSVDQibet8q/7UK4v4ZUN80atnZz1yg==
-----END CERTIFICATE-----

The above is just an example public cert from the openssl command run against google.com:443. You will need to make sure that every certificate in the chain returned when running against the NiFi UI is added to the truststore on NiFi-Registry, and vice versa. You'll need to restart NiFi and NiFi-Registry before changes to your keystore or truststore files are read in. Hope this helps, Matt
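Before importing, it is worth sanity-checking that you captured every PEM block from the `-showcerts` output and that each decodes cleanly. A small stdlib sketch (the function name is mine, not from any NiFi tooling):

```python
import base64
import re

def split_pem_certs(text):
    """Extract each PEM certificate block from `openssl s_client -showcerts`
    output and verify its base64 payload decodes, returning the DER bytes.
    A corrupted copy/paste raises here instead of failing later in keytool."""
    blocks = re.findall(
        r"-----BEGIN CERTIFICATE-----\n(.*?)\n-----END CERTIFICATE-----",
        text, re.DOTALL)
    certs = []
    for body in blocks:
        der = base64.b64decode("".join(body.split()))  # raises on corruption
        certs.append(der)
    return certs
```

Each extracted block can then be saved to its own .pem file and added to the opposite side's truststore (for example with `keytool -importcert`); if the count of blocks is lower than the chain length shown by openssl, you are missing an intermediate or root cert.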
08-14-2020
06:11 AM
@Gubbi Since you comment that NiFi starts fine when you remove the flow.xml.gz, this points at an issue with loading/reading the flow.xml.gz file. I would suggest opening your flow.xml.gz in an XML editor and making sure it is valid. When NiFi starts, it loads the flow.xml into heap memory; from then on, all changes made within the UI are made in memory and a flow.xml.gz is written to disk. I would be looking for XML special characters (http://xml.silmaril.ie/specials.html) that were not escaped in the XML that was written out to the flow.xml.gz. Manually removing or correcting the invalid content in the flow.xml.gz may work. You may also get lucky going back through multiple archived copies of the flow.xml.gz; perhaps one still exists from prior to the config change that introduced the issue. Note: Be aware that starting NiFi without the flow.xml.gz will result in the loss of all queued FlowFiles. Hope this helps, Matt
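The validity check can be scripted rather than eyeballed in an editor. A minimal stdlib sketch (the path argument is illustrative):

```python
import gzip
import xml.etree.ElementTree as ET

def validate_flow(path):
    """Return True if the gzipped flow XML parses cleanly; on failure print
    the parse error (unescaped special characters such as & or < in a
    property value surface here, with a line/column position) and return False."""
    try:
        with gzip.open(path, "rb") as fh:
            ET.parse(fh)
        return True
    except ET.ParseError as err:
        print("not valid XML: %s" % err)
        return False
```

Running this against conf/flow.xml.gz and each archived copy under the NiFi conf archive directory quickly identifies the newest copy that still parses.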
08-14-2020
05:51 AM
1 Kudo
@NY There are several bugs related to the content repository failing to clean up under specific conditions. Based on your description and version of HDF, the most likely bugs are the following:

https://issues.apache.org/jira/browse/NIFI-6846
https://issues.apache.org/jira/browse/NIFI-5771

Both of the above result in cleanup occurring only on NiFi restart once the condition occurs. If you are using the VolatileFlowFileRepository implementation, which is uncommon, you may be hitting: https://issues.apache.org/jira/browse/NIFI-6236 While you are not hitting the jira below, since it was only introduced in Apache NiFi 1.9.1 (HDF 3.4.0) and fixed in HDF 3.4.1, I am sharing it for completeness: https://issues.apache.org/jira/browse/NIFI-6150

All of the above except NIFI-6846 are addressed in HDF 3.4.1 (Apache NiFi 1.9.2). All of the above are fixed in HDF 3.5.1 (Apache NiFi 1.11.4). I recommend upgrading to the latest HDF 3.5.x. Hope this helps, Matt
07-15-2020
09:23 AM
@bhara While ldap/AD queries are generally case insensitive by default, NiFi is case sensitive. So user "bbhimava" and user "Bbhimava" would be treated as two different users. Within the nifi.properties file you can utilize identity and group mapping patterns to manipulate the case of the returned user and group strings: https://nifi.apache.org/docs/nifi-docs/html/administration-guide.html#identity-mapping-properties

Note that mapping patterns are evaluated in alphanumeric order, and the first pattern that matches is applied, so make sure less specific patterns like "^(.*)$" are the last to be evaluated. For example, nifi.security.identity.mapping.pattern.dn=<java regex> would be evaluated before nifi.security.identity.mapping.pattern.username=<java regex>. When/if a match is found, the corresponding value is applied and the transform performed:

nifi.security.identity.mapping.value.dn=$1
nifi.security.identity.mapping.transform.dn=<NONE, LOWER, or UPPER>

Looking at your ldap-user-group-provider configuration, I see the following needed changes:
1. I recommend you set the "Page Size" property. If the number of results exceeds the ldap server's max page size, not all results may be returned to NiFi. By setting a page size, NiFi will ask for results in multiple pages instead of one response with all results. Defaults are generally either 500 or 1000, so setting a "Page Size" of 500 is safe.
2. The "User Search Scope" and "Group Search Scope" properties should be set to "SUBTREE" and not "sub".
3. The "User Group Name Attribute" property does not support a comma-separated list of attributes. I suggest just setting it to "memberOf".

Without sample output from your ldap/AD for user bbhimava and one of the groups you are trying to sync on, it is impossible for me to validate any of your other settings. The log line shared below:

o.a.n.w.a.c.AccessDeniedExceptionMapper identity[bbhimava], groups[none] does not have permission to access the requested resource. No applicable policies could be found. Returning Forbidden response.

indicates that when user "bbhimava" was looked up by the NiFi authorizer, no associated groups were returned, which means Ranger could only be queried for policies assigned directly to "bbhimava". Adding the following line to your NiFi logback.xml file will give you debug output in your nifi-app.log when the ldap-user-group-provider executes, showing exactly which users and groups were returned based upon your settings and the resulting user/group associations that were discovered:

<logger name="org.apache.nifi.ldap.tenants.LdapUserGroupProvider" level="DEBUG"/>

Hope this helps, Matt
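The pattern/value/transform trio behaves roughly like this sketch (plain Python `re` standing in for NiFi's Java regex evaluation; the DN format and pattern are illustrative, not taken from your configuration):

```python
import re

# Illustrative equivalents of the nifi.properties trio:
#   nifi.security.identity.mapping.pattern.dn=^CN=(.*?),.*$
#   nifi.security.identity.mapping.value.dn=$1
#   nifi.security.identity.mapping.transform.dn=LOWER
PATTERN = r"^CN=(.*?),.*$"
VALUE = r"\1"          # Python's spelling of the $1 capture reference
TRANSFORM = "LOWER"    # one of NONE, LOWER, UPPER

def map_identity(raw_identity):
    """Apply the value template only when the pattern matches, then apply
    the case transform, mirroring how NiFi normalizes identity strings."""
    match = re.match(PATTERN, raw_identity)
    if not match:
        return raw_identity  # no pattern matched; identity used as-is
    mapped = match.expand(VALUE)
    if TRANSFORM == "LOWER":
        mapped = mapped.lower()
    elif TRANSFORM == "UPPER":
        mapped = mapped.upper()
    return mapped
```

With this mapping, a DN returning "Bbhimava" and a login of "bbhimava" resolve to the same NiFi identity, so Ranger policies only need to exist for one spelling.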