Member since
07-30-2019
3392
Posts
1618
Kudos Received
1001
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 418 | 11-05-2025 11:01 AM |
| | 310 | 11-05-2025 08:01 AM |
| | 449 | 11-04-2025 10:16 AM |
| | 666 | 10-20-2025 06:29 AM |
| | 806 | 10-10-2025 08:03 AM |
06-10-2025
05:35 AM
@agriff I did not know that you were using the Apache NiFi 2.x release. The component list I provided is from the Apache NiFi 1.x release. NiFi 2.x switched from having numerous client-version-specific Kafka processors to single Kafka processors that use a KafkaConnectionService controller service component to define the Kafka client version. In Apache NiFi 2.x the only connection service included is for the Kafka 3 client. I understand the Kafka 3 client to be backwards compatible with Kafka 2.6, but it sounds like you are having success using it with Kafka 2.5. Glad to hear you were able to resolve your underlying schema issue.

Setting the Bulletin level on a processor has absolutely nothing to do with the log levels written to nifi-app.log. It only controls what level of bulletins are created within the NiFi UI. To change logging within the NiFi logs, you will need to modify the logback.xml configuration file found in the NiFi conf directory.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
06-09-2025
06:39 AM
@nifier I would not expect much difference between making the stop request via the NiFi UI or via a rest-api call. Even when you make a request to stop components via the NiFi UI, the UI may quickly show the "stopped" icon on the component, but any active threads are not killed in that process. In fact, the processor is considered "stopping" until all of its active threads complete, however long that takes. While still in the stopping state, you can not modify those components. A component is considered stopping if its "activeThreadCount" is not 0.

When you are executing your rest-api script without the delay, what exception are you encountering? This one?

unable to fulfill this request due to: Cannot start component with <component id> because it is currently stopping

The above means you have active threads. Perhaps you can build a wait loop around the above response until the active threads complete. Or you can capture that component id and execute a terminate-threads command on it:

../nifi-api/processors/<component id>/threads -X DELETE

Terminating threads will not cause data loss. NiFi is not actually killing any threads in this process; the only way to kill threads is via a NiFi restart. Terminating the threads on a component just shifts each thread's output to dev/null and unhooks it from the FlowFile(s) it is associated with in the inbound connection. When the processor is restarted, the FlowFile(s) will be reprocessed by the component. Should the "terminated" thread complete execution, its logging and output simply go to dev/null and results are not written back to a FlowFile. Depending on the processor, this could result in duplicate data on a destination system if the thread was sending data out of NiFi, since NiFi will reprocess the FlowFile originally associated with that terminated thread the next time the processor is started.
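The wait-loop idea can be sketched as follows. This is a minimal sketch, not code from the post: it assumes an unsecured NiFi (no bearer-token handling), and rather than hard-coding the exact JSON nesting of the status response, which varies by NiFi version, it recursively sums every "activeThreadCount" field it finds:

```python
import json
import time
import urllib.request

def active_thread_count(obj):
    """Recursively sum every "activeThreadCount" field in a status
    JSON document, regardless of the exact nesting structure."""
    total = 0
    if isinstance(obj, dict):
        for key, value in obj.items():
            if key == "activeThreadCount" and isinstance(value, int):
                total += value
            else:
                total += active_thread_count(value)
    elif isinstance(obj, list):
        for item in obj:
            total += active_thread_count(item)
    return total

def wait_until_stopped(base_url, pg_id, poll_seconds=1, timeout=300):
    """Poll the process group status endpoint until no component
    reports active threads; raise TimeoutError if we give up."""
    url = f"{base_url}/nifi-api/flow/process-groups/{pg_id}/status"
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        with urllib.request.urlopen(url) as resp:
            status = json.load(resp)
        if active_thread_count(status) == 0:
            return
        time.sleep(poll_seconds)
    raise TimeoutError(f"threads still active after {timeout}s")
```

If your NiFi is secured, you would need to add an Authorization header to the request; the same counting helper also works on the response for a single processor's status endpoint.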
The other option is to get the status of the components in the process group you stopped, parse the JSON for any "activeThreadCount" where the count is not 0, wait 1 sec, make the request again, and repeat this loop until all counts are 0 before making your next rest-api call.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
06-06-2025
11:45 AM
@shiva239
1. If you are building your own custom components for NiFi, I suppose you can have them do whatever you want. But considering your use case, you would be better off building a custom processor rather than a custom controller service. For example, build a custom version of the PutDatabaseRecord processor that, instead of using a connection pool controller service, makes a direct connection for each record.
2. I have nothing set up to test those settings, but based on the settings there is still an opportunity for connection reuse across multiple NiFi FlowFiles. There is the 1 sec between when one processing run ends and the next starts, and the next run may grab the connection that has been idling for that 1 sec. Keep in mind that there is nothing in the DBCPConnectionPool code that would prevent the server from closing connections at the end of a transaction. That is the whole purpose of the "Validation Query" property's existence (though it is not common for the server side to close connections). When the DBCPConnectionPool tries to hand a connection from the pool to a requesting processor, it runs the validation query to make sure the connection is still active. If the validation query fails, that connection is dropped from the pool and a new connection is made. I don't think "Max Idle Connections" is going to do anything since you set "Minimum Idle Connections" to zero, which means no idle connections are kept.

- Can you clarify what -1 indicates? Does it mean no limit on the lifetime of a connection? <-- yes

The settings you have sound solid, but I would still set a validation query to avoid any chance of a race condition where a connection that has idled for 1 sec gets reused even though it has already been closed. The processor would just sit there waiting for a return, assuming the connection was good. But with Minimum Idle Connections set to 0, this may not be an issue. I have not tested with this specific setup. Please help our community grow.
If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
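The test-on-borrow behavior described above can be illustrated with a toy model (this is a hypothetical sketch of the concept, not the actual DBCPConnectionPool code): before handing out a pooled connection, run the validation query; if it fails, discard that connection and fall back to opening a fresh one.

```python
import queue

class ValidatedPool:
    """Toy connection pool illustrating test-on-borrow."""

    def __init__(self, factory, validation_query="SELECT 1"):
        self._factory = factory            # callable returning a new connection
        self._idle = queue.SimpleQueue()   # idle connections awaiting reuse
        self._validation_query = validation_query

    def _is_valid(self, conn):
        try:
            conn.execute(self._validation_query)
            return True
        except Exception:
            return False

    def borrow(self):
        while True:
            try:
                conn = self._idle.get_nowait()
            except queue.Empty:
                return self._factory()     # pool empty: open a new connection
            if self._is_valid(conn):
                return conn                # idle connection still alive: reuse
            conn.close()                   # server closed it: drop and retry

    def release(self, conn):
        self._idle.put(conn)
```

A caller never sees the stale connection: the validation query fails on it, it is discarded, and the caller receives either the next valid idle connection or a brand-new one.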
06-05-2025
09:30 AM
Hello @agriff Welcome to the community. I find it very odd that no ERROR logging is being produced when your PublishKafka processor is routing FlowFiles to the Failure relationship. Keep in mind that NiFi does not control the logging in the third-party libraries that a given processor type may use. So it is possible that the Kafka client library for your specific PublishKafka version has no DEBUG logging. This is not specific to PublishKafka, but can be the case for any processor component that depends on a third-party client library which the open source community has no ability to modify.

The "PublishKafka" processor with no version number in its name is the oldest of all the client versions. It was the first built and was deprecated some time ago because of its age. If your Kafka server is newer than 0.8, you'll want to be using a different version of this processor. There are so many versions of the Kafka-based processors because of client/server incompatibility between versions.

What version of Kafka are you publishing to? What version of Apache NiFi are you using? Which PublishKafka processor are you using? There are numerous ones that use different Kafka client library versions, and you'll want to use the one that aligns with your Kafka server version.

Changing the "bulletin level" within the processor has no effect on the log level for that processor class in the nifi-app.log.
To set this processor class to DEBUG in the nifi-app.log, you'll need to modify the logback.xml in the NiFi conf directory. Example logger line you would add to logback.xml with the rest of the existing loggers:

<logger name="org.apache.nifi.processors.kafka.pubsub.PublishKafka" level="DEBUG"/>

The class name will vary by processor:

org.apache.nifi.processors.kafka.pubsub.PublishKafka
org.apache.nifi.processors.kafka.pubsub.PublishKafka_0_10
org.apache.nifi.processors.kafka.pubsub.PublishKafka_0_11
org.apache.nifi.processors.kafka.pubsub.PublishKafka_1_0
org.apache.nifi.processors.kafka.pubsub.PublishKafka_2_0
org.apache.nifi.processors.kafka.pubsub.PublishKafka_2_6
org.apache.nifi.processors.kafka.pubsub.PublishKafkaRecord_0_10
org.apache.nifi.processors.kafka.pubsub.PublishKafkaRecord_0_11
org.apache.nifi.processors.kafka.pubsub.PublishKafkaRecord_1_0
org.apache.nifi.processors.kafka.pubsub.PublishKafkaRecord_2_0
org.apache.nifi.processors.kafka.pubsub.PublishKafkaRecord_2_6

Sharing your dataflow and processor configuration might also be helpful to your query. Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
06-04-2025
06:43 AM
@Artem_Kuzin I suggest starting by logging in to the Ranger UI and verifying under "Audit" --> "Plugin Status" that your HDFS and Hive services are reported as having downloaded and made active the latest updated policies. If they have not, I would start checking the HDFS and Hive logs for any logging related to issues connecting to Ranger or fetching the policies JSON from it.

Beyond the above, I'd recommend that you open a support case with Cloudera (assuming you have a valid support license), where you can securely share your configuration and logs for more in-depth troubleshooting assistance with this issue.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
06-03-2025
01:08 PM
@lynott It sounds like you created wildcard certificates to use with your NiFi and NiFi-Registry services/instances. I strongly discourage this from a security standpoint. NiFi utilizes its certificates to perform both clientAuth and serverAuth. When used to perform clientAuth, such as connecting to NiFi-Registry, the clientAuth CN is presented as the client/user identifier. This means the server side (NiFi-Registry in this use case) would need to authorize that wildcard CN, which exposes your NiFi-Registry to unauthorized access by any clientAuth certificate that matches the wildcard authorized entity (*.example.com).

I recommend and encourage creating unique certificates per node, with a shared common name as one of the SAN entries, as a security best practice. It is possible to create one certificate that contains SAN entries for every host on which that certificate is used. However, the CN in that one certificate should not use wildcards, so that authorizations can not resolve to any value other than the expected non-wildcard CN.

Looking back at your nifi-registry.properties file, I can see that you do not have https://nifi.apache.org/nifi-docs/administration-guide.html#identity-mapping-properties configured:

nifi.registry.security.identity.mapping.pattern.<somestring>=
nifi.registry.security.identity.mapping.value.<somestring>=
nifi.registry.security.identity.mapping.transform.<somestring>=

Identity Mapping Properties are used to manipulate user/client identities post-authentication and pre-authorization. Without using identity mapping patterns, the complete clientAuth Distinguished Name (DN) will be passed to the authorizer in NiFi-Registry, where it is expected to be authorized for Can Proxy Request (read, write, delete) and Can Manage Buckets (read). Since you are using a wildcard DN, authorizing the NiFi node hostnames in NiFi-Registry is going to accomplish nothing, since those hostnames can't be extracted from the DN for authorization.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
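For example, a mapping that extracts just the CN from a certificate DN might look like this (the pattern below is a hypothetical illustration and must be adjusted to match the actual DN layout of your certificates):

```
nifi.registry.security.identity.mapping.pattern.dn=^CN=(.*?), OU=(.*?)$
nifi.registry.security.identity.mapping.value.dn=$1
nifi.registry.security.identity.mapping.transform.dn=NONE
```

With a mapping like this in place, a DN of "CN=nifi-node1.example.com, OU=NIFI" would be reduced to "nifi-node1.example.com" before authorization, which only helps if each node's certificate carries a unique, non-wildcard CN.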
06-03-2025
05:55 AM
@lynott @sydney- Are you good now? Your last response contains no questions. If my previous response(s) helped with your initial issue, please take a moment to click "Accept as Solution" on those responses. Thank you, Matt
06-03-2025
05:48 AM
1 Kudo
@shiva239 NiFi's DBCPConnectionPool controller service is designed to create a pool of connections the first time it is invoked. These connections can then be used by multiple components on the canvas that are configured to use this same connection pool. This is designed to maximize the performance of the dataflows. You can control the behavior of the connection pool using the configuration properties available to this controller service:

Max Idle Connections
Minimum Evictable Idle Time
Minimum Idle Connections
Soft Minimum Evictable Idle Time
Time Between Eviction Runs

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
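As an illustration only (the values below are hypothetical examples, not recommendations), a pool tuned to keep few idle connections and evict them quickly might be configured as:

```
Max Idle Connections             : 1
Minimum Idle Connections         : 0
Time Between Eviction Runs       : 5 secs
Minimum Evictable Idle Time      : 30 secs
Soft Minimum Evictable Idle Time : 10 secs
```

Note that in Apache Commons DBCP, which backs this controller service, the evictor thread only runs when the time between eviction runs is a positive value; leaving it at a non-positive value effectively disables idle-connection eviction.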
06-02-2025
05:18 AM
1 Kudo
@asand3r Your issue is caused by a misconfiguration in the authorizers.xml file here: <userGroupProvider>
<identifier>composite-configurable-user-group-provider</identifier>
<class>org.apache.nifi.registry.security.authorization.CompositeUserGroupProvider</class>
<property name="User Group Provider 0">file-user-group-provider</property>
<property name="User Group Provider 1">ldap-user-group-provider-1</property>
<property name="User Group Provider 2">ldap-user-group-provider-2</property>
<property name="User Group Provider 3">ldap-user-group-provider-3</property>
<property name="User Group Provider 4">ldap-user-group-provider-4</property>
</userGroupProvider> The wrong "class" is being used and the wrong property name is being used for the file-user-group-provider. It should look like this: <userGroupProvider>
<identifier>composite-configurable-user-group-provider</identifier>
<class>org.apache.nifi.registry.security.authorization.CompositeConfigurableUserGroupProvider</class>
<property name="Configurable User Group Provider">file-user-group-provider</property>
<property name="User Group Provider 1">ldap-user-group-provider-1</property>
<property name="User Group Provider 2">ldap-user-group-provider-2</property>
<property name="User Group Provider 3">ldap-user-group-provider-3</property>
<property name="User Group Provider 4">ldap-user-group-provider-4</property>
</userGroupProvider>

The "class" needs to be: org.apache.nifi.registry.security.authorization.CompositeConfigurableUserGroupProvider

The above class supports one defined "Configurable User Group Provider". A configurable user group provider (file-user-group-provider) is one that allows manual manipulation via the NiFi/NiFi-Registry UI.

Please help our community thrive. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt
05-30-2025
06:05 AM
@Ripul Welcome to the Cloudera Community! Sharing a screenshot would be helpful here, but I assume what you are seeing when you login with your admin user or other users is caused by an authorization issue.

When NiFi is started for the first time, it does not yet have a flow.json.gz file, which contains everything you see on the NiFi canvas. So NiFi will generate that flow.json.gz, which will consist of just a root process group. You'll notice on the canvas the "Operation" panel. It shows the currently selected component on the canvas. With nothing selected on the canvas, it shows details for whichever NiFi process group you are currently displaying. Since this is a new install, what the Operation panel is showing is this generated root process group.

Anytime you see the name shown as just the UUID of a component, it indicates the currently authenticated user is not authorized to view that component. A greyed-out "gear" (configuration) icon indicates the user is not authorized to modify the component. A greyed-out "key" (Access Policies) icon indicates the currently authenticated user is not authorized to view, and possibly modify, the policies (authorizations) on that component.

NiFi provides very granular authorization control, all the way down to the individual component level. This may sound like a lot to manage; however, there is policy inheritance in place. Example: you add a processor to the canvas. If no explicit policy is defined on the processor itself, it will inherit policy from the process group it is inside. If there is no policy defined on that process group, it will inherit policy from its parent process group. At the very top level is the root process group mentioned above. So setting policies on the root process group will control access to everything added to the canvas until an explicit access policy is set on a sub-component.
There are also global policies that can be set up, and your "admin" user should have been granted a number of these. From the global menu found in the upper-right corner, you should see that "Policies" is not greyed out for your admin user. Within the global "Policies", all users need to be granted "view the user interface" in order to access the user interface, so it sounds like you have already done this for other users. Your "admin" user should also have "access all policies" (view and modify), which allows that user to view and modify access policies (authorizations) on every component anywhere on the canvas. This policy is what makes the "key" icon not greyed out in the "Operation" panel mentioned earlier.

So to give select users (including your admin user) the ability to add components to the root process group, your admin user will need to select the key icon on the root process group and grant those users the appropriate component policies. Once your admin user and other users are properly authorized to "view the component", the Operate panel will show the process group name instead of just the process group's assigned UUID. The gear icon will no longer be greyed out once your admin user and other users have "modify the component". "Modify the component" on a process group will also allow the added users to see the component-adding icons at the top of the UI.

I am not going to cover all the NiFi policies here, but they can be found in the NiFi Administration Guide under Configuring Users & Access Policies.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped. Thank you, Matt