Member since 07-30-2019

3381 Posts · 1616 Kudos Received · 998 Solutions
        My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
|  | 295 | 10-20-2025 06:29 AM |
|  | 435 | 10-10-2025 08:03 AM |
|  | 326 | 10-08-2025 10:52 AM |
|  | 338 | 10-08-2025 10:36 AM |
|  | 383 | 10-03-2025 06:04 AM |

09-22-2025 05:32 AM

@HoangNguyen As long as you are running a new enough version of Apache NiFi, the process group configuration will include an option to set a logging suffix. For each process group that you want to have a separate log file, create a unique suffix. In the example above I used the suffix "extracted". In my NiFi "logs" directory, I now have a new "nifi-app-extracted.log" file that contains the logging output of every component contained within that process group.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
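For reference, a quick way to confirm the per-group log file exists; a minimal sketch run from the NiFi installation directory, reusing the "extracted" suffix from this example:

```bash
# Run from the NiFi installation directory; "extracted" is the example suffix above.
ls logs/
# nifi-app.log
# nifi-app-extracted.log   <- per-process-group log created by the suffix

# Follow only the components inside that process group:
tail -f logs/nifi-app-extracted.log
```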

09-18-2025 08:26 AM · 1 Kudo

@asand3r JVM garbage collection is stop-the-world, which prevents the Kafka clients from communicating with Kafka for the duration of that GC event. If that pause is long enough, it could cause Kafka to do a rebalance. I can't say for certain that is what you are experiencing. Maybe put the ConsumeKafka processor class at INFO level logging and monitor the nifi-app.log for any indication of a rebalance happening.

When it comes to GC pauses, a common mistake I see is individuals setting the JVM heap in NiFi way too high simply because the server on which they have installed NiFi has a lot of installed memory. Since GC only happens once the allocated JVM memory utilization reaches around 80%, large heaps can lead to long stop-the-world pauses if there is a lot of clean-up to do.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
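As an illustration of both points, a sketch run from the NiFi installation directory. The heap values shown and the ConsumeKafka class name are assumptions (the class name varies by NiFi and Kafka client version):

```bash
# 1) Heap is set in conf/bootstrap.conf; size it for the workload, not for all
#    installed RAM (the values shown are illustrative, not a recommendation):
grep -E '^java\.arg\.[23]=' conf/bootstrap.conf
# java.arg.2=-Xms8g
# java.arg.3=-Xmx8g

# 2) After raising the processor class to INFO in conf/logback.xml, e.g.
#    <logger name="org.apache.nifi.processors.kafka.pubsub.ConsumeKafka_2_6" level="INFO"/>
#    (the exact class name depends on your version), watch for rebalance activity:
grep -i rebalanc logs/nifi-app.log
```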

09-16-2025 11:51 AM · 1 Kudo

@AlokKumar Then you'll want to build your dataflow around the HandleHTTPRequest and HandleHTTPResponse processors. You build your processing between those two processors, and you may even use multiple HandleHTTPResponse processors to control the response to the request based on the outcome of your processing.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
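As a quick smoke test of such a flow (hypothetical: assumes HandleHTTPRequest is configured to listen on port 8081, and the payload is made up):

```bash
# Send a test request to the HandleHTTPRequest listener; the HTTP status and
# body that come back are produced by whichever HandleHTTPResponse processor
# the request was routed to.
curl -v -X POST -H 'Content-Type: application/json' \
  -d '{"id": 1}' \
  http://nifi-host:8081/
```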

09-16-2025 11:41 AM

@AlokKumar NiFi FlowFiles consist of two parts:

- FlowFile metadata/attributes - stored in the flowfile_repository; holds metadata about the FlowFile and the attributes added to the FlowFile by processors.
- FlowFile content - stored within content claims in the content_repository. A single content claim may hold the content for one to many FlowFiles. Part of a FlowFile's metadata is the location of its content claim, the starting byte of the content, and the total number of bytes.

There is also a claimant count associated with each content claim. It is incremented for every active FlowFile (a FlowFile still present in a queue on the NiFi canvas) that references content stored in that claim. Once a FlowFile reaches a point of auto-termination within a dataflow, the claimant count on the content claim it references is decremented. Once the claimant count reaches zero, the claim is eligible for archive and removal/deletion. Content claims are immutable (they cannot be modified once created); any NiFi processor that modifies or creates content writes that content to a new content claim.

Archived content claims are moved to "archive" subdirectories within the content_repository. Archiving can be disabled, which means content claims whose claimant count reaches zero are immediately deleted. A background archive thread monitors archived content claims and deletes them based on the archive retention settings in the nifi.properties file. A common misunderstanding is how "nifi.content.repository.archive.max.usage.percentage" works. Let's say it is set to 80%. Once the disk where the content_repository resides reaches 80% capacity, archive will start purging archived content claims to attempt to bring disk usage back below 80%. If all archived content claims have been deleted, NiFi will continue to allow new content claims to be created, potentially leading to the disk becoming 100% full. For this reason it is VERY important that the content_repository is allocated its own physical or logical disk.

File System Content Repository Properties
Understanding-how-NiFi-Content-Repository-Archiving-works

With NiFi provenance you are seeing provenance event data, which includes metadata about the FlowFile. If the content claim referenced by the FlowFile in the provenance event no longer exists in the content_repository (either inside an archive subdirectory or outside archive), you'll have no option to replay or view the content. Provenance is written to its own provenance_repository directory, and its retention is also configurable in the nifi.properties file.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
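For reference, these are the nifi.properties keys in play; a sketch of checking them from the NiFi installation directory (the values shown are only illustrative):

```bash
# Archive behavior is controlled by these properties in conf/nifi.properties:
grep '^nifi\.content\.repository\.archive' conf/nifi.properties
# nifi.content.repository.archive.enabled=true
# nifi.content.repository.archive.max.retention.period=12 hours
# nifi.content.repository.archive.max.usage.percentage=50%
```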

09-15-2025 10:41 AM

@ShellyIsGolden What you describe here sounds like the exact use case for NiFi's parameter contexts. Parameters can be used in any NiFi component property. They make it easy to build a dataflow in your dev environment and then move that dataflow to test or prod environments that have the same parameter contexts but with different values. This even works when using a shared NiFi-Registry to version control your ready dataflows for another environment.

Let's say you create a "Parameter Context" and associate it with one or more Process Groups. Now you can configure a property in a processor and click the "convert to parameter" icon to convert that value into a parameter within your parameter context.

Let's say you create a parameter context named "PostgresQL parameters" and configure your Process Group (PG) to use it. Now you can configure/convert the component properties that are unique per NiFi deployment environment to use parameters.

Let's say you are then ready to move that flow to another environment like prod. You version control that PG on dev to NiFi-Registry; then on prod you connect to that same NiFi-Registry and import the dataflow. When that flow is loaded in prod, if a parameter context with the exact same name "PostgresQL parameters" already exists, the imported flow will use that parameter context's values. This eliminates the need to manage these configurations all over the place in your dataflows. You can also open your parameter context and edit values, and NiFi will take care of stopping and starting all the affected components.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
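For completeness, property values reference parameters with #{...} syntax, and a parameter context can also be created over the rest-api. A minimal sketch, assuming token auth is already handled; the host, parameter names, and values are invented for illustration, and the payload shape should be checked against your NiFi version's REST API docs:

```bash
# Property values reference parameters like:
#   Database Connection URL: jdbc:postgresql://#{postgres.host}:5432/#{postgres.db}

# Create the "PostgresQL parameters" context:
curl -X POST "https://nifi-host:8443/nifi-api/parameter-contexts" \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Content-Type: application/json' \
  --data '{
    "revision": {"version": 0},
    "component": {
      "name": "PostgresQL parameters",
      "parameters": [
        {"parameter": {"name": "postgres.host", "sensitive": false, "value": "db.dev.example.com"}},
        {"parameter": {"name": "postgres.password", "sensitive": true, "value": "changeme"}}
      ]
    }
  }'
```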

09-15-2025 06:36 AM

@asand3r With your ConsumeKafka processor configured with 5 concurrent tasks on a NiFi cluster with 3 nodes, you will have 15 (3 nodes x 5 concurrent tasks) consumers in your consumer group, so Kafka will assign two partitions to each consumer in that consumer group. If there are network issues, Kafka may do a rebalance and assign more partitions to fewer consumers. (Of course, the number of consumers in the consumer group changes if you have additional ConsumeKafka processors pointing at the same topic and configured with the same consumer group id.)

Matt
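The arithmetic, as a runnable sketch (assumes a 30-partition topic, consistent with the two-partitions-per-consumer figure above):

```bash
# Each node runs its own copy of the processor, so consumers scale with both
# node count and concurrent tasks:
nodes=3; tasks=5; partitions=30
consumers=$((nodes * tasks))                                # 15 consumers in the group
echo "$((partitions / consumers)) partitions per consumer"  # prints: 2 partitions per consumer
```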

09-12-2025 11:41 AM

@Alexm__ While I have never done anything myself with Azure DevOps pipelines, I don't see why this would not be possible. Dev, test, and prod environments will likely have slight variations in NiFi configuration (source and target service URLs, passwords/usernames, etc.), so when designing your Process Group dataflows you'll want to take that into account and utilize NiFi's parameter contexts to define such variable configuration properties. Sensitive properties (passwords) are never passed to NiFi-Registry, so any version controlled PG imported to another NiFi will not have the passwords set.

Once you version control that PG, you can deploy it through rest-api calls to other NiFi deployments. The first time it is deployed it will simply import the parameter context used in the source (dev) environment. You would need to modify that parameter context in the test and prod environments to set passwords and alter any other parameters as needed by each unique env. Once the modified parameter context of the same name exists in the other environments, promoting new versions of dataflows that use that parameter context becomes very easy: the updated dataflows will continue to use the local env parameter context values rather than those used in dev, and if a new parameter is introduced to the parameter context, it simply gets added to the existing parameter context of the same name in the test and prod envs. So there are some considerations to build into your automated promotion of version controlled dataflows between environments.

Versioning a DataFlow
Parameters in Versioned Flows

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
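As building blocks for such a pipeline, the NiFi Registry rest-api can enumerate what is available to promote. A sketch with a placeholder host and IDs (auth omitted):

```bash
# List the buckets visible to the caller:
curl "https://registry-host:18443/nifi-registry-api/buckets"

# List the committed versions of one versioned flow:
curl "https://registry-host:18443/nifi-registry-api/buckets/<bucketId>/flows/<flowId>/versions"
```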

09-12-2025 08:54 AM

@carange Welcome to the Cloudera Community.

Opening a community question exposes your query to anyone who accesses the Cloudera community site, and it is a great place to ask very specific issue questions or how-to type questions. Responses may come from any community member (who may or may not be a Cloudera employee). For more in-depth or time-sensitive issues where sharing logs or sensitive information would be helpful, or if the suggestions and answers provided in the community are not completely solving your issue, creating a support case is the best option. Only individuals with a Cloudera product license can create support cases.

With a Cloudera license you are able to raise support cases from MyCloudera that will get assigned to the appropriate support specialist for your issue. Simply open a browser to https://lighthouse.cloudera.com/s/ and login with your Cloudera credentials. You can then hover over the "Support" option and select "Cases". This will take you to a new page where you will see an option to "Create A Case". Select a "Technical assistance" case type and follow the prompts to provide the necessary information to submit your case details. You'll have the ability to upload images, logs, etc. to your new case.

If you have issues creating a case, please reach out to your Cloudera account owner.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt

09-12-2025 05:59 AM

@Alexm__ In order for NiFi to communicate with NiFi-Registry, NiFi needs to have a "NiFiFlowRegistryClient" added to the "Registry Clients" section under Controller Settings.

An SSL Context Service (in which you can define a specific keystore and truststore, which may or may not be the same keystore and truststore your NiFi uses) will be needed, since a mutualTLS handshake MUST be successful between NiFi and NiFi-Registry.

So for your question: as long as there is network connectivity between your NiFi(s) and the NiFi-Registry, this can work. Your user identity(s) in NiFi that will be authorized to perform version control will also need to be authorized in your NiFi-Registry for specific buckets.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
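One way to sanity-check that mutualTLS path outside of NiFi is to hit the registry with the same client certificate material. A sketch assuming PEM copies of the keystore/truststore contents (the file names and host are placeholders):

```bash
# If the TLS handshake and authorization both succeed, this returns the
# buckets the presented client identity is allowed to see:
curl --cert nifi-client.pem --key nifi-client-key.pem --cacert ca.pem \
  "https://registry-host:18443/nifi-registry-api/buckets"
```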

09-10-2025 10:12 AM

@nifier Sharing the details of the rest-api calls you made that are not working, along with the specific Apache NiFi version being used, may be helpful in providing guidance here.

- What response are you getting to your rest-api calls?
- What do you see in the nifi-user.log and/or nifi-app.log when you execute your rest-api calls?
- How are you handling user authentication in your rest-api calls (certificate, bearer token, etc.)?

rest-api call:

curl "https://<nifinode>:<nifiport>/nifi-api/processors/<Processor UUID>/run-status" -X PUT -H 'Content-Type: application/json' --data-raw '{"revision":{"clientId":"<ID>","version":<version num>},"state":"<RUNNING, STOPPED, or RUN_ONCE>","disconnectedNodeAcknowledged":false}' --insecure

The above would also need a client auth piece. What may be helpful to you is utilizing the developer tools in your web browser to capture the rest-api calls made as you perform the actions via the NiFi UI. Most developer tools give you the option to "copy as curl" the request that was made.

Please help our community grow. If you found any of the suggestions/solutions provided helped you with solving your issue or answering your question, please take a moment to login and click "Accept as Solution" on one or more of them that helped.

Thank you,
Matt
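For illustration, the current revision that the PUT requires can be fetched first. A sketch assuming bearer-token auth and jq on the path (substitute your own client auth, as noted above):

```bash
# Fetch the processor entity and pull out its current revision version:
rev=$(curl -sk -H "Authorization: Bearer $TOKEN" \
  "https://<nifinode>:<nifiport>/nifi-api/processors/<Processor UUID>" | jq '.revision.version')

# Then issue the run-status change using that revision:
curl -sk -X PUT \
  -H "Authorization: Bearer $TOKEN" \
  -H 'Content-Type: application/json' \
  "https://<nifinode>:<nifiport>/nifi-api/processors/<Processor UUID>/run-status" \
  --data "{\"revision\":{\"version\":$rev},\"state\":\"STOPPED\",\"disconnectedNodeAcknowledged\":false}"
```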