Member since: 01-19-2017
Posts: 3676
Kudos Received: 632
Solutions: 371
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 853 | 03-23-2025 05:23 AM |
| | 473 | 03-17-2025 10:18 AM |
| | 1547 | 03-05-2025 01:34 PM |
| | 1054 | 03-03-2025 01:09 PM |
| | 1246 | 03-02-2025 07:19 AM |
06-05-2025 12:37 AM
@sydney- The SSL handshake error you're encountering is a common issue when connecting NiFi instances to NiFi Registry in secure environments. It indicates that your NiFi instances cannot verify the SSL certificate presented by the NiFi Registry server:

```
javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
```

Based on your description, there are several areas to address:

- The certificate used by NiFi Registry is self-signed or not issued by a trusted Certificate Authority (CA)
- The certificate chain is incomplete
- The truststore configuration is incorrect

1. Certificate Trust Configuration

Verify the certificate chain:

```
# Check if certificate is in NiFi truststore (repeat for each instance)
keytool -list -v -keystore /path/to/nifi/truststore.jks -storepass [password]
# Check if certificate is in Registry truststore
keytool -list -v -keystore /path/to/registry/truststore.jks -storepass [password]
# Verify the Registry's certificate chain
openssl s_client -connect nifi-registry.example.com:443 -showcerts
```

Ensure the complete certificate chain:

- Add the Registry's complete certificate chain (including intermediate CAs) to NiFi's truststore
- Add NiFi's complete certificate chain to the Registry's truststore

```
# Add Registry certificate to NiFi truststore
keytool -import -alias nifi-registry -file registry-cert.pem -keystore /path/to/nifi/conf/truststore.jks -storepass [password]
# Add NiFi certificate to Registry truststore
keytool -import -alias nifi-prod -file nifi-cert.pem -keystore /path/to/registry/conf/truststore.jks -storepass [password]
```

2. Proper Certificate Exchange

Ensure you've exchanged certificates correctly:

```
# Export NiFi Registry's public certificate
keytool -exportcert -alias nifi-registry -keystore /path/to/registry/keystore.jks -file registry.crt -storepass [password]

# Import this certificate into each NiFi instance's truststore
keytool -importcert -alias nifi-registry -keystore /path/to/nifi/truststore.jks -file registry.crt -storepass [password] -noprompt
```

3. NiFi Registry Connection Configuration

In your NiFi instance (nifi.properties), verify:

```
# Registry client properties
nifi.registry.client.name=NiFi Registry
nifi.registry.client.url=https://nifi-registry.example.com/nifi-registry
nifi.registry.client.timeout.connect=30 secs
nifi.registry.client.timeout.read=30 secs
```

Verify these configuration files in NiFi (production/development):

```
# nifi.properties:
nifi.registry.client.ssl.protocol=TLS
nifi.registry.client.truststore.path=/path/to/truststore.jks
nifi.registry.client.truststore.password=[password]
nifi.registry.client.truststore.type=JKS
```

In NiFi Registry:

```
# nifi-registry.properties:
nifi.registry.security.truststore.path=/path/to/truststore.jks
nifi.registry.security.truststore.password=[password]
nifi.registry.security.truststore.type=JKS
```

4. LDAP Configuration

For your LDAP integration issues, ensure you have the following in authorizers.xml:

```
<accessPolicyProvider>
    <identifier>file-access-policy-provider</identifier>
    <class>org.apache.nifi.registry.security.authorization.FileAccessPolicyProvider</class>
    <property name="User Group Provider">ldap-user-group-provider</property>
    <property name="Authorizations File">./conf/authorizations.xml</property>
    <property name="Initial Admin Identity">cn=admin-user,ou=users,dc=example,dc=com</property>
    <property name="NiFi Identity 1">cn=dev-nifi,ou=servers,dc=example,dc=com</property>
</accessPolicyProvider>
```

In authorizations.xml, add appropriate policies for the dev-nifi identity:

```
<policy identifier="some-uuid" resource="/buckets" action="READ">
    <user identifier="dev-nifi-uuid"/>
</policy>
```

5. Proxy Configuration

For proxy user requests, add in nifi.properties:

```
nifi.registry.client.proxy.identity=cn=dev-nifi,ou=servers,dc=example,dc=com
```

6. Restart Order

After making changes, restart the instances in the following order:

1. NiFi Registry first
2. Then all NiFi instances

Happy hadooping
06-04-2025 11:36 PM
@hegdemahendra This is a classic case of off-heap memory consumption in NiFi. The 3G you see in the GUI only represents JVM heap + non-heap memory, but NiFi uses significant additional memory outside the JVM that doesn't appear in those metrics. Next time, could you share your deployment YAML files? That would help with solutioning.

Root Causes of Off-Heap Memory Usage:

- Content Repository (primary culprit)
  - NiFi uses memory-mapped files for the content repository
  - Large FlowFiles are mapped directly into memory
  - This memory appears as process memory but not JVM memory
- Provenance Repository
  - Uses Lucene indexes that consume off-heap memory
  - Memory-mapped files for provenance data storage
- Native Libraries
  - Compression libraries (gzip, snappy)
  - Cryptographic libraries
  - Network I/O libraries
- Direct Memory Buffers
  - NIO operations use direct ByteBuffers
  - Network and file I/O operations

Possible Solutions:

1. Reduce JVM Heap Size

```
# Instead of 28G JVM heap, try:
NIFI_JVM_HEAP_INIT: "16g"
NIFI_JVM_HEAP_MAX: "16g"
```

This leaves more room (24G) for off-heap usage.

2. Configure Direct Memory Limit

Add JVM arguments:

```
-XX:MaxDirectMemorySize=8g
```

3. Content Repository Configuration

In nifi.properties:

```
# Limit content repository size
nifi.content.repository.archive.max.retention.period=1 hour
nifi.content.repository.archive.max.usage.percentage=50%
# Use file-based instead of memory-mapped (if possible)
nifi.content.repository.implementation=org.apache.nifi.controller.repository.FileSystemRepository
```

4. Provenance Repository Tuning

```
# Reduce provenance retention
nifi.provenance.repository.max.storage.time=6 hours
nifi.provenance.repository.max.storage.size=10 GB
```

Long-term Solutions:

1. Increase Pod Memory Limit

```
resources:
  limits:
    memory: "60Gi"   # Increase from 40G
  requests:
    memory: "50Gi"
```

2. Monitor Off-Heap Usage

Enable JVM flags for better monitoring:

```
-XX:NativeMemoryTracking=summary
-XX:+UnlockDiagnosticVMOptions
-XX:+PrintNMTStatistics
```

3. Implement Memory-Efficient Flow Design

- Process smaller batches
- Avoid keeping large FlowFiles in memory
- Use streaming processors where possible
- Implement backpressure properly

4. Consider Multi-Pod Deployment

Instead of a single large pod, use multiple smaller pods:

```
# 3 pods with 20G each instead of 1 pod with 40G
replicas: 3
resources:
  limits:
    memory: "20Gi"
```

Monitoring Commands:

```
# Check native memory tracking
kubectl exec -it <nifi-pod> -- jcmd <pid> VM.native_memory summary
# Monitor process memory
kubectl top pod <nifi-pod>
# Check memory breakdown
kubectl exec -it <nifi-pod> -- cat /proc/<pid>/status | grep -i mem
```

Start with reducing the JVM heap to 16G and implementing content repository limits. This should immediately reduce OOM occurrences while you plan for longer-term solutions. And always remember to share your configuration files with the vital data masked or scrambled.

Happy hadooping
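To pin down which native memory category is actually growing, you can baseline NMT and diff it after load. A minimal sketch, assuming the -XX:NativeMemoryTracking=summary flag above is already in place:

```
# Take a baseline inside the pod, then diff against it after running load for a while
kubectl exec -it <nifi-pod> -- jcmd <pid> VM.native_memory baseline
# ... let the flow run ...
kubectl exec -it <nifi-pod> -- jcmd <pid> VM.native_memory summary.diff
```

The diff shows per-category growth (Internal, Thread, Class, etc.), which tells you whether direct buffers, thread stacks, or something else is driving the off-heap usage.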
04-29-2025 08:15 AM
@Shelton We just followed Steps 1, 3, 4 and 5 to generate the automated report to Elasticsearch. It was pretty straightforward. The only things we had to do were enable the firewall in our Docker container and update the Input Port's Access Policies. Thanks
04-28-2025 07:05 AM
@Shelton Please read my previous answer carefully. None of the properties you provided are in the HBase codebase.
04-18-2025 01:14 AM
@Jay2021, Welcome to our community! As this is an older post, you would have a better chance of receiving a resolution by starting a new thread. This will also be an opportunity to provide details specific to your environment that could aid others in assisting you with a more accurate answer to your question. You can link this thread as a reference in your new post.
04-04-2025 08:29 AM
Thanks for the help, and sorry for the late reply, @Shelton. I am getting the output here, but the values for the parent class are not being populated; they are displayed as NULL.
03-27-2025 03:22 PM
Thanks @Shelton for the details! I will try Option 3 in our pipeline shell script and will let you know if there are any further issues.
03-24-2025 03:58 AM
Thank you so much for helping me out.
03-23-2025 04:42 AM
@spserd Looking at your issue with Spark on Kubernetes, I see a clear difference between the client and cluster deployment modes that's causing the "system" authentication problem. The issue is that when running in client mode with spark-shell, you're hitting an authorization failure where Spark tries to create executor pods as "system" instead of using your service account "spark-sa", despite the token being provided.

Possible Solution

For client mode, you need to add a specific configuration to tell Spark to use the token for executor pod creation:

```
--conf spark.kubernetes.authenticate.executor.serviceAccountName=spark-sa
```

So your updated command should look like this:

```
./bin/spark-shell \
  --master k8s://https://my-k8s-cluster:6443 \
  --deploy-mode client \
  --name spark-shell-poc \
  --conf spark.executor.instances=1 \
  --conf spark.kubernetes.container.image=my-docker-hub/spark_poc:v1.4 \
  --conf spark.kubernetes.container.image.pullPolicy=IfNotPresent \
  --conf spark.kubernetes.namespace=dynx-center-resources \
  --conf spark.driver.pod.name=dynx-spark-driver \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark-sa \
  --conf spark.kubernetes.authenticate.executor.serviceAccountName=spark-sa \
  --conf spark.kubernetes.authenticate.submission.oauthToken=$K8S_TOKEN
```

The key is that in client mode, you need to explicitly configure the executor authentication because the driver is running outside the cluster and needs to delegate this permission. If this still doesn't work, ensure your service account has appropriate ClusterRole bindings that allow it to create and manage pods in the specified namespace (see the sketch below).

Happy hadooping
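If you need a starting point for those bindings, here is a minimal sketch of a namespaced Role and RoleBinding granting pod management to spark-sa. The names spark-role and spark-role-binding are placeholders, and your cluster may already define something equivalent:

```
# Hypothetical names; grants spark-sa pod management in the job namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: spark-role
  namespace: dynx-center-resources
rules:
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps"]
    verbs: ["create", "get", "list", "watch", "delete"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: spark-role-binding
  namespace: dynx-center-resources
subjects:
  - kind: ServiceAccount
    name: spark-sa
    namespace: dynx-center-resources
roleRef:
  kind: Role
  name: spark-role
  apiGroup: rbac.authorization.k8s.io
```

Apply it with kubectl apply -f spark-rbac.yaml and re-run the spark-shell command.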