Member since: 01-19-2017
Posts: 3652
Kudos Received: 623
Solutions: 364
My Accepted Solutions
Title | Views | Posted
---|---|---
| 176 | 12-22-2024 07:33 AM
| 113 | 12-18-2024 12:21 PM
| 442 | 12-17-2024 07:48 AM
| 298 | 08-02-2024 08:15 AM
| 3584 | 04-06-2023 12:49 PM
01-06-2025
12:32 AM
@spiker According to the ZooKeeper parameter below, you have set it to false, meaning you are using an external rather than an embedded ZooKeeper. Is that the case?

# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
nifi.state.management.embedded.zookeeper.start=false

Yet the ZooKeeper config below seems contradictory:

# zookeeper properties, used for cluster management
# nifi.zookeeper.connect.string=zookeeper:2181
# Zookeeper should resolve to correct host(s) for the Zookeeper ensemble

Check the documentation for setting up external ZooKeepers. If you are using embedded ZooKeeper, adjust the following entries in your nifi.properties:

nifi.state.management.embedded.zookeeper.start=true
nifi.zookeeper.connect.string=IP01:2181,IP02:2181,IP03:2181
nifi.zookeeper.auth.type=default
nifi.remote.input.host=IP01 # Localhost ip
nifi.remote.input.secure=false
nifi.remote.input.socket.port=9998
nifi.remote.input.http.enabled=true # set true if you want http
nifi.cluster.is.node=true
nifi.cluster.node.address=IP01 # Localhost ip
nifi.cluster.node.protocol.port=7474
nifi.web.http.host=IP01 # Localhost ip. use either https or http
nifi.web.http.port=8443
nifi.cluster.load.balance.port=6342

zookeeper.properties
This file tells ZooKeeper about the servers in the ensemble:

server.1=IP01:2888:3888
server.2=IP02:2888:3888
server.3=IP03:2888:3888
clientPort=2181

Note that each node's ZooKeeper state directory must also contain a myid file whose value matches that node's server.N entry. To maintain the NiFi state across instances, you need to modify state-management.xml and provide a state provider pointing to ZooKeeper:

<cluster-provider>
<id>zk-provider</id>
<class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
<property name="Connect String">ip1:2181,ip2:2181,ip3:2181</property>
<property name="Root Node">/nifi</property>
<property name="Session Timeout">10 seconds</property>
<property name="Access Control">Open</property>
</cluster-provider>

Here the Access Control has been set to Open so that you can log in without a username/password, but I would assume you should configure it to use your OIDC provider instead (see the OpenId Connect SSO properties).

Does the user email exist in the OIDC token, and is it accessible? Check Azure AD and confirm the token contains the expected email and upn claims:

nifi.security.user.oidc.claim.identifying.user=email
nifi.security.user.oidc.fallback.claims.identifying.user=upn

Can you ensure the OpenID discovery URL is reachable from the NiFi nodes and resolves correctly? Run the curl below to confirm connectivity:

curl -v https://login.microsoftonline.com/XXXXXXXXXXXXXXXXXXXXXXX/v2.0/.well-known/openid-configuration

Validate HTTPS and OIDC:

curl -vk https://<nifi-node>:8443/nifi-api/
curl -vk https://<nifi-node>:8443/nifi-api/access/oidc/callback

Clear cache: stop NiFi on all nodes, clear the local state directory referenced in ./conf/state-management.xml, and restart the cluster.

Hope that helps. Happy hadooping!
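A minimal sketch of that clear-and-restart sequence, assuming a default install where the local state provider points at ./state/local (adjust the path to whatever your state-management.xml actually references):

./bin/nifi.sh stop
# Clear the local component state; this path is the default local provider directory
rm -rf ./state/local/*
./bin/nifi.sh start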
01-04-2025
06:13 AM
@spiker Can you quickly do the below steps and revert?

1. Stop NiFi
./bin/nifi.sh stop

2. Back up the configuration files
cp conf/authorizations.xml conf/authorizations.xml.backup
cp conf/users.xml conf/users.xml.backup

3. Clear the login identity provider cache
rm -rf ./state/local/login-identity-providers/

4. Verify file permissions
chown -R nifi:nifi ./conf/
chmod 660 conf/authorizations.xml
chmod 660 conf/users.xml

5. Start NiFi
./bin/nifi.sh start

6. Check the logs for additional details
tail -f logs/nifi-app.log

If these steps don't resolve the issue, please check and share the full stack trace from nifi-app.log.

Happy hadooping!
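If tailing the log is too noisy, something like the following can pull just the recent errors with their stack traces (the log path assumes a default install layout):

# Show the most recent errors and the lines that follow them
grep -n -A 20 "ERROR" logs/nifi-app.log | tail -100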
01-03-2025
10:15 AM
1 Kudo
@spiker Can you also share your nifi.properties?
01-03-2025
07:18 AM
@spiker Can you share the logs?
01-03-2025
02:20 AM
1 Kudo
@spiker The error you're encountering in the Apache NiFi logs suggests a configuration issue related to OpenID Connect (OIDC) authentication and proxy settings within your Kubernetes environment:

Initial Request → Authentication Failed → JSON Parse Error → Internal Server Error: "Untrusted proxy"

Here is a helpful document that you should go through, maybe for a eureka moment: Securing NiFi with Existing CA Certificates.

1. Ensure that NiFi trusts the proxy making the request:

nifi.security.whitelisted.proxy.hostnames=172\.24\.0\.3

2. Check the SSL certificates used by NiFi and ensure the truststore is correctly configured:

nifi.security.truststore=/path/to/truststore.jks
nifi.security.truststoreType=JKS
nifi.security.proxy.enabled=true

3. Check the NiFi permissions:

chown -R nifi:nifi /path/to/truststore.jks
chmod 640 /path/to/truststore.jks

# Import the proxy certificate into NiFi's truststore
keytool -import -alias proxy-cert -file proxy.crt -keystore truststore.jks

# Verify the truststore contains the proxy certificate
keytool -list -v -keystore truststore.jks

4. Ensure that OIDC is properly set up in the nifi.properties file by adding the following properties:

nifi.security.user.oidc.redirect.url=https://<nifi-host>:8443/nifi-api/access/oidc/callback
nifi.security.user.login.identity.provider=oidc-provider

5. Validate the Kubernetes Ingress and Service. As you are using a Kubernetes ingress or a service, ensure headers and SSL information are properly forwarded.

6. Enable detailed logging in NiFi. Add this to logback.xml to identify specific issues with headers or tokens:

<logger name="org.apache.nifi.web.security" level="DEBUG"/>

7. Restart NiFi:

./bin/nifi.sh restart

8. Test with curl to simulate an API call and validate the request flow:

curl -k -H "Authorization: Bearer <your-access-token>" https://<nifi-host>:8443/nifi-api/flow/current-user

Note: Please verify that the Kubernetes environment has the necessary DNS resolution, network connectivity, and the correct OpenID Connect metadata URL.

Happy hadooping!
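For step 5, a quick way to inspect how the ingress and service actually route to NiFi (the resource and namespace names below are placeholders for your environment):

# Inspect the ingress and service definitions that front NiFi
kubectl get ingress -n <namespace>
kubectl describe ingress <nifi-ingress-name> -n <namespace>
kubectl get svc -n <namespace>
# Confirm the backend pods are reachable on the expected port
kubectl get endpoints <nifi-service-name> -n <namespace>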
01-02-2025
10:59 AM
@tuyen123 If you have installed other applications or dependencies for Spark, Hive, etc. that use a different version of protobuf, the conflict can cause issues with the block report.

1. Locate conflicting protobuf JARs:

find $HADOOP_HOME -name "protobuf*.jar"

Check whether multiple versions are present in $HADOOP_HOME/lib or other dependency paths.

2. Remove the conflicting JARs. Keep only the protobuf JAR version that matches your Hadoop distribution, e.g. protobuf-java-2.5.0.jar. Alternatively, explicitly set the protobuf version in your CLASSPATH.

3. If third-party libraries are included in your Hadoop environment, they might override the correct protobuf version. Open $HADOOP_HOME/etc/hadoop/hadoop-env.sh and prepend the correct protobuf library:

export HADOOP_CLASSPATH=/path/to/protobuf-java-2.5.0.jar:$HADOOP_CLASSPATH

4. Verify the classpath:

hadoop classpath | grep protobuf

Ensure it includes the correct protobuf JAR.

Please try that and revert. Happy hadooping!
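If a JAR's filename doesn't reveal its version, its manifest usually does; a quick sketch (the path is illustrative):

# Print the manifest of a suspect JAR to confirm the bundled protobuf version
unzip -p /path/to/suspect-protobuf.jar META-INF/MANIFEST.MF | grep -i version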
12-31-2024
09:47 AM
1 Kudo
@MrNicen This is a very common problem where the table gets stuck in a DISABLING state. Please try this series of diagnostic and repair steps.

First, verify the current state:

echo "scan 'hbase:meta'" | hbase shell

Try to force the table state change using HBCK2:

# Download HBCK2 if not already present
wget https://repository.apache.org/content/repositories/releases/org/apache/hbase/hbase-hbck2/2.0.2/hbase-hbck2-2.0.2.jar

# Set table to ENABLED state
hbase hbck -j ./hbase-hbck2-2.0.2.jar setTableState <table_name> ENABLED

If that doesn't work, try cleaning the znode:

# Connect to ZooKeeper
./zkCli.sh -server localhost:2181

# Check the table znode
ls /hbase/table/<table_name>

# Delete the table znode if present
rmr /hbase/table/<table_name>

If the issue persists, try manually updating the meta table:

hbase shell

# Disable table
disable '<table_name>'

# Wait a few seconds, then enable
enable '<table_name>'

# If that fails, try force disable
disable_all '<table_name>'

If still stuck, try these repair commands:

# Clear the META table state
echo "put 'hbase:meta', '<table_name>', 'table:state', '\x08\x00'" | hbase shell

# Recreate the regions
hbase hbck -j ./hbase-hbck2-2.0.2.jar assigns <table_name>

As a last resort, try a full cleanup:

# Stop HBase
./bin/stop-hbase.sh

# Clear ZooKeeper data
./zkCli.sh -server localhost:2181
rmr /hbase

# Remove the META directory
rm -rf /hbase/data/hbase/meta

# Start HBase
./bin/start-hbase.sh

# Recreate the table structure
hbase shell
create '<table_name>', {NAME => 'cf'} # Adjust column families as needed

If none of these steps work, we can try a more aggressive approach.

Back up your data:

hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot <snapshot_name> -copy-to hdfs://backup-cluster/hbase

Try a clean META rebuild:

# Stop HBase
./bin/stop-hbase.sh

# Clear META
rm -rf /hbase/data/default/hbase/meta

# Start HBase in repair mode
env HBASE_OPTS="-XX:+UseParNewGC -XX:+UseConcMarkSweepGC" ./bin/hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair

# Start HBase normally
./bin/start-hbase.sh

Additional troubleshooting tips:

Check the HBase logs for specific errors:
tail -f /var/log/hbase/hbase-master.log

Verify cluster health:
hbase hbck -details

Monitor region transitions:
echo "scan 'hbase:meta', {COLUMNS => 'info:regioninfo'}" | hbase shell

If you encounter any specific errors during these steps, please share them and I can provide more targeted solutions.
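One gap in the backup step above: ExportSnapshot only copies an existing snapshot, so the snapshot has to be created first. A quick sketch (the snapshot name is a placeholder):

# Create a snapshot of the table before exporting it
echo "snapshot '<table_name>', '<snapshot_name>'" | hbase shell
# Then run the ExportSnapshot command shown above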
12-25-2024
03:57 PM
1 Kudo
@Roomka This looks like an old post; all the same, I will try to answer, and I hope it's still relevant to others too.

The challenges you're facing with Apache NiFi's development life cycle stem from its design, which does not fully separate code/logic from environment-specific configurations. To address this and create a robust process for porting flows from dev to QA to prod, consider the following solutions.

1. Use Parameter Contexts for Configuration Management

NiFi supports Parameter Contexts, which can be used to externalize environment-specific configurations. This allows you to separate logic from environment-specific details (see the sketch after this list).

Steps:
- Define Parameter Contexts for each environment (e.g., DevContext, QAContext, ProdContext).
- Externalize configurations like: number of threads, cron schedules, database connection strings, API endpoints.
- When deploying to a new environment: import the flow without the environment-specific Parameter Contexts, then assign the appropriate Parameter Context for that environment.

2. Use NiFi Registry for Flow Versioning and Promotion

The NiFi Registry provides a way to version control your flows and manage deployments across environments.

Steps:
- Set up NiFi Registry and connect your NiFi instances to it.
- Use the Registry to version your flows.
- Promote flows from dev to QA to prod by exporting/importing them through the Registry.
- In each environment, override parameters using the appropriate Parameter Context.

3. Handle Environment-Specific Differences with External Configuration Management

If Parameter Contexts are insufficient, consider externalizing configurations entirely using tools like Consul, AWS Parameter Store, or environment variables.

Steps:
- Store all environment-specific configurations in an external tool.
- Use a custom script or a NiFi processor to fetch configurations dynamically at runtime.

This ensures the flow logic remains the same across environments, and only the external configurations vary.

4. Adopt Best Practices for Flow Design

To reduce the impact of embedded environment-specific details, follow these design principles:
- Avoid hardcoding resource-intensive configurations like thread counts or cron schedules into the flow.
- Use NiFi Variables or Parameters wherever possible to make configurations flexible.
- Split flows into smaller, reusable components to reduce complexity and improve maintainability.

5. Use Deployment Automation

Automate the deployment process to ensure consistency and reduce manual errors:
- Use tools like Ansible, Terraform, or Jenkins to automate flow deployment.
- Include steps to set up Parameter Contexts or fetch external configurations as part of the deployment pipeline.

6. Mitigating the 12-Factor Principles Concern

While NiFi isn't designed to fully adhere to 12-factor principles, you can adapt your processes to bridge the gap:
- Codebase: Manage flow versions centrally using NiFi Registry.
- Config: Externalize environment-specific configurations using Parameter Contexts or external configuration management tools.
- Build, Release, Run: Standardize your flows and deployment pipeline across environments.
- Disposability: Test flows rigorously in QA to ensure they can handle unexpected failures gracefully.

Hope these points give you a better picture and possibly an answer. Happy hadooping!
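To make point 1 concrete, here is a small hypothetical illustration: the controller-service properties reference parameters via NiFi's #{...} syntax, and only the assigned Parameter Context differs per environment (all names and values below are made up):

# Properties on a DBCPConnectionPool controller service, referencing parameters:
Database Connection URL = #{db.connection.url}
Database User           = #{db.user}
Password                = #{db.password}

# DevContext parameter values:
db.connection.url = jdbc:postgresql://dev-db:5432/app
db.user           = dev_user

# ProdContext parameter values:
db.connection.url = jdbc:postgresql://prod-db:5432/app
db.user           = prod_user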
12-22-2024
09:45 AM
1 Kudo
@rsurti The issue described suggests a mismatch or misconfiguration in the SAML integration between NiFi and NGINX. The following analysis and potential solutions should address your findings.

SAML Payload Issues:

- Empty Recipient value: The Recipient in the SAML assertion should match the ACS (Assertion Consumer Service) URL configured in NiFi. If it is empty, this indicates a misconfiguration in the SAML IdP (OneLogin).
- Cookie and InResponseTo mismatch: The InResponseTo attribute in the SAML response should correspond to the SAML request identifier issued by NiFi. If the cookie storing the SAML request ID is missing or mismatched, authentication fails.

NiFi Error: "SAML Authentication Request Identifier Cookie not found"

This suggests that the browser is not sending back the SAML request ID cookie, or NiFi cannot recognize it. This could happen if:
- The cookie is not set, or is overwritten by NGINX.
- The cookie is being blocked or dropped due to cross-domain or SameSite restrictions.
- NGINX is misconfigured to handle or forward SAML cookies.

Probable Causes

NiFi configuration:
- Misconfigured nifi.security.user.saml properties in nifi.properties.
- ACS URL mismatch between NiFi and OneLogin.

NGINX configuration:
- Improper handling of cookies, particularly the SAML request identifier cookie.
- Incorrect forwarding of headers or paths for SAML requests and responses.

OneLogin configuration:
- The SAML application in OneLogin is not configured to provide a valid Recipient or ACS URL.
- Mismatched SAML settings such as entity ID, ACS URL, or signature settings.

Steps to Resolve

1. Verify and update the NiFi configuration. Ensure the nifi.properties file has the correct SAML configuration:

nifi.security.user.saml.idp.metadata.url=<OneLogin SAML Metadata URL>
nifi.security.user.saml.sp.entity.id=<NiFi Entity ID>
nifi.security.user.saml.sp.base.url=https://<nifi-url> # Same as what users access
nifi.security.user.saml.authentication.expiration=12 hours
nifi.security.user.saml.request.identifier.name=nifi-request-id

The nifi.security.user.saml.sp.base.url must match the Recipient value in the SAML response.

2. Check the OneLogin SAML connector configuration. Ensure the Recipient value in OneLogin matches the NiFi ACS URL:

ACS URL: https://<nifi-url>/nifi-api/access/saml/login/consumer

Verify that the SAML settings in OneLogin include:
- Audience (Entity ID): matches nifi.security.user.saml.sp.entity.id.
- ACS URL: matches nifi.security.user.saml.sp.base.url.

3. Debug and adjust the NGINX configuration. Ensure NGINX is not interfering with SAML cookies:

proxy_pass https://<nifi-host>:9444;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_cookie_path / "/; SameSite=None; Secure";

Add debug logging to check whether cookies are being forwarded correctly.

4. Troubleshoot cookie handling:
- Check the browser developer tools (under Application > Cookies) to verify that the SAML request identifier cookie is being set and returned.
- Ensure the SameSite=None and Secure flags are set on the cookies.

5. Check the SAML logs for errors. In the nifi-user.log file, look for logs that provide details on the failed SAML authentication, including missing cookies and InResponseTo mismatches.

6. Test the flow. After making the adjustments:
- Clear browser cookies.
- Initiate the SAML login process from the NiFi GUI.
- Check whether the Recipient and InResponseTo values align between the SAML assertion and the request.
- Use a SAML debugging tool like SAML-tracer (browser extension) to inspect the SAML request/response flows; before that, enable detailed SAML logging in NiFi by modifying logback.xml:

<logger name="org.apache.nifi.web.security.saml" level="DEBUG" />

Let me know if you need further assistance! Happy hadooping!
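For the NGINX debug logging mentioned in step 3, a minimal sketch using standard NGINX variables (the log format name and path are placeholders; the log_format goes in the http {} block):

# Log incoming and upstream cookies to confirm the SAML request ID cookie survives the proxy
log_format saml_debug '$remote_addr "$request" '
                      'cookie="$http_cookie" set_cookie="$upstream_http_set_cookie"';
access_log /var/log/nginx/saml_debug.log saml_debug;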
12-22-2024
07:33 AM
1 Kudo
@Emery I think this should resolve your problem. Change nifi.web.https.host as shown below so it binds to all network interfaces, allowing access from other machines on your intranet:

nifi.web.https.host=ourMacMini20 --> nifi.web.https.host=0.0.0.0

Browser Trust for Self-Signed Certificates

Problem: If you're using a self-signed certificate, browsers on other machines may block access or show warnings.

Solution:
- Install the certificate from the NiFi server in the client machines' trusted certificate store.
- Alternatively, use a certificate from a trusted Certificate Authority (CA).

Please let me know if that helped. Happy hadooping!
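A rough sketch of distributing the self-signed certificate to a macOS client, assuming the server is reachable as ourMacMini20 on port 8443 (adjust host, port, and keychain to your setup):

# Grab the server certificate presented by NiFi
openssl s_client -connect ourMacMini20:8443 -showcerts </dev/null 2>/dev/null \
  | openssl x509 -outform PEM > nifi-cert.pem

# Trust it system-wide on the macOS client (requires admin rights)
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain nifi-cert.pem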