Member since: 01-19-2017
Posts: 3670
Kudos Received: 626
Solutions: 368

My Accepted Solutions
| Title | Views | Posted |
| --- | --- | --- |
|  | 286 | 03-05-2025 01:34 PM |
|  | 190 | 03-03-2025 01:09 PM |
|  | 158 | 03-02-2025 07:19 AM |
|  | 578 | 12-22-2024 07:33 AM |
|  | 366 | 12-18-2024 12:21 PM |
03-03-2025
01:09 PM
@pavanshettyg5 Looking at the error messages when running zkServer.sh status on the ZooKeeper nodes: on zookeepernode1 and 2 there is the message "Client port not found in static config file. Looking in dynamic config file." followed by a grep error "grep: : No such file or directory". This suggests that the static zoo.cfg is missing the clientPort entry and that the dynamic configuration file (probably specified via dynamicConfigFile in zoo.cfg) is either not present or misconfigured. To resolve the ZooKeeper and NiFi connectivity issues, follow these steps.

Step 1: Configure ZooKeeper to bind to all interfaces
Problem: the ZooKeeper nodes are binding to localhost, preventing remote connections from NiFi.
Fix: update zoo.cfg on each ZooKeeper node to bind to 0.0.0.0 (all interfaces).
1. Edit zoo.cfg on each ZooKeeper node:
vi /opt/zookeeper/conf/zoo.cfg
2. Add/modify these lines:
clientPort=2181
clientPortAddress=0.0.0.0
3. Restart ZooKeeper on each node:
/opt/zookeeper/bin/zkServer.sh restart

Step 2: Verify the ZooKeeper configuration
After restarting, check the status:
/opt/zookeeper/bin/zkServer.sh status
Expected output: Client address: 0.0.0.0 (not localhost). One node should be leader, the others followers.

Step 3: Check ZooKeeper network connectivity
From the NiFi nodes, test connectivity to ZooKeeper (see the connectivity sketch at the end of this post):
telnet zookeepernode1 2181
telnet zookeepernode2 2181
telnet zookeepernode3 2181
If connections fail, check firewalls/security groups to allow traffic on port 2181.

Step 4: Validate the ZooKeeper dynamic configuration (if applicable)
If using dynamic reconfiguration:
1. Ensure the dynamic config file (e.g., zoo_dynamic.cfg) has entries like:
server.1=zookeepernode1:2888:3888:participant;zookeepernode1:2181
server.2=zookeepernode2:2888:3888:participant;zookeepernode2:2181
server.3=zookeepernode3:2888:3888:participant;zookeepernode3:2181
2. Confirm the static zoo.cfg references the dynamic file:
dynamicConfigFile=/opt/zookeeper/conf/zoo_dynamic.cfg

Step 5: Verify the NiFi configuration
Ensure nifi.properties points to the correct ZooKeeper ensemble:
nifi.zookeeper.connect.string=zookeepernode1:2181,zookeepernode2:2181,zookeepernode3:2181

Step 6: Restart the NiFi services
Restart NiFi on all nodes:
/opt/nifi/bin/nifi.sh restart
Check the logs for successful connections:
tail -f /opt/nifi/logs/nifi-app.log

Troubleshooting summary
ZooKeeper binding: ensure ZooKeeper listens on 0.0.0.0:2181, not localhost.
Firewall rules: allow traffic between the NiFi and ZooKeeper nodes on ports 2181, 2888, and 3888.
Hostname resolution: confirm zookeepernode1, zookeepernode2, and zookeepernode3 resolve to the correct IPs on the NiFi nodes.

By addressing ZooKeeper's binding configuration and network accessibility, NiFi should successfully connect to the ZooKeeper cluster. Happy hadooping
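Here is the connectivity sketch referenced in Step 3, a minimal check to run from each NiFi node. It assumes nc (netcat) is installed and, for the ruok probe, that the four-letter-word whitelist (4lw.commands.whitelist) in zoo.cfg allows it; the hostnames are the ones from the connect string above.

# hedged helper loop; adjust hostnames to your environment
for zk in zookeepernode1 zookeepernode2 zookeepernode3; do
  echo "==== $zk ===="
  # basic TCP reachability on the client port
  nc -z -w 3 "$zk" 2181 && echo "port 2181 reachable" || echo "port 2181 NOT reachable"
  # optional health probe; prints "imok" when the server is up and ruok is whitelisted
  echo ruok | nc -w 3 "$zk" 2181
done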
03-02-2025
07:19 AM
@drewski7 The error message says there is no EntityManager with an actual transaction available, which suggests that the code trying to persist the user isn't running within a transactional context. In Spring applications, methods that modify the database usually need to be annotated with @Transactional to ensure they run within a transaction. Looking at the stack trace, the error occurs in XUserMgr$ExternalUserCreator.createExternalUser, which calls UserMgr.createUser, which in turn uses BaseDao.create. The create method in BaseDao is trying to persist an entity but there is no active transaction, so the createUser method or the code calling it probably isn't properly transactional. Since this worked in 2.4.0, something must have changed in 2.5.0: perhaps the upgrade changed how transactions are managed, a method that was previously transactional no longer is, or the transaction boundaries have shifted.

Step 1: Verify database schema compatibility
Ranger 2.5.0 may require schema updates. Ensure the database schema is compatible with the new version:
1. Check the upgrade documentation: review the Ranger 2.5.0 release notes for required schema changes. Example: if migrating from 2.4.0 to 2.5.0, you may need to run SQL scripts like x_portal_user_DDL.sql or apache-ranger-2.5.0-schema-upgrade.sql.
2. Run the schema upgrade scripts: locate them in the Ranger installation directory (ranger-admin/db/mysql/patches) and apply them:
mysql -u root -p ranger < apache-ranger-2.5.0-schema-upgrade.sql
3. Validate the schema: confirm that the x_portal_user table exists and has the expected columns (e.g., login_id, user_role).

Step 2: Check the transaction management configuration
The error suggests a missing @Transactional annotation or a misconfigured transaction manager in Ranger 2.5.0:
1. Review code/configuration changes: compare the transaction management configuration between Ranger 2.4.0 and 2.5.0. Key files:
ranger-admin/ews/webapp/WEB-INF/classes/conf/application.properties
ranger-admin/ews/webapp/WEB-INF/classes/spring-beans.xml
2. Ensure transactional annotations: in Ranger 2.5.0, the createUser method in UserMgr.java or its caller must be annotated with @Transactional so that database operations run in a transaction:
@Transactional
public void createUser(...) { ... }
3. Debug transaction boundaries: enable transaction logging in log4j.properties to trace transaction activity:
log4j.logger.org.springframework.transaction=DEBUG
log4j.logger.org.springframework.orm.jpa=DEBUG

Step 3: Manually create the user (temporary workaround)
If the user drew.nicolette is missing from x_portal_user, manually insert it into the database (see the sanity-check query at the end of this post):
INSERT INTO x_portal_user (login_id, password, user_role, status)
VALUES ('drew.nicolette', 'LDAP_USER_PASSWORD_HASH_IF_APPLICABLE', 'ROLE_USER', 1);
Note: this bypasses the transaction error but is not a permanent fix.

Step 4: Verify the LDAP configuration
Ensure the LDAP settings in ranger-admin/ews/webapp/WEB-INF/classes/conf/ranger-admin-site.xml are correct for Ranger 2.5.0:
<property>
<name>ranger.authentication.method</name>
<value>LDAP</value>
</property>
<property>
<name>ranger.ldap.url</name>
<value>ldap://your-ldap-server:389</value>
</property>

Step 5: Check for known issues
1. Apache Ranger JIRA: search for issues like RANGER-XXXX related to transaction management in Ranger 2.5.0.
2. Apply patches: if a patch exists (e.g., for missing @Transactional annotations), apply it to the Ranger 2.5.0 codebase.

Step 6: Test with a new user
Attempt to log in with a different LDAP user to see whether the issue is specific to drew.nicolette or systemic.
If the error persists for all users, focus on transaction configuration or schema issues.
If only drew.nicolette fails, check for conflicts in the x_portal_user table (e.g., duplicate entries).

Final checks
Logs: monitor ranger-admin.log and catalina.out for transaction-related errors after applying the fixes.
Permissions: ensure the database user has write access to the x_portal_user table.
Dependencies: confirm that the Spring and JPA library versions match the Ranger 2.5.0 requirements.

Happy hadooping
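Here is the sanity check referenced in Step 3, run before the manual insert. It is a minimal sketch assuming a MySQL backend and the database name ranger used in the schema-upgrade example above; adjust credentials to your environment.

# hedged check: confirm whether a row for the user already exists
mysql -u root -p ranger -e "SELECT id, login_id, user_role, status FROM x_portal_user WHERE login_id = 'drew.nicolette';"
# if a row already exists, skip the INSERT and look for duplicate or conflicting entries instead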
03-02-2025
02:31 AM
@rj27 Some clarification on the Git setup:

Set the global Git username to "NiFi Registry": this is the author name that will appear in commit messages.
Set the global Git email to "nifi-registry@example.com": this is the email address associated with commits.

Values to be passed in the flow persistence provider (see the clone sanity check at the end of this post):
<flowPersistenceProvider>
  <property name="Flow Storage Directory">./flow_storage</property>
  <property name="Git Remote To Push">origin</property>
  <property name="Git Remote Access User">username</property>
  <property name="Git Remote Access Password">password</property>
  <property name="Remote Clone Repository">https://git-repo-url/your-flow-repo.git</property>
</flowPersistenceProvider>
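A quick way to validate these values before restarting NiFi Registry is to clone the remote with the same credentials. This is a minimal sketch assuming the HTTPS URL and username/password from the snippet above; Git will prompt for the password interactively.

# hedged sanity check; the /tmp path is just an illustrative target
git clone https://git-repo-url/your-flow-repo.git /tmp/flow-repo-test
# if the clone succeeds, the same credentials should work for the Git Remote Access User/Password properties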
03-01-2025
11:46 AM
@rj27 To set up Git integration for Apache NiFi Registry using SSH authentication, you need to configure the NiFi Registry to use a Git-based flow persistence provider.

Analysis of the current setup
You have Apache NiFi 1.28 running on AWS ECS Fargate.
You have Apache NiFi Registry 1.28 running on AWS ECS Fargate.
Both applications are communicating with each other successfully.
You need to integrate NiFi Registry with Git using SSH authentication.

Below are the detailed steps to achieve this on an AWS ECS instance running on Fargate with NiFi and NiFi Registry 1.28.

Step 1: Update the NiFi Registry configuration
Modify the nifi-registry.properties file in your container and add the following properties to configure the Git flow persistence provider:
# Git Configuration
nifi.registry.db.git.remote=true
nifi.registry.db.git.remote.to.push=true
nifi.registry.db.git.repository=/opt/nifi-registry/git-repository
nifi.registry.db.git.flow.storage.directory=/opt/nifi-registry/flow-storage
nifi.registry.db.git.remote.url=ssh://git@your-git-server:port/your-repo.git
nifi.registry.db.git.remote.branch=master

Step 2: Set up SSH keys for authentication
1. Generate an SSH key pair inside your container:
mkdir -p /opt/nifi-registry/.ssh
ssh-keygen -t rsa -b 4096 -C "nifi-registry@example.com" -f /opt/nifi-registry/.ssh/id_rsa -N ""
2. Add your public key to your Git repository's authorized keys (in GitHub, GitLab, etc.):
Copy the contents of /opt/nifi-registry/.ssh/id_rsa.pub and add it to your Git provider as a deploy key or authentication key.
3. Configure the SSH client in the container:
cat > /opt/nifi-registry/.ssh/config << EOF
Host your-git-server
  IdentityFile /opt/nifi-registry/.ssh/id_rsa
  StrictHostKeyChecking no
  UserKnownHostsFile /dev/null
EOF
4. Set proper permissions:
chmod 700 /opt/nifi-registry/.ssh
chmod 600 /opt/nifi-registry/.ssh/id_rsa
chmod 644 /opt/nifi-registry/.ssh/id_rsa.pub
chmod 600 /opt/nifi-registry/.ssh/config

Step 3: Update the ECS task definition for persistence
1. Update your ECS task definition to include a volume for the SSH keys and Git repository (validate the JSON):
"volumes": [
  {
    "name": "nifi-registry-git",
    "dockerVolumeConfiguration": {
      "scope": "task",
      "driver": "local",
      "labels": null,
      "autoprovision": true
    }
  }
]
2. Mount this volume in your container definition:
"mountPoints": [
  {
    "sourceVolume": "nifi-registry-git",
    "containerPath": "/opt/nifi-registry/.ssh",
    "readOnly": false
  },
  {
    "sourceVolume": "nifi-registry-git",
    "containerPath": "/opt/nifi-registry/git-repository",
    "readOnly": false
  }
]

Step 4: Configure the Git user information
Set the Git user configuration:
git config --global user.name "NiFi Registry"
git config --global user.email "nifi-registry@example.com"

Step 5: Initialize the Git repository
1. Initialize the local Git repository:
mkdir -p /opt/nifi-registry/git-repository
cd /opt/nifi-registry/git-repository
git init
git remote add origin ssh://git@your-git-server:port/your-repository.git
2. Test the connection:
ssh -T git@your-git-server

Step 6: Configure NiFi to connect to NiFi Registry
In the NiFi UI, configure the Registry Client:
Click the hamburger menu (≡) in the top-right corner.
Select "Controller Settings".
Go to the "Registry Clients" tab.
Add a new Registry Client with:
Name: Git-Backed Registry
URL: http://your-nifi-registry:18080

Step 7: Restart NiFi Registry
Restart the NiFi Registry service to apply the changes:
# If using systemd
systemctl restart nifi-registry
# If using the command line
./bin/nifi-registry.sh restart
# In AWS ECS, update the service to force a new deployment
aws ecs update-service --cluster your-cluster --service your-nifi-registry-service --force-new-deployment

Troubleshooting
1. Check the NiFi Registry logs for Git-related errors:
tail -f /opt/nifi-registry/logs/nifi-registry-app.log
2. Verify SSH connectivity:
ssh -vT git@your-git-server
3. Common issues:
Permission problems: ensure the NiFi Registry user has appropriate permissions.
Known hosts: if StrictHostKeyChecking is on, you need to accept the host key first.
Firewall: ensure outbound connections to the Git server are allowed from the ECS task.

Important precautions
Security: ensure the private key is stored securely and not exposed in the container image or logs.
Automation: consider using AWS Secrets Manager or Parameter Store to manage the SSH key and passphrase securely (see the sketch at the end of this post).
Backup: regularly back up your Git repository to avoid data loss.

Happy hadooping
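For the Secrets Manager suggestion under Important precautions, here is a minimal sketch of pulling the private key at container startup instead of baking it into the image. The secret name nifi-registry/git-ssh-key is hypothetical, and the ECS task role must allow secretsmanager:GetSecretValue.

# hedged startup snippet (e.g., in the container entrypoint)
mkdir -p /opt/nifi-registry/.ssh
aws secretsmanager get-secret-value \
  --secret-id nifi-registry/git-ssh-key \
  --query SecretString --output text > /opt/nifi-registry/.ssh/id_rsa
chmod 600 /opt/nifi-registry/.ssh/id_rsa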
02-12-2025
01:42 AM
@0tto Could you please share your NiFi logs? Happy hadooping
01-27-2025
01:18 AM
@ose_gold The SFTP issues appear to stem from incorrect permissions and ownership in your Docker setup. Here's the analysis and solution.

Key issues:
Root ownership instead of user 'foo' for /home/foo
Incorrect chroot directory permissions
Docker volume mount permissions

You have a volume nifi-conf:/opt/nifi/nifi-current/conf for the NiFi container, but it's not declared in the volumes section at the bottom; check the addition below. Also, Docker Compose has newer versions (3.8), so it might be a good idea to update depending on the features you need.

version: '3' # Docker Compose has newer versions (3.8)
services:
nifi:
image: apache/nifi:latest # Consider specifying a version
container_name: nifi
ports:
- "8089:8443"
- "5656:5656"
volumes:
- nifi-conf:/opt/nifi/nifi-current/conf
environment:
NIFI_WEB_PROXY_HOST: localhost:8089
SINGLE_USER_CREDENTIALS_USERNAME: admin
SINGLE_USER_CREDENTIALS_PASSWORD: {xxyourpasswdxx}
sftp:
image: atmoz/sftp
volumes:
- ./sftp/upload:/home/foo/upload
ports:
- "2222:22"
command: foo:pass:1001
# Add these permissions
user: "1001:1001"
environment:
- CHOWN_USERS=foo
- CHOWN_DIRS=/home/foo
volumes:
  nifi-conf:

Before starting the containers, set the correct permissions on the host:
mkdir -p ./sftp/upload
chown -R 1001:1001 ./sftp/upload
chmod 755 ./sftp/upload

This configuration:
Sets proper user/group ownership
Maintains correct chroot permissions
Ensures volume mount permissions are preserved
Prevents permission conflicts between host and container

Happy Hadooping
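After bringing the stack up, a quick way to confirm the ownership and the SFTP login actually work. This is a minimal sketch assuming the service names from the compose file above and the foo:pass credentials it defines; use docker-compose instead of docker compose if you are on the older CLI.

docker compose up -d
# check ownership/permissions of the chroot and upload directory inside the container
docker compose exec sftp ls -ld /home/foo /home/foo/upload
# test the SFTP login from the host (password "pass" as set in the compose command)
sftp -P 2222 foo@localhost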
01-20-2025
05:41 AM
@rsurti If SAML authentication works for LAN users but not for users on Wi-Fi, even when both are on the same network, it suggests differences in how the network or devices are configured for each connection type. Here's how you can troubleshoot and resolve this issue.

DNS resolution: check whether Wi-Fi users can resolve the identity provider (IdP) and service provider (SP) URLs correctly (see the comparison sketch at the end of this post):
nslookup idp.example.com

SAML traces: capture SAML requests/responses using the browser developer tools (Network tab). Look for differences in:
Redirect URLs
Assertions
Error codes
Common issues include misconfigured callback URLs.

Device configuration: check whether device firewalls or VPNs are interfering with SAML traffic over Wi-Fi, and ensure browser settings (e.g., cookie policies) do not block SAML cookies.

Side-by-side test: connect a user to LAN and Wi-Fi simultaneously (if possible) to identify differences in routing or access.

Please revert with your findings. Happy hadooping
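Here is the comparison sketch mentioned under DNS resolution: run the same checks from a LAN-connected machine and a Wi-Fi machine and compare the output. idp.example.com is a placeholder for your actual IdP hostname.

# name resolution for the IdP
nslookup idp.example.com
# TLS/HTTP reachability of the IdP login or metadata endpoint
curl -v https://idp.example.com/ 2>&1 | grep -E "Connected to|HTTP/"
# route taken to the IdP; differences here often point at VPN or proxy interference on the Wi-Fi path
traceroute idp.example.com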
01-06-2025
12:32 AM
@spiker According to the parameter below you have set it to false, meaning you are using an external and not the embedded ZooKeeper; is that the case?

# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
nifi.state.management.embedded.zookeeper.start=false

Yet the ZooKeeper config below seems contradictory:

# zookeeper properties, used for cluster management
# nifi.zookeeper.connect.string=zookeeper:2181
# Zookeeper should resolve to correct host(s) for the Zookeeper ensemble

Check the documentation for setting up external ZooKeepers. If you are using the embedded ZooKeeper, adjust the following entries in your nifi.properties:

nifi.state.management.embedded.zookeeper.start=true
nifi.zookeeper.connect.string=IP01:2181,IP02:2181,IP03:2181
nifi.zookeeper.auth.type=default
nifi.remote.input.host=IP01 # Localhost ip
nifi.remote.input.secure=false
nifi.remote.input.socket.port=9998
nifi.remote.input.http.enabled=true # set true if you want http
nifi.cluster.is.node=true
nifi.cluster.node.address=IP01 # Localhost ip
nifi.cluster.node.protocol.port=7474
nifi.web.http.host=IP01 # Localhost ip. use either https or http
nifi.web.http.port=8443
nifi.cluster.load.balance.port=6342

zookeeper.properties
This file contains additional info used by ZooKeeper to know about the servers (each server also needs a myid file; see the sketch at the end of this post):

server.1=IP01:2888:3888
server.2=IP02:2888:3888
server.3=IP03:2888:3888
clientPort=2181

In order to maintain the NiFi state across instances, you need to modify state-management.xml and provide a state provider pointing to ZooKeeper:

<cluster-provider>
  <id>zk-provider</id>
  <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
<property name="Connect String">ip1:2181,ip2:2181,ip3:2181</property>
<property name="Root Node">/nifi</property>
<property name="Session Timeout">10 seconds</property>
<property name="Access Control">Open</property>
</cluster-provider>

Here the Access Control has been set to Open to allow login without a username/password, but I would assume you should configure it to use your OIDC provider.

OpenID Connect SSO properties
Does the user email exist in the OIDC token and is it accessible? Check Azure AD and confirm the token contains the expected email and upn claims:

nifi.security.user.oidc.claim.identifying.user={email}
nifi.security.user.oidc.fallback.claims.identifying.user=upn

Ensure the OpenID discovery URL is reachable from the NiFi nodes and resolves correctly; run the curl below to confirm connectivity:
curl -v https://login.microsoftonline.com/XXXXXXXXXXXXXXXXXXXXXXX/v2.0/.well-known/openid-configuration

Validate HTTPS and OIDC:
curl -vk https://<nifi-node>:8443/nifi-api/
curl -vk https://<nifi-node>:8443/nifi-api/access/oidc/callback

Clear cache
Stop NiFi on all nodes, clear the state directory referenced in ./conf/state-management.xml, and restart the cluster.

Hope that helps. Happy hadooping
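One extra detail for the embedded ZooKeeper setup above (the sketch referenced next to zookeeper.properties): each node also needs a myid file whose number matches its server.N entry. A minimal sketch, assuming the default embedded ZooKeeper dataDir of ./state/zookeeper:

# run from the NiFi install directory on node 1 (use 2 and 3 on the other nodes)
mkdir -p ./state/zookeeper
echo 1 > ./state/zookeeper/myid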
01-04-2025
06:13 AM
@spiker Can you quickly run through the steps below and revert with the result?

1. Stop NiFi:
./bin/nifi.sh stop

2. Back up the configuration files:
cp conf/authorizations.xml conf/authorizations.xml.backup
cp conf/users.xml conf/users.xml.backup

3. Clear the login identity provider cache:
rm -rf ./state/local/login-identity-providers/

4. Verify the file permissions:
chown -R nifi:nifi ./conf/
chmod 660 conf/authorizations.xml
chmod 660 conf/users.xml

5. Start NiFi:
./bin/nifi.sh start

6. Check the logs for additional details (see the grep sketch at the end of this post):
tail -f logs/nifi-app.log

If these steps don't resolve the issue, please check and share the full stack trace from nifi-app.log.

Happy hadooping
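If tailing the log is too noisy, a focused grep can help pull out the relevant entries. A minimal sketch, assuming the default log location used above:

# show the most recent authorization / login-identity related messages
grep -iE "authoriz|login-identity|ERROR" logs/nifi-app.log | tail -n 50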