Member since: 01-19-2017
Posts: 3656
Kudos Received: 624
Solutions: 365

My Accepted Solutions
Title | Views | Posted
---|---|---
 | 379 | 12-22-2024 07:33 AM
 | 238 | 12-18-2024 12:21 PM
 | 253 | 12-18-2024 08:50 AM
 | 1072 | 12-17-2024 07:48 AM
 | 372 | 08-02-2024 08:15 AM
02-12-2025
01:42 AM
@0tto Could you please share your NiFi logs? Happy hadooping
01-27-2025
01:18 AM
@ose_gold The SFTP issues appear to stem from incorrect permissions and ownership in your Docker setup. Here's the analysis and solution.

Key issues:
- Root ownership of /home/foo instead of user 'foo'
- Incorrect chroot directory permissions
- Docker volume mount permissions

You have a volume nifi-conf:/opt/nifi/nifi-current/conf on the NiFi container, but it's not declared in the volumes section at the bottom; check the addition below. Docker Compose also has newer versions (3.8), so it might be a good idea to update depending on the features you need.

version: '3'   # Docker Compose has newer versions, e.g. 3.8
services:
  nifi:
    image: apache/nifi:latest   # Consider pinning a specific version
    container_name: nifi
    ports:
      - "8089:8443"
      - "5656:5656"
    volumes:
      - nifi-conf:/opt/nifi/nifi-current/conf
    environment:
      NIFI_WEB_PROXY_HOST: localhost:8089
      SINGLE_USER_CREDENTIALS_USERNAME: admin
      SINGLE_USER_CREDENTIALS_PASSWORD: {xxyourpasswdxx}
  sftp:
    image: atmoz/sftp
    volumes:
      - ./sftp/upload:/home/foo/upload
    ports:
      - "2222:22"
    command: foo:pass:1001
    # Add these permissions
    user: "1001:1001"
    environment:
      - CHOWN_USERS=foo
      - CHOWN_DIRS=/home/foo
volumes:
  nifi-conf:

Before starting the containers, set correct permissions on the host:

mkdir -p ./sftp/upload
chown -R 1001:1001 ./sftp/upload
chmod 755 ./sftp/upload

This configuration:
- Sets proper user/group ownership
- Maintains correct chroot permissions
- Ensures volume mount permissions are preserved
- Prevents permission conflicts between host and container

A quick way to verify the setup is sketched below. Happy Hadooping
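Once the containers are up, you can confirm the permissions took effect with a short SFTP session from the host. This is just a sanity-check sketch using the port, user, and password from the compose file above; testfile.txt is any local file.

# Connect to the containerized SFTP server on the mapped port (password: pass)
sftp -P 2222 foo@localhost
# Inside the session, confirm you can write to the chrooted upload directory
sftp> cd upload
sftp> put testfile.txt
sftp> ls -l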
01-20-2025
05:41 AM
@rsurti If SAML authentication works for LAN users but not for users on Wi-Fi, even when both are on the same network, it suggests differences in how the network or devices are configured for each connection type. Here's how you can troubleshoot and resolve this issue:

DNS Resolution
Check whether Wi-Fi users can resolve the identity provider (IdP) and service provider (SP) URLs correctly:
nslookup idp.example.com

SAML Traffic
Capture SAML requests/responses using browser developer tools (Network tab). Look for differences in:
- Redirect URLs
- Assertions
- Error codes
Common issues include misconfigured callback URLs.

Device Configuration
Check whether device firewalls or VPNs are interfering with SAML traffic over Wi-Fi. Ensure browser settings (e.g., cookie policies) do not block SAML cookies.

Side-by-Side Comparison
Connect a user to LAN and Wi-Fi simultaneously (if possible) to identify differences in routing or access; a quick comparison sketch is at the end of this reply.

Please revert. Happy hadooping
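A rough comparison sketch (run once from a LAN machine and once from a Wi-Fi machine) to surface DNS or connectivity differences; idp.example.com and sp.example.com are placeholders for your actual IdP and SP hostnames:

# Resolve each SAML endpoint and note the addresses returned
for host in idp.example.com sp.example.com; do
  echo "== $host =="
  nslookup "$host"
  # Check that the HTTPS endpoint answers and which address was used
  curl -sk -o /dev/null -w "HTTP %{http_code} via %{remote_ip}\n" "https://$host/"
done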
01-06-2025
12:32 AM
@spiker According to the ZooKeeper parameter below, you have set it to false, meaning you are using an external rather than the embedded ZooKeeper. Is that the case?

# Specifies whether or not this instance of NiFi should run an embedded ZooKeeper server
nifi.state.management.embedded.zookeeper.start=false

Yet the ZooKeeper config below seems contradictory:

# zookeeper properties, used for cluster management
# nifi.zookeeper.connect.string=zookeeper:2181   # should resolve to the correct host(s) of the ZooKeeper ensemble

Check the documentation for setting up external ZooKeepers. If you are using the embedded ZooKeeper, adjust the following entries in your nifi.properties:

nifi.state.management.embedded.zookeeper.start=true
nifi.zookeeper.connect.string=IP01:2181,IP02:2181,IP03:2181
nifi.zookeeper.auth.type=default
nifi.remote.input.host=IP01 # Localhost ip
nifi.remote.input.secure=false
nifi.remote.input.socket.port=9998
nifi.remote.input.http.enabled=true # set true if you want http
nifi.cluster.is.node=true
nifi.cluster.node.address=IP01 # Localhost ip
nifi.cluster.node.protocol.port=7474
nifi.web.http.host=IP01 # Localhost ip. use either https or http
nifi.web.http.port=8443
nifi.cluster.load.balance.port=6342

zookeeper.properties
This file contains additional information used by ZooKeeper to know about the servers in the ensemble:

server.1=IP01:2888:3888
server.2=IP02:2888:3888
server.3=IP03:2888:3888
clientPort=2181

In order to maintain the NiFi state across instances, you need to modify state-management.xml and provide a state provider pointing to ZooKeeper:

<cluster-provider>
    <id>zk-provider</id>
    <class>org.apache.nifi.controller.state.providers.zookeeper.ZooKeeperStateProvider</class>
    <property name="Connect String">ip1:2181,ip2:2181,ip3:2181</property>
    <property name="Root Node">/nifi</property>
    <property name="Session Timeout">10 seconds</property>
    <property name="Access Control">Open</property>
</cluster-provider>

Here the Access Control has been set to Open so you can log in without a username/password, but I would assume you should configure it to use your OIDC provider.

OpenID Connect SSO Properties
Does the user email exist in the OIDC token and is it accessible? Check Azure AD and confirm the token contains the expected email and upn claims:

nifi.security.user.oidc.claim.identifying.user={email}
nifi.security.user.oidc.fallback.claims.identifying.user=upn

Can you ensure the OpenID discovery URL is reachable from the NiFi nodes and resolves correctly? Run the curl below to confirm connectivity:

curl -v https://login.microsoftonline.com/XXXXXXXXXXXXXXXXXXXXXXX/v2.0/.well-known/openid-configuration

Validate HTTPS and OIDC:

curl -vk https://<nifi-node>:8443/nifi-api/
curl -vk https://<nifi-node>:8443/nifi-api/access/oidc/callback

Clear Cache
Stop NiFi on all nodes, clear the local state directory referenced in ./conf/state-management.xml, and restart the cluster. A quick ensemble health check is sketched below.

Hope that helps. Happy hadooping
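Before restarting NiFi, you can confirm the external ensemble is healthy with the ZooKeeper four-letter-word commands (on newer ZooKeeper releases these must be allowed via 4lw.commands.whitelist); IP01-IP03 are the ensemble hosts from the properties above:

# Each node should answer 'imok'; exactly one should report Mode: leader
for zk in IP01 IP02 IP03; do
  echo "== $zk =="
  echo ruok | nc "$zk" 2181
  echo stat | nc "$zk" 2181 | grep Mode
done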
01-04-2025
06:13 AM
@spiker Can you quickly do the below steps and revert?

1. Stop NiFi
./bin/nifi.sh stop

2. Back up the configuration files
cp conf/authorizations.xml conf/authorizations.xml.backup
cp conf/users.xml conf/users.xml.backup

3. Clear the login identity provider cache
rm -rf ./state/local/login-identity-providers/

4. Verify file permissions
chown -R nifi:nifi ./conf/
chmod 660 conf/authorizations.xml
chmod 660 conf/users.xml

5. Start NiFi
./bin/nifi.sh start

6. Check the logs for additional details
tail -f logs/nifi-app.log

If these steps don't resolve the issue, please check and share the full stack trace from nifi-app.log. Happy hadooping
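As a quick follow-up check after step 5, the following confirms the ownership and mode changes took effect and that NiFi came back up; paths assume you are in the NiFi installation directory, as above:

# Ownership should be nifi:nifi and the mode 660 on both files
ls -l conf/authorizations.xml conf/users.xml
# Confirm the NiFi process is running again
./bin/nifi.sh status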
01-03-2025
10:15 AM
1 Kudo
@spiker Can you also share your nifi.properties?
01-03-2025
07:18 AM
@spiker Can you share the logs?
01-03-2025
02:20 AM
1 Kudo
@spiker The error you're encountering in the Apache NiFi logs suggests a configuration issue related to OpenID Connect (OIDC) authentication and proxy settings within your Kubernetes environment. The failure chain is: Initial Request → Authentication Failed → JSON Parse Error → Internal Server Error: "Untrusted proxy"

Here is a good document to go through that may give you a eureka moment: Securing NiFi with Existing CA Certificates

Ensure that NiFi trusts the proxy making the request:
nifi.security.whitelisted.proxy.hostnames=172\.24\.0\.3

Check the SSL certificates used by NiFi and ensure the truststore is correctly configured:
nifi.security.truststore=/path/to/truststore.jks
nifi.security.truststoreType=JKS
nifi.security.proxy.enabled=true

Check NiFi permissions on the truststore:
chown -R nifi:nifi /path/to/truststore.jks
chmod 640 /path/to/truststore.jks

# Import the proxy certificate into NiFi's truststore
keytool -import -alias proxy-cert -file proxy.crt -keystore truststore.jks
# Verify the truststore contains the proxy certificate
keytool -list -v -keystore truststore.jks

Ensure that OIDC is properly set up in the nifi.properties file; add the following properties:
nifi.security.user.oidc.redirect.url=https://<nifi-host>:8443/nifi-api/access/oidc/callback
nifi.security.user.login.identity.provider=oidc-provider

Validate the Kubernetes Ingress and Service: since you are using a Kubernetes ingress or service, ensure headers and SSL information are properly forwarded.

Enable detailed logging in NiFi by raising the log level for the security package in logback.xml to identify specific issues with headers or tokens:
<logger name="org.apache.nifi.web.security" level="DEBUG"/>

Restart NiFi:
./bin/nifi.sh restart

Test with curl to simulate an API call and validate the request flow:
curl -k -H "Authorization: Bearer <your-access-token>" https://<nifi-host>:8443/nifi-api/flow/current-user

A quick truststore sanity check is sketched below.

Note: Please verify that the Kubernetes environment has the necessary DNS resolution, network connectivity, and the correct OpenID Connect metadata URL. Happy hadooping
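Before restarting, you can confirm the proxy certificate really landed in the truststore and that its subject/SANs match what you put in nifi.security.whitelisted.proxy.hostnames; truststore.jks, proxy.crt, and the changeit password are placeholder names following the commands above:

# List just the proxy-cert entry and note its fingerprint
keytool -list -keystore truststore.jks -storepass changeit -alias proxy-cert
# Print the subject and SANs of the proxy certificate for comparison with the whitelist
openssl x509 -in proxy.crt -noout -subject
openssl x509 -in proxy.crt -noout -text | grep -A1 "Subject Alternative Name"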
01-02-2025
10:59 AM
@tuyen123 If you have installed other applications or dependencies for Spark, Hive, etc. that use a different version of protobuf, the conflict can cause issues with the block report.

Locate conflicting protobuf JARs:
find $HADOOP_HOME -name "protobuf*.jar"
Check whether multiple versions are present in $HADOOP_HOME/lib or other dependency paths.

Remove conflicting JARs: keep only the protobuf JAR version that matches your Hadoop distribution, e.g. protobuf-java-2.5.0.jar. Alternatively, explicitly set the protobuf version in your CLASSPATH. If third-party libraries are included in your Hadoop environment, they might override the correct protobuf version. Open $HADOOP_HOME/etc/hadoop/hadoop-env.sh and prepend the correct protobuf library:
export HADOOP_CLASSPATH=/path/to/protobuf-java-2.5.0.jar:$HADOOP_CLASSPATH

Verify the classpath:
hadoop classpath | grep protobuf
Ensure it includes the correct protobuf JAR. A quick one-liner for spotting duplicates is sketched below.

Please try that and revert. Happy hadooping
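As a quick check, the following lists every protobuf JAR that actually ends up on the Hadoop classpath so duplicates stand out; it only assumes the hadoop command and standard shell tools are on PATH:

# Expand the Hadoop classpath and print any protobuf JARs it contains, one per line
hadoop classpath --glob | tr ':' '\n' | grep -i 'protobuf.*\.jar' | sort -u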