Member since
01-19-2017
3656
Posts
624
Kudos Received
365
Solutions
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 377 | 12-22-2024 07:33 AM
 | 237 | 12-18-2024 12:21 PM
 | 249 | 12-18-2024 08:50 AM
 | 1055 | 12-17-2024 07:48 AM
 | 372 | 08-02-2024 08:15 AM
02-12-2025
01:42 AM
@0tto Could you please share your NiFi logs? Happy hadooping
02-04-2025
06:35 AM
Check the Beeline console output and the HiveServer2 (HS2) logs to identify where the query gets stuck, then act accordingly.
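As a rough sketch of that triage, a small helper that scans an HS2 log for lines that commonly explain a stuck query. The default log path and the grep patterns are assumptions; adjust them for your deployment.

```shell
# Sketch only: scan a HiveServer2 log for common "stuck query" indicators
# (lock waits, full queues, GC pauses, memory pressure). The default path
# and the patterns are assumptions -- adjust for your cluster.
hs2_stuck_hints() {
  local log=${1:-/var/log/hive/hiveserver2.log}
  grep -E 'Waiting to acquire|acquire.*lock|Queue full|pause|OutOfMemory' "$log" | tail -n 20
}
```

Run it against the active HS2 log while the query hangs, e.g. `hs2_stuck_hints /var/log/hive/hiveserver2.log`.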
01-27-2025
01:18 AM
@ose_gold The SFTP issues appear to stem from incorrect permissions and ownership in your Docker setup. Here's the analysis and solution.

Key issues:
- Root ownership of /home/foo instead of user 'foo'
- Incorrect chroot directory permissions
- Docker volume mount permissions
- You have a volume nifi-conf:/opt/nifi/nifi-current/conf on the NiFi container, but it is not declared in the top-level volumes section; see the addition below.

Also note that Docker Compose has newer file versions (e.g. 3.8); it might be a good idea to update depending on the features you need.

```yaml
version: '3'  # Docker Compose has newer versions, e.g. 3.8
services:
  nifi:
    image: apache/nifi:latest  # Consider pinning a specific version
    container_name: nifi
    ports:
      - "8089:8443"
      - "5656:5656"
    volumes:
      - nifi-conf:/opt/nifi/nifi-current/conf
    environment:
      NIFI_WEB_PROXY_HOST: localhost:8089
      SINGLE_USER_CREDENTIALS_USERNAME: admin
      SINGLE_USER_CREDENTIALS_PASSWORD: {xxyourpasswdxx}
  sftp:
    image: atmoz/sftp
    volumes:
      - ./sftp/upload:/home/foo/upload
    ports:
      - "2222:22"
    command: foo:pass:1001
    # Add these permissions
    user: "1001:1001"
    environment:
      - CHOWN_USERS=foo
      - CHOWN_DIRS=/home/foo

volumes:
  nifi-conf:
```

Before starting the containers, set the correct permissions on the host:

```shell
mkdir -p ./sftp/upload
chown -R 1001:1001 ./sftp/upload
chmod 755 ./sftp/upload
```

This configuration:
- Sets proper user/group ownership
- Maintains correct chroot permissions
- Ensures volume mount permissions are preserved
- Prevents permission conflicts between host and container

Happy Hadooping
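To catch the ownership problem before the containers start, here is a minimal pre-flight check. It assumes GNU coreutils `stat`; the directory and the 1001:1001 uid:gid pair are the ones used in the compose file above.

```shell
# Sketch: verify a host directory's uid:gid before `docker compose up`,
# so the sftp chroot does not end up root-owned. Assumes GNU stat.
check_owner() {
  local dir=$1 expected=$2   # expected is "uid:gid", e.g. "1001:1001"
  [ "$(stat -c '%u:%g' "$dir")" = "$expected" ]
}

# Example: check_owner ./sftp/upload 1001:1001 || echo "fix ownership first"
```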
01-23-2025
09:15 AM
Hello @polingsky202, I'm facing the same problem and had the same errors in the logs while implementing HAProxy with 3 brokers. Have you solved this issue? Thank you for your help. Best regards.
01-20-2025
05:41 AM
@rsurti If SAML authentication works for LAN users but not for Wi-Fi users, even when both are on the same network, it suggests differences in how the network or devices are configured for each connection type. Here's how you can troubleshoot and resolve this issue:

DNS resolution:
- Check whether Wi-Fi users can resolve the identity provider (IdP) and service provider (SP) URLs correctly: nslookup idp.example.com

SAML traffic:
- Capture SAML requests/responses using the browser developer tools (Network tab) and look for differences in redirect URLs, assertions, and error codes. Misconfigured callback URLs are a common issue.

Device configuration:
- Check whether device firewalls or VPNs are interfering with SAML traffic over Wi-Fi.
- Ensure browser settings (e.g., cookie policies) do not block SAML cookies.
- Connect a user to LAN and Wi-Fi simultaneously (if possible) to identify differences in routing or access.

Please revert. Happy hadooping
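One way to make the DNS comparison concrete is to resolve the IdP host from a LAN client and from a Wi-Fi client and diff the results. This is a sketch: the hostname is a placeholder, and `getent` is assumed available on the clients.

```shell
# Sketch: print the sorted set of IPs a host resolves to. Run on a LAN
# client and a Wi-Fi client, then diff the outputs -- a mismatch points
# at split-horizon DNS or a different resolver on the Wi-Fi path.
resolve_ips() {
  getent hosts "$1" | awk '{print $1}' | sort -u
}

# Example (placeholder hostname):
# diff <(resolve_ips idp.example.com) lan_resolution.txt
```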
01-09-2025
03:09 AM
ZooKeeper 3.8.4 uses Logback for logging, which requires two libraries: logback-core-1.2.13.jar and logback-classic-1.2.13.jar (the missing jar). One of them was missing from my bundle. I downloaded the missing jar, copied it into the zookeeper/lib/ directory, and restarted the service. This worked for me.

Steps: locate the existing logback jar, download the missing one, and copy it into place:

```shell
cd /opt/
wget https://repo1.maven.org/maven2/ch/qos/logback/logback-classic/1.2.13/logback-classic-1.2.13.jar
cksum logback-classic-1.2.13.jar | grep -i "103870831 232073"
chown root:root logback-classic-1.2.13.jar
cp logback-classic-1.2.13.jar /usr/odp/3.3.6.0-1/zookeeper/lib/
cp logback-classic-1.2.13.jar /usr/odp/3.3.6.0-1/cruise-control3/dependant-libs/
```
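The cksum-pipe-grep line above only signals a mismatch through grep's exit status, which is easy to miss in an interactive session. A small wrapper that fails loudly can help; the expected "crc size" pair here is the one quoted in the post and is assumed correct for this jar.

```shell
# Sketch: verify a downloaded jar's POSIX cksum before copying it into lib/.
# The expected "<crc> <bytes>" pair is taken from the post above (assumption).
verify_cksum() {
  local file=$1 expected=$2   # expected: "<crc> <bytes>", e.g. "103870831 232073"
  [ "$(cksum "$file" | awk '{print $1, $2}')" = "$expected" ]
}

# Example:
# verify_cksum logback-classic-1.2.13.jar "103870831 232073" || { echo "bad download"; exit 1; }
```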
01-07-2025
11:23 PM
1 Kudo
Solved: simply switching to Java 11 resolves the issue. It's crucial to check the Java version of the client that initialized the data into the cluster. That is the key point.
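A quick way to check which major Java version a client is actually running is to parse its version string. This sketch handles both the legacy `1.8.0_292`-style numbering (major = 8) and the modern `11.0.2` style (major = 11); the `java -version` invocation in the example is whatever `java` resolves to on the client host.

```shell
# Sketch: extract the major Java version from a version string such as
# "1.8.0_292" (legacy scheme -> 8) or "11.0.2" (modern scheme -> 11).
java_major() {
  echo "$1" | sed -E 's/^1\.([0-9]+).*/\1/; s/^([0-9]+)\..*/\1/'
}

# Example against the live JVM on the client host:
# java_major "$(java -version 2>&1 | sed -nE 's/.*version "([^"]+)".*/\1/p')"
```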
01-06-2025
08:15 AM
@Shelton / @MattWho, my NiFi is behind a corporate proxy; because of that, NiFi in production is not able to reach the Azure OIDC discovery URL. Could you please help me with it? Thanks, spiker
12-31-2024
09:47 AM
2 Kudos
@MrNicen This is a very common problem where the table gets stuck in a DISABLING state. Please try this series of diagnostic and repair steps.

1. Verify the current state:

```shell
echo "scan 'hbase:meta'" | hbase shell
```

2. Try to force the table state change using HBCK2:

```shell
# Download HBCK2 if not already present
wget https://repository.apache.org/content/repositories/releases/org/apache/hbase/hbase-hbck2/2.0.2/hbase-hbck2-2.0.2.jar

# Set table to ENABLED state
hbase hbck -j ./hbase-hbck2-2.0.2.jar setTableState <table_name> ENABLED
```

3. If that doesn't work, try cleaning the znode:

```shell
# Connect to ZooKeeper
./zkCli.sh -server localhost:2181
# Check the table znode
ls /hbase/table/<table_name>
# Delete the table znode if present
rmr /hbase/table/<table_name>
```

4. If the issue persists, try manually updating the meta table:

```shell
hbase shell
# Disable table
disable '<table_name>'
# Wait a few seconds, then enable
enable '<table_name>'
# If that fails, try force disable
disable_all '<table_name>'
```

5. If still stuck, try these repair commands:

```shell
# Clear the META table state
echo "put 'hbase:meta', '<table_name>', 'table:state', '\x08\x00'" | hbase shell
# Recreate the regions
hbase hbck -j ./hbase-hbck2-2.0.2.jar assigns <table_name>
```

6. As a last resort, try a full cleanup:

```shell
# Stop HBase
./bin/stop-hbase.sh
# Clear ZooKeeper data
./zkCli.sh -server localhost:2181
rmr /hbase
# Remove the META directory
rm -rf /hbase/data/hbase/meta
# Start HBase
./bin/start-hbase.sh
# Recreate the table structure
hbase shell
create '<table_name>', {NAME => 'cf'}  # Adjust column families as needed
```

If none of these steps work, we can try a more aggressive approach.

1. Back up your data:

```shell
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot <snapshot_name> -copy-to hdfs://backup-cluster/hbase
```

2. Try a clean META rebuild:

```shell
# Stop HBase
./bin/stop-hbase.sh
# Clear META
rm -rf /hbase/data/default/hbase/meta
# Start HBase in repair mode
env HBASE_OPTS="-XX:+UseParNewGC -XX:+UseConcMarkSweepGC" ./bin/hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair
# Start HBase normally
./bin/start-hbase.sh
```

Additional troubleshooting tips:
- Check the HBase logs for specific errors: tail -f /var/log/hbase/hbase-master.log
- Verify cluster health: hbase hbck -details
- Monitor region transitions: echo "scan 'hbase:meta', {COLUMNS => 'info:regioninfo'}" | hbase shell

If you encounter any specific errors during these steps, please share them and I can provide more targeted solutions.
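To confirm whether a given step actually changed the table state, you can parse the table:state cell out of the meta scan instead of eyeballing it. This is a sketch: it assumes the standard hbase shell scan layout (row, column=..., timestamp=..., value=...), and the table name in the example is a placeholder.

```shell
# Sketch: read `echo "scan 'hbase:meta'" | hbase shell` output on stdin and
# print the raw table:state value for the given table. Assumes the usual
# hbase shell scan layout: "<row> column=table:state, timestamp=..., value=..."
table_state() {
  awk -v t="$1" '$1 == t && /column=table:state/ { sub(/.*value=/, ""); print }'
}

# Example (placeholder table name):
# echo "scan 'hbase:meta'" | hbase shell | table_state my_table
```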
12-25-2024
09:50 PM
1 Kudo
Hi @Shelton, sorry for the late reply. Here is my JSON response (400): Here is InvokeHTTP: I have used the below properties in InvokeHTTP: Always Output Response: set to true. Output Response Attributes: 400 and 500. Here is RouteOnAttribute: Is it like that?