Member since: 01-19-2017
Posts: 3655
Kudos Received: 624
Solutions: 364
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 346 | 12-22-2024 07:33 AM
 | 216 | 12-18-2024 12:21 PM
 | 955 | 12-17-2024 07:48 AM
 | 354 | 08-02-2024 08:15 AM
 | 3718 | 04-06-2023 12:49 PM
02-04-2025
06:35 AM
Check the Beeline console output and the HiveServer2 (HS2) logs to identify where it gets stuck, and act accordingly.
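For example, a minimal way to surface where the session hangs (a sketch; the JDBC URL and HS2 log path are assumptions that vary by cluster):

```bash
# Reproduce the hang with verbose client-side output
beeline --verbose=true -u "jdbc:hive2://<hs2-host>:10000/default" -e "SELECT 1;"

# In another terminal, watch the HS2 log while the query runs (path is an assumption)
tail -f /var/log/hive/hiveserver2.log
```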
01-27-2025
01:18 AM
@ose_gold The SFTP issues appear to stem from incorrect permissions and ownership in your Docker setup. Here's the analysis and solution.

Key issues:
- Root ownership of /home/foo instead of user 'foo'
- Incorrect chroot directory permissions
- Docker volume mount permissions

You have a volume nifi-conf:/opt/nifi/nifi-current/conf for the NiFi container, but it's not declared in the volumes section at the bottom; check the addition below. Also note that Docker Compose has newer file versions (e.g., 3.8), so it may be worth updating depending on the features you need.

```yaml
version: '3'  # Docker Compose has newer versions (e.g., 3.8)
services:
  nifi:
    image: apache/nifi:latest  # Consider pinning a specific version
    container_name: nifi
    ports:
      - "8089:8443"
      - "5656:5656"
    volumes:
      - nifi-conf:/opt/nifi/nifi-current/conf
    environment:
      NIFI_WEB_PROXY_HOST: localhost:8089
      SINGLE_USER_CREDENTIALS_USERNAME: admin
      SINGLE_USER_CREDENTIALS_PASSWORD: {xxyourpasswdxx}
  sftp:
    image: atmoz/sftp
    volumes:
      - ./sftp/upload:/home/foo/upload
    ports:
      - "2222:22"
    command: foo:pass:1001
    # Add these permissions
    user: "1001:1001"
    environment:
      - CHOWN_USERS=foo
      - CHOWN_DIRS=/home/foo
volumes:
  nifi-conf:
```

Before starting the containers, set the correct permissions on the host:

```bash
mkdir -p ./sftp/upload
chown -R 1001:1001 ./sftp/upload
chmod 755 ./sftp/upload
```

This configuration:
- Sets proper user/group ownership
- Maintains correct chroot permissions
- Ensures volume mount permissions are preserved
- Prevents permission conflicts between host and container

Happy Hadooping
01-23-2025
09:15 AM
Hello @polingsky202, I'm facing the same problem and seeing the same errors in the logs while implementing HAProxy with 3 brokers. Have you solved this issue? Thank you for your help. Best regards.
01-20-2025
05:41 AM
@rsurti If SAML authentication works for LAN users but not for users on Wi-Fi, even when both are on the same network, it suggests differences in how the network or devices are configured for each connection type. Here's how you can troubleshoot and resolve this issue:

1. DNS resolution: check whether Wi-Fi users can resolve the identity provider (IdP) and service provider (SP) URLs correctly, e.g. `nslookup idp.example.com`.
2. Capture SAML requests/responses using the browser developer tools (Network tab) and look for differences in redirect URLs, assertions, and error codes. Misconfigured callback URLs are a common culprit.
3. Device configuration: check whether device firewalls or VPNs are interfering with SAML traffic over Wi-Fi, and ensure browser settings (e.g., cookie policies) do not block SAML cookies.
4. Connect a user to LAN and Wi-Fi simultaneously (if possible) to identify differences in routing or access; see the sketch after this list for a quick side-by-side check.

Please revert. Happy Hadooping
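A quick side-by-side check you can run from one Wi-Fi client and one LAN client and compare (a sketch; idp.example.com stands in for your actual IdP host):

```bash
# Compare DNS answers between the two networks
nslookup idp.example.com

# Confirm TLS handshake and HTTP reachability of the IdP from each network
curl -sv https://idp.example.com/ -o /dev/null 2>&1 | grep -E "Connected|HTTP/"
```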
01-09-2025
03:09 AM
ZK 3.8.4 uses Logback for logging, which needs two libraries: logback-core-1.2.13.jar and logback-classic-1.2.13.jar (the missing jar). One of them was missing from my bundle. I downloaded the missing jar, copied it into the zookeeper/lib/ directory, and restarted the service; this worked for me.

Steps: locate the existing logback jar, download the missing one, and copy it into that directory:

```bash
cd /opt/
wget https://repo1.maven.org/maven2/ch/qos/logback/logback-classic/1.2.13/logback-classic-1.2.13.jar
cksum logback-classic-1.2.13.jar | grep -i "103870831 232073"
chown root:root logback-classic-1.2.13.jar
cp logback-classic-1.2.13.jar /usr/odp/3.3.6.0-1/zookeeper/lib/
cp logback-classic-1.2.13.jar /usr/odp/3.3.6.0-1/cruise-control3/dependant-libs/
```
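To confirm the fix before restarting the service, you can check that both Logback jars are now present (a sketch, assuming the same ODP lib path as above):

```bash
# Expect both logback-core-1.2.13.jar and logback-classic-1.2.13.jar in the listing
ls /usr/odp/3.3.6.0-1/zookeeper/lib/ | grep -i logback
```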
01-07-2025
11:23 PM
1 Kudo
Solved: simply switching the Java version to 11 resolves the issue. It's crucial to check the Java version of the client that initialized the data into the cluster; that is the key point.
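A quick way to confirm which Java version the client is actually running (a sketch; install paths vary):

```bash
java -version        # should report a Java 11 runtime on the client
echo "$JAVA_HOME"    # confirm it points at the JDK 11 install
```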
01-06-2025
08:15 AM
@Shelton / @MattWho, my NiFi is behind a corporate proxy; because of that, in production NiFi is not able to reach the Azure OIDC discovery URL. Could you please help me with it? Thanks, spiker
12-31-2024
09:47 AM
2 Kudos
@MrNicen This is a very common problem where the table gets stuck in a DISABLING state. Please try this series of diagnostic and repair steps.

First, verify the current state:

```bash
echo "scan 'hbase:meta'" | hbase shell
```

Try to force the table state change using HBCK2:

```bash
# Download HBCK2 if not already present
wget https://repository.apache.org/content/repositories/releases/org/apache/hbase/hbase-hbck2/2.0.2/hbase-hbck2-2.0.2.jar

# Set table to ENABLED state
hbase hbck -j ./hbase-hbck2-2.0.2.jar setTableState <table_name> ENABLED
```

If that doesn't work, try cleaning the znode:

```bash
# Connect to ZooKeeper
./zkCli.sh -server localhost:2181
# Check the table znode
ls /hbase/table/<table_name>
# Delete the table znode if present
rmr /hbase/table/<table_name>
```

If the issue persists, try manually updating the meta table:

```bash
hbase shell
# Disable table
disable '<table_name>'
# Wait a few seconds, then enable
enable '<table_name>'
# If that fails, try force disable
disable_all '<table_name>'
```

If still stuck, try these repair commands:

```bash
# Clear the META table state
echo "put 'hbase:meta', '<table_name>', 'table:state', '\x08\x00'" | hbase shell
# Recreate the regions
hbase hbck -j ./hbase-hbck2-2.0.2.jar assigns <table_name>
```

As a last resort, try a full cleanup:

```bash
# Stop HBase
./bin/stop-hbase.sh
# Clear ZooKeeper data
./zkCli.sh -server localhost:2181
rmr /hbase
# Remove the META directory
rm -rf /hbase/data/hbase/meta
# Start HBase
./bin/start-hbase.sh
# Recreate the table structure
hbase shell
create '<table_name>', {NAME => 'cf'}  # Adjust column families as needed
```

If none of these steps work, we can try a more aggressive approach. Back up your data:

```bash
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot <snapshot_name> -copy-to hdfs://backup-cluster/hbase
```

Then try a clean META rebuild:

```bash
# Stop HBase
./bin/stop-hbase.sh
# Clear META
rm -rf /hbase/data/default/hbase/meta
# Start HBase in repair mode
env HBASE_OPTS="-XX:+UseParNewGC -XX:+UseConcMarkSweepGC" ./bin/hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair
# Start HBase normally
./bin/start-hbase.sh
```

Additional troubleshooting tips. Check the HBase logs for specific errors:

```bash
tail -f /var/log/hbase/hbase-master.log
```

Verify cluster health:

```bash
hbase hbck -details
```

Monitor region transitions:

```bash
echo "scan 'hbase:meta', {COLUMNS => 'info:regioninfo'}" | hbase shell
```

If you encounter any specific errors during these steps, please share them and I can provide more targeted solutions.
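Between steps, it also helps to confirm what state the table is actually in (a small sketch using standard HBase shell commands):

```bash
# Each prints true/false for the table's current state
echo "is_enabled '<table_name>'" | hbase shell
echo "is_disabled '<table_name>'" | hbase shell
```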
12-25-2024
09:50 PM
1 Kudo
Hi @Shelton, sorry for the late reply. Here is my JSON 400 response: [screenshot]. Here is the InvokeHTTP processor: [screenshot]. I have used the below properties in InvokeHTTP:
- Always Output Response: set to true
- Output Response Attributes: 400 and 500

Here is the RouteOnAttribute: [screenshot]. Is it like that?
12-25-2024
03:57 PM
1 Kudo
@Roomka This looks like an old post; all the same, I will try to answer, and I hope it's still relevant to others too. The challenges you're facing with Apache NiFi's development life cycle stem from its design, which does not fully separate code/logic from environment-specific configuration. To address this and create a robust process for porting flows from dev to QA to prod, consider the following solutions:

1. Use Parameter Contexts for configuration management
NiFi supports Parameter Contexts, which can be used to externalize environment-specific configuration, separating logic from environment-specific details.
Steps:
- Define Parameter Contexts for each environment (e.g., DevContext, QAContext, ProdContext).
- Externalize configurations such as number of threads, cron schedules, database connection strings, and API endpoints.
- When deploying to a new environment, import the flow without the environment-specific Parameter Contexts, then assign the appropriate Parameter Context for that environment.

2. Use NiFi Registry for flow versioning and promotion
The NiFi Registry provides a way to version control your flows and manage deployments across environments (see the CLI sketch after this list).
Steps:
- Set up NiFi Registry and connect your NiFi instances to it.
- Use the Registry to version your flows.
- Promote flows from dev to QA to prod by exporting/importing them through the Registry.
- In each environment, override parameters using the appropriate Parameter Context.

3. Handle environment-specific differences with external configuration management
If Parameter Contexts are insufficient, consider externalizing configuration entirely using tools like Consul, AWS Parameter Store, or environment variables.
Steps:
- Store all environment-specific configuration in an external tool.
- Use a custom script or a NiFi processor to fetch configuration dynamically at runtime.
This ensures the flow logic remains the same across environments, and only the external configuration varies.

4. Adopt best practices for flow design
To reduce the impact of embedded environment-specific details, follow these design principles:
- Avoid hardcoding resource-intensive configurations like thread counts or cron schedules into the flow.
- Use NiFi Variables or Parameters wherever possible to keep configuration flexible.
- Split flows into smaller, reusable components to reduce complexity and improve maintainability.

5. Use deployment automation
Automate the deployment process to ensure consistency and reduce manual errors:
- Use tools like Ansible, Terraform, or Jenkins to automate flow deployment.
- Include steps to set up Parameter Contexts or fetch external configuration as part of the deployment pipeline.

6. Mitigating the 12-factor principles concern
While NiFi isn't designed to fully adhere to 12-factor principles, you can adapt your processes to bridge the gap:
- Codebase: manage flow versions centrally using NiFi Registry.
- Config: externalize environment-specific configuration using Parameter Contexts or external configuration-management tools.
- Build, Release, Run: standardize your flows and deployment pipeline across environments.
- Disposability: test flows rigorously in QA to ensure they handle unexpected failures gracefully.

Hope these points give you a better picture and possibly an answer. Happy Hadooping
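As a rough illustration of the Registry-based promotion in point 2, here is a hedged sketch using the NiFi Toolkit CLI (the Registry URL, bucket/flow identifiers, and target NiFi URL are placeholders, not values from this thread):

```bash
# List the buckets available in the NiFi Registry
./bin/cli.sh registry list-buckets -u http://nifi-registry:18080

# Import a specific versioned flow into the target (e.g., prod) NiFi instance
./bin/cli.sh nifi pg-import -u https://prod-nifi:8443 \
  -b <bucket-id> -f <flow-id> -fv <version>
```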