Member since: 01-19-2017
Posts: 3679
Kudos Received: 632
Solutions: 372

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 880 | 06-04-2025 11:36 PM |
| | 1471 | 03-23-2025 05:23 AM |
| | 728 | 03-17-2025 10:18 AM |
| | 2620 | 03-05-2025 01:34 PM |
| | 1739 | 03-03-2025 01:09 PM |
03-02-2025
07:19 AM
@drewski7 The error message says there is no EntityManager with an actual transaction available, which means the code persisting the user is not running inside a transactional context. In Spring applications, methods that modify the database must run within a transaction, typically by being annotated with `@Transactional`. The stack trace shows the failure path: `XUserMgr$ExternalUserCreator.createExternalUser` calls `UserMgr.createUser`, which uses `BaseDao.create` to persist the entity, and at that point no transaction is active. Since this worked in 2.4.0, the 2.5.0 upgrade likely changed how transactions are managed: a method that was previously transactional may no longer be, or the transaction boundaries may have shifted.

Step 1: Verify Database Schema Compatibility

Ranger 2.5.0 may require schema updates. Ensure the database schema is compatible with the new version:

1. Check the upgrade documentation: review the Ranger 2.5.0 release notes for required schema changes. For example, migrating from 2.4.0 to 2.5.0 may require SQL scripts such as x_portal_user_DDL.sql or apache-ranger-2.5.0-schema-upgrade.sql.
2. Run the schema upgrade scripts: locate them in the Ranger installation directory (ranger-admin/db/mysql/patches) and apply them:

```
mysql -u root -p ranger < apache-ranger-2.5.0-schema-upgrade.sql
```

3. Validate the schema: confirm that the x_portal_user table exists and has the expected columns (e.g., login_id, user_role).

Step 2: Check Transaction Management Configuration

The error suggests a missing @Transactional annotation or a misconfigured transaction manager in Ranger 2.5.0:

1. Review code/configuration changes: compare the transaction management configuration between Ranger 2.4.0 and 2.5.0. Key files:
- ranger-admin/ews/webapp/WEB-INF/classes/conf/application.properties
- ranger-admin/ews/webapp/WEB-INF/classes/spring-beans.xml
2. Ensure transactional annotations: in Ranger 2.5.0, the createUser method in UserMgr.java (or its caller) must be annotated with @Transactional so its database operations run in a transaction:

```java
@Transactional
public void createUser(...) { ... }
```

3. Debug transaction boundaries: enable transaction logging in log4j.properties to trace transaction activity:

```
log4j.logger.org.springframework.transaction=DEBUG
log4j.logger.org.springframework.orm.jpa=DEBUG
```

Step 3: Manually Create the User (Temporary Workaround)

If the user drew.nicolette is missing from x_portal_user, manually insert it into the database:

```sql
INSERT INTO x_portal_user (login_id, password, user_role, status)
VALUES ('drew.nicolette', 'LDAP_USER_PASSWORD_HASH_IF_APPLICABLE', 'ROLE_USER', 1);
```

Note: this bypasses the transaction error but is not a permanent fix.

Step 4: Verify LDAP Configuration

Ensure the LDAP settings in ranger-admin/ews/webapp/WEB-INF/classes/conf/ranger-admin-site.xml are correct for Ranger 2.5.0:

```xml
<property>
  <name>ranger.authentication.method</name>
  <value>LDAP</value>
</property>
<property>
  <name>ranger.ldap.url</name>
  <value>ldap://your-ldap-server:389</value>
</property>
```

Step 5: Check for Known Issues

1. Apache Ranger JIRA: search for issues like RANGER-XXXX related to transaction management in Ranger 2.5.0.
2. Apply patches: if a patch exists (e.g., for missing @Transactional annotations), apply it to the Ranger 2.5.0 codebase.

Step 6: Test with a New User

Attempt to log in with a different LDAP user to determine whether the issue is specific to drew.nicolette or systemic. If the error occurs for all users, focus on transaction configuration or schema issues. If only drew.nicolette fails, check the x_portal_user table for conflicts (e.g., duplicate entries).

Final Checks
- Logs: monitor ranger-admin.log and catalina.out for transaction-related errors after applying fixes.
- Permissions: ensure the database user has write access to the x_portal_user table.
- Dependencies: confirm that the Spring and JPA library versions match the Ranger 2.5.0 requirements.

Happy hadooping
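The transaction logging in Step 2 can be enabled with a small idempotent snippet; the log4j.properties path below is an assumption and may differ in your Ranger install:

```shell
#!/bin/sh
# Idempotently enable Spring transaction DEBUG logging in a
# log4j.properties file. Safe to re-run: each logger line is
# appended only if it is not already present.
# LOG4J_CONF is an assumed path -- adjust to your Ranger install.
LOG4J_CONF="${LOG4J_CONF:-/opt/ranger-admin/ews/webapp/WEB-INF/log4j.properties}"

enable_tx_logging() {
    conf="$1"
    for line in \
        'log4j.logger.org.springframework.transaction=DEBUG' \
        'log4j.logger.org.springframework.orm.jpa=DEBUG'
    do
        # -x: whole-line match, -F: fixed string; append only if absent
        grep -qxF "$line" "$conf" || printf '%s\n' "$line" >> "$conf"
    done
}

if [ -f "$LOG4J_CONF" ]; then
    enable_tx_logging "$LOG4J_CONF"
fi
```

Restart ranger-admin after changing the logging configuration so the new levels take effect.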
02-12-2025
01:42 AM
@0tto Could you please share your NiFi logs? Happy hadooping
02-04-2025
06:35 AM
Check the Beeline console output and the HiveServer2 (HS2) logs to identify where the query gets stuck, then act accordingly.
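A quick way to see which phase a stuck query reached is to pull the query-lifecycle lines out of the HS2 log. This is a sketch: the log path is an assumption (it varies by distribution), and the grep patterns are based on the typical "Compiling command(queryId=...)" / "Executing command(queryId=...)" lines HiveServer2 writes:

```shell
#!/bin/sh
# Sketch: extract query-lifecycle lines from a HiveServer2 log to see
# which phase a stuck query reached (compiling vs. executing).
# HS2_LOG is an assumed path -- adjust to your deployment.
HS2_LOG="${HS2_LOG:-/var/log/hive/hiveserver2.log}"

query_phases() {
    # Keep only the lines marking query phase transitions.
    grep -E 'Compiling command|Executing command|Completed (compiling|executing)' "$1"
}

if [ -f "$HS2_LOG" ]; then
    query_phases "$HS2_LOG" | tail -n 20
fi
```

If the last line for your queryId is "Compiling command", the query never reached execution; if it is "Executing command" with no matching "Completed executing", it is stuck in the execution engine.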
01-27-2025
01:18 AM
@ose_gold The SFTP issues appear to stem from incorrect permissions and ownership in your Docker setup. Key issues:
- Root ownership of /home/foo instead of user foo
- Incorrect chroot directory permissions
- Docker volume mount permissions

You also mount a volume nifi-conf:/opt/nifi/nifi-current/conf in the NiFi service, but it is not declared in the top-level volumes section; the compose file below adds it. In addition, Docker Compose has newer file versions (e.g., 3.8), so it may be worth updating depending on the features you need.

```yaml
version: '3' # Docker Compose has newer versions, e.g. 3.8
services:
  nifi:
    image: apache/nifi:latest # consider pinning a specific version
    container_name: nifi
    ports:
      - "8089:8443"
      - "5656:5656"
    volumes:
      - nifi-conf:/opt/nifi/nifi-current/conf
    environment:
      NIFI_WEB_PROXY_HOST: localhost:8089
      SINGLE_USER_CREDENTIALS_USERNAME: admin
      SINGLE_USER_CREDENTIALS_PASSWORD: {xxyourpasswdxx}
  sftp:
    image: atmoz/sftp
    volumes:
      - ./sftp/upload:/home/foo/upload
    ports:
      - "2222:22"
    command: foo:pass:1001
    # Add these permissions
    user: "1001:1001"
    environment:
      - CHOWN_USERS=foo
      - CHOWN_DIRS=/home/foo
volumes:
  nifi-conf:
```

Before starting the containers, set the correct permissions on the host:

```
mkdir -p ./sftp/upload
chown -R 1001:1001 ./sftp/upload
chmod 755 ./sftp/upload
```

This configuration:
- Sets proper user/group ownership
- Maintains correct chroot permissions
- Ensures volume mount permissions are preserved
- Prevents permission conflicts between host and container

Happy Hadooping
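The host-side permission steps above can be verified before `docker compose up` with a small check; this is a sketch using GNU stat (Linux), and the 755 value and ./sftp/upload path are simply the ones from the steps above:

```shell
#!/bin/sh
# Sketch: confirm a host directory has the mode the sftp chroot setup
# expects before the containers start. Uses GNU stat (Linux).
check_upload_dir() {
    dir="$1"; want_mode="$2"
    mode=$(stat -c '%a' "$dir" 2>/dev/null) || return 1
    if [ "$mode" != "$want_mode" ]; then
        echo "WARN: $dir has mode $mode, expected $want_mode" >&2
        return 1
    fi
}

# Usage: run on the host before starting the containers
# check_upload_dir ./sftp/upload 755 || echo "fix permissions first"
```

Ownership (1001:1001) can be checked the same way with `stat -c '%u:%g'` if you also want to catch a missed chown.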
01-23-2025
09:15 AM
Hello @polingsky202, I'm facing the same problem and had the same errors in the logs when implementing HAProxy with 3 brokers. Have you solved this issue? Thank you for your help. Best regards.
01-20-2025
05:41 AM
@rsurti If SAML authentication works for LAN users but not for Wi-Fi users, even when both are on the same network, it suggests differences in how the network or the devices are configured for each connection type. Here's how to troubleshoot and resolve this:

1. DNS resolution: check whether Wi-Fi users can resolve the identity provider (IdP) and service provider (SP) URLs correctly:

```
nslookup idp.example.com
```

2. SAML traffic: capture SAML requests/responses using the browser developer tools (Network tab) and compare LAN vs. Wi-Fi. Look for differences in redirect URLs, assertions, and error codes. Misconfigured callback URLs are a common cause.
3. Device configuration: check whether device firewalls or VPNs interfere with SAML traffic over Wi-Fi, and ensure browser settings (e.g., cookie policies) do not block SAML cookies.
4. If possible, connect one user to LAN and Wi-Fi simultaneously to identify differences in routing or access.

Please revert. Happy hadooping
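The DNS check can be scripted so the same test runs identically on LAN and Wi-Fi; this is a sketch, and idp.example.com / sp.example.com are placeholders for your real IdP and SP hostnames:

```shell
#!/bin/sh
# Sketch: check whether the SAML endpoints resolve from this machine.
# Run once on LAN and once on Wi-Fi, then compare the output.
resolves() {
    # getent uses the system resolver path (hosts file + DNS),
    # the same path most applications use.
    getent hosts "$1" > /dev/null 2>&1
}

# Placeholder hostnames -- replace with your real IdP/SP hosts.
for host in idp.example.com sp.example.com; do
    if resolves "$host"; then
        echo "$host: resolves OK"
    else
        echo "$host: FAILED to resolve"
    fi
done
```

If a host resolves on LAN but not on Wi-Fi, compare the DNS servers each interface is using before digging into the SAML configuration itself.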
01-09-2025
03:09 AM
ZK 3.8.4 uses Logback for logging, which requires two libraries: logback-core-1.2.13.jar and logback-classic-1.2.13.jar (the missing jar in my case). One of them was missing from my bundle. I downloaded the missing jar, copied it into the zookeeper/lib/ directory, and restarted the service. This worked for me.

Steps: locate the existing logback jar, download the missing one, and copy it into that directory:

```
cd /opt/
wget https://repo1.maven.org/maven2/ch/qos/logback/logback-classic/1.2.13/logback-classic-1.2.13.jar
cksum logback-classic-1.2.13.jar | grep -i "103870831 232073"
chown root:root logback-classic-1.2.13.jar
cp logback-classic-1.2.13.jar /usr/odp/3.3.6.0-1/zookeeper/lib/
cp logback-classic-1.2.13.jar /usr/odp/3.3.6.0-1/cruise-control3/dependant-libs/
```
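The cksum check above can be wrapped in a helper so the copy only happens when the download is intact; a minimal sketch, reusing the CRC/size values from the steps above:

```shell
#!/bin/sh
# Sketch: verify a downloaded jar against a known cksum before copying
# it into zookeeper/lib/. Expected values are the ones from the check
# above (CRC 103870831, size 232073 for logback-classic-1.2.13.jar).
verify_jar() {
    file="$1"; expected_crc="$2"; expected_size="$3"
    # cksum prints: <crc> <size> <filename>
    set -- $(cksum "$file" 2>/dev/null)
    [ "$1" = "$expected_crc" ] && [ "$2" = "$expected_size" ]
}

# Usage:
# verify_jar logback-classic-1.2.13.jar 103870831 232073 \
#     && cp logback-classic-1.2.13.jar /usr/odp/3.3.6.0-1/zookeeper/lib/
```

This guards against a truncated wget download silently landing in the classpath.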
01-07-2025
11:23 PM
1 Kudo
Solved: simply switching to Java 11 resolves the issue. It is crucial to check the Java version of the client that initially loaded the data into the cluster; that is the key point.
01-06-2025
08:15 AM
@Shelton / @MattWho, my NiFi instance is behind a corporate proxy, and because of that, in production NiFi is not able to reach the Azure OIDC discovery URL. Could you please help me with this? Thanks, spiker
12-31-2024
09:47 AM
2 Kudos
@MrNicen This is a very common problem where the table gets stuck in a DISABLING state. Please work through these diagnostic and repair steps in order:

1. Verify the current state:

```
echo "scan 'hbase:meta'" | hbase shell
```

2. Force the table state change using HBCK2:

```
# Download HBCK2 if not already present
wget https://repository.apache.org/content/repositories/releases/org/apache/hbase/hbase-hbck2/2.0.2/hbase-hbck2-2.0.2.jar

# Set table to ENABLED state
hbase hbck -j ./hbase-hbck2-2.0.2.jar setTableState <table_name> ENABLED
```

3. If that doesn't work, try cleaning the znode:

```
# Connect to ZooKeeper
./zkCli.sh -server localhost:2181
# Check the table znode
ls /hbase/table/<table_name>
# Delete the table znode if present
rmr /hbase/table/<table_name>
```

4. If the issue persists, try manually cycling the table in the HBase shell:

```
hbase shell
# Disable table
disable '<table_name>'
# Wait a few seconds, then enable
enable '<table_name>'
# If that fails, disable by pattern (disable_all takes a regex)
disable_all '<table_name>'
```

5. If still stuck, try these repair commands:

```
# Clear the META table state
echo "put 'hbase:meta', '<table_name>', 'table:state', '\x08\x00'" | hbase shell
# Reassign the regions
hbase hbck -j ./hbase-hbck2-2.0.2.jar assigns <table_name>
```

6. As a last resort, try a full cleanup (destructive; only on a cluster you can rebuild):

```
# Stop HBase
./bin/stop-hbase.sh
# Clear ZooKeeper data
./zkCli.sh -server localhost:2181
rmr /hbase
# Remove the META directory
rm -rf /hbase/data/hbase/meta
# Start HBase
./bin/start-hbase.sh
# Recreate the table structure
hbase shell
create '<table_name>', {NAME => 'cf'} # Adjust column families as needed
```

If none of these steps work, we can try a more aggressive approach:

1. Back up your data:

```
hbase org.apache.hadoop.hbase.snapshot.ExportSnapshot -snapshot <snapshot_name> -copy-to hdfs://backup-cluster/hbase
```

2. Try a clean META rebuild:

```
# Stop HBase
./bin/stop-hbase.sh
# Clear META
rm -rf /hbase/data/default/hbase/meta
# Start HBase in repair mode
env HBASE_OPTS="-XX:+UseParNewGC -XX:+UseConcMarkSweepGC" ./bin/hbase org.apache.hadoop.hbase.util.hbck.OfflineMetaRepair
# Start HBase normally
./bin/start-hbase.sh
```

Additional troubleshooting tips:
- Check the HBase logs for specific errors: tail -f /var/log/hbase/hbase-master.log
- Verify cluster health: hbase hbck -details
- Monitor region transitions: echo "scan 'hbase:meta', {COLUMNS => 'info:regioninfo'}" | hbase shell

If you encounter any specific errors during these steps, please share them and I can provide more targeted solutions.
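When scanning hbase:meta in step 1, the table:state value is a raw protobuf-encoded byte string. A small helper can translate it so you can confirm the table really is stuck in DISABLING. This is a sketch: the mapping extends the \x08\x00 = ENABLED encoding used in the put command above, and the other enum values are an assumption you should verify against your HBase version:

```shell
#!/bin/sh
# Sketch: map the raw table:state value from hbase:meta to a readable
# name. \x08\x00 = ENABLED matches the put command above; the other
# values follow the same enum ordering and should be double-checked.
decode_table_state() {
    v="$1"
    if   [ "$v" = '\x08\x00' ]; then echo ENABLED
    elif [ "$v" = '\x08\x01' ]; then echo DISABLED
    elif [ "$v" = '\x08\x02' ]; then echo DISABLING
    elif [ "$v" = '\x08\x03' ]; then echo ENABLING
    else printf 'UNKNOWN (%s)\n' "$v"
    fi
}

# Usage against a live cluster (pulls value=... out of the scan output):
# echo "scan 'hbase:meta', {COLUMNS => 'table:state'}" | hbase shell \
#     | sed -n 's/.*value=\(\\x08\\x0.\).*/\1/p' \
#     | while read -r v; do decode_table_state "$v"; done
```

If the decoded state disagrees with what `list` / `is_enabled` report, that mismatch itself points at the stale metadata the HBCK2 steps are meant to fix.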