Member since: 06-16-2020
Posts: 55
Kudos Received: 14
Solutions: 5

My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 865 | 10-23-2024 11:21 AM |
| | 829 | 10-22-2024 07:59 AM |
| | 830 | 10-22-2024 07:37 AM |
| | 488 | 10-21-2024 09:25 AM |
| | 2269 | 06-16-2023 07:23 AM |
03-11-2025
07:59 AM
Hi, I have a similar issue. When I use "Advanced" to check what output NiFi would provide, I get the output there, but when I check the queue after JoltTransformJSON has processed it, it says null. I changed "Jolt Transform DSL" from "chain" to "default", as that eliminated the space that was there between the square brackets and the elements in an array. This is my Jolt specification:

```
[
  {
    "operation": "shift",
    "spec": {
      "uid": "uid",
      "location_name": "name",
      "location_city": "city",
      "age_min": "age_min",
      "age_max": "age_max",
      "firstdate_begin": "begin",
      "lastdate_end": "end",
      "location_coordinates": {
        "lon": "location[0]",
        "lat": "location[1]"
      }
    }
  }
]
```

and this is the expected output:

```
{
  "uid": "54570223",
  "name": "Théâtre Plaza",
  "city": "Montréal",
  "age_min": null,
  "age_max": null,
  "begin": "2024-03-23T23:30:00+00:00",
  "end": "2024-03-24T01:00:00+00:00",
  "location": [ -73.603196, 45.536315 ]
}
```

EDIT: I edited my Jolt specification, and it looks like this now:

```
[
  {
    "operation": "shift",
    "spec": {
      "*": {
        "uid": "uid",
        "location_name": "location_name",
        "location_city": "location_city",
        "age_min": "age_min",
        "age_max": "age_max",
        "firstdate_begin": "firstdate_begin",
        "lastdate_end": "lastdate_end",
        "location_coordinates": {
          "lon": "location[0]",
          "lat": "location[1]"
        }
      }
    }
  }
]
```

I am getting the desired output:

```
{
  "uid": "70029814",
  "location_name": "Sanctuaire du Saint-Sacrement",
  "location_city": "Montreal",
  "age_min": null,
  "age_max": null,
  "firstdate_begin": "2019-04-13T18:00:00+00:00",
  "lastdate_end": "2019-04-14T10:00:00+00:00",
  "location": [-73.581721, 45.525243]
}
```

But the issue now is that after processing, the output looks something like this:

```
{
  "uid" : [ "54570223" ],
  "location_name" : [ "Théâtre Plaza" ],
  "location_city" : [ "Montréal" ],
  "age_min" : [ 16 ],
  "age_max" : [ 99 ],
  "firstdate_begin" : [ "2024-03-23T23:30:00+00:00" ],
  "lastdate_end" : [ "2024-03-24T01:00:00+00:00" ],
  "location" : [ [ -73.603196 ], [ 45.536315 ] ]
}
```

Working on keeping only the location as arrays. 😅😅😅
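A possible next step, assuming the incoming flowfile is a top-level JSON array: with the `"*"` wildcard, every matched element shifts into the same output keys, so Jolt collects the values into arrays. Referencing the matched array index with `[&1]` (and `[&2]` one level deeper) keeps each record separate, so only `location` stays an array. An untested sketch:

```
[
  {
    "operation": "shift",
    "spec": {
      "*": {
        "uid": "[&1].uid",
        "location_name": "[&1].location_name",
        "location_city": "[&1].location_city",
        "age_min": "[&1].age_min",
        "age_max": "[&1].age_max",
        "firstdate_begin": "[&1].firstdate_begin",
        "lastdate_end": "[&1].lastdate_end",
        "location_coordinates": {
          "lon": "[&2].location[0]",
          "lat": "[&2].location[1]"
        }
      }
    }
  }
]
```

This produces an array of records rather than a single object; splitting the array upstream (or unwrapping it afterwards) would be a separate step.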
03-02-2025
07:19 AM
@drewski7 The error message says there is no EntityManager with an actual transaction available. That suggests the code trying to persist the user isn't running within a transactional context. In Spring applications, methods that modify the database usually need to be annotated with `@Transactional` to ensure they run within a transaction.

Looking at the stack trace, the error occurs in `XUserMgr$ExternalUserCreator.createExternalUser`, which calls `UserMgr.createUser`, which in turn uses `BaseDao.create`. The `create` method in `BaseDao` is trying to persist an entity, but there is no active transaction, so the `createUser` method or the code calling it may not be properly transactional. This worked in version 2.4.0, so something must have changed in 2.5.0: perhaps the upgrade changed how transactions are managed, a method that was previously transactional no longer is, or the transaction boundaries have shifted.

Step 1: Verify Database Schema Compatibility

Ranger 2.5.0 may require schema updates. Ensure the database schema is compatible with the new version:

1. Check the upgrade documentation: review the Ranger 2.5.0 release notes for required schema changes. For example, when migrating from 2.4.0 to 2.5.0 you may need to run SQL scripts like x_portal_user_DDL.sql or apache-ranger-2.5.0-schema-upgrade.sql.
2. Run the schema upgrade scripts: locate them in the Ranger installation directory (ranger-admin/db/mysql/patches) and apply them:

```
mysql -u root -p ranger < apache-ranger-2.5.0-schema-upgrade.sql
```

3. Validate the schema: confirm that the x_portal_user table exists and has the expected columns (e.g., login_id, user_role).

Step 2: Check Transaction Management Configuration

The error suggests a missing `@Transactional` annotation or a misconfigured transaction manager in Ranger 2.5.0:

1. Review code/configuration changes: compare the transaction management configuration between Ranger 2.4.0 and 2.5.0. Key files:
   - ranger-admin/ews/webapp/WEB-INF/classes/conf/application.properties
   - ranger-admin/ews/webapp/WEB-INF/classes/spring-beans.xml
2. Ensure transactional annotations: in Ranger 2.5.0, the method createUser in UserMgr.java or its caller must be annotated with `@Transactional` so that database operations run in a transaction (see the minimal sketch at the end of this reply):

```java
@Transactional
public void createUser(...) { ... }
```

3. Debug transaction boundaries: enable transaction logging in log4j.properties to trace transaction activity:

```
log4j.logger.org.springframework.transaction=DEBUG
log4j.logger.org.springframework.orm.jpa=DEBUG
```

Step 3: Manually Create the User (Temporary Workaround)

If the user drew.nicolette is missing from x_portal_user, manually insert it into the database:

```sql
INSERT INTO x_portal_user (login_id, password, user_role, status)
VALUES ('drew.nicolette', 'LDAP_USER_PASSWORD_HASH_IF_APPLICABLE', 'ROLE_USER', 1);
```

Note: this bypasses the transaction error but is not a permanent fix.

Step 4: Verify LDAP Configuration

Ensure the LDAP settings in ranger-admin/ews/webapp/WEB-INF/classes/conf/ranger-admin-site.xml are correct for Ranger 2.5.0:

```xml
<property>
  <name>ranger.authentication.method</name>
  <value>LDAP</value>
</property>
<property>
  <name>ranger.ldap.url</name>
  <value>ldap://your-ldap-server:389</value>
</property>
```

Step 5: Check for Known Issues

1. Apache Ranger JIRA: search for issues like RANGER-XXXX related to transaction management in Ranger 2.5.0.
2. Apply patches: if a patch exists (e.g., for missing `@Transactional` annotations), apply it to the Ranger 2.5.0 codebase.

Step 6: Test with a New User

Attempt to log in with a different LDAP user to see whether the issue is specific to drew.nicolette or systemic. If the error persists for all users, focus on transaction configuration or schema issues. If only drew.nicolette fails, check for conflicts in the x_portal_user table (e.g., duplicate entries).

Final Checks

- Logs: monitor ranger-admin.log and catalina.out for transaction-related errors after applying fixes.
- Permissions: ensure the database user has write access to the x_portal_user table.
- Dependencies: confirm that the Spring and JPA library versions match Ranger 2.5.0 requirements.

Happy hadooping!
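To illustrate the pattern the Step 2 fix relies on, here is a minimal Spring/JPA sketch (class and field names are hypothetical, not Ranger's actual code):

```java
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.persistence.PersistenceContext;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Hypothetical entity standing in for Ranger's portal-user table mapping
@Entity
class PortalUser {
    @Id Long id;
    String loginId;
}

@Service
public class UserService {
    // Container-managed EntityManager: persist() only works inside an active transaction
    @PersistenceContext
    private EntityManager em;

    // Without @Transactional here (or on a caller), em.persist() fails with
    // "No EntityManager with actual transaction available for current thread"
    @Transactional
    public void createUser(PortalUser user) {
        em.persist(user);
    }
}
```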
02-20-2025
06:31 AM
I just know this from many years of setting up Ranger within Ambari with usersync. I doubt the current CDP docs will strictly call it out.
12-18-2024
08:50 AM
@drewski7 I have just picked up your ticket; I hope I can help you resolve this issue if it's still unresolved. There are a couple of configuration changes and implementations that have to be done.

1. Overview

OAuth allows Kafka clients to obtain access tokens from an external authentication provider (an OAuth provider) to authenticate with the Kafka broker. This process involves configuring the Kafka broker, the OAuth provider, and the Kafka clients.

2. Prerequisites

- A Kafka cluster with SASL/OAUTHBEARER enabled.
- An OAuth provider set up to issue access tokens.
- Kafka clients that support SASL/OAUTHBEARER.
- Required libraries for OAuth integration (e.g., kafka-clients, oauth2-client, or Keycloak adapters).

3. Procedure

Step 1: Configure the OAuth Provider

1. Set up an OAuth provider (e.g., Keycloak, Okta) to act as the identity provider (IdP).
2. Register a new client application for Kafka in the OAuth provider:
   - Set up a client ID and client secret for Kafka clients.
   - Configure the scopes, roles, or claims required for authorization.
   - Enable grant types like Client Credentials or Password (depending on your use case).
3. Note down the following details:
   - Authorization server URL (e.g., https://authlogin.northwind.com/token).
   - Client ID and client secret.

Step 2: Configure the Kafka Broker

1. Enable SASL/OAUTHBEARER authentication: edit the Kafka broker configuration (/config/server.properties):

```
sasl.enabled.mechanisms=OAUTHBEARER
listener.name.<listener-name>.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \
  oauth.token.endpoint.uri="https://auth.example.com/token" \
  oauth.client.id="kafka-broker-client-id" \
  oauth.client.secret="kafka-broker-client-secret" \
  oauth.scope="kafka-scope";
```

Replace <listener-name> with the listener in use (SASL_PLAINTEXT, SASL_SSL) as appropriate.

2. Configure ACLs (optional): if using authorization, configure ACLs to grant specific permissions to authenticated users.
3. Restart the Kafka broker to apply the changes:

```
sudo systemctl restart kafka
```

Step 3: Configure the Kafka Client

1. Add the required dependencies to your Kafka client application. For Java applications, add the Kafka and OAuth dependencies to your pom.xml or build.gradle. pom.xml example:

```xml
<dependency>
  <groupId>org.apache.kafka</groupId>
  <artifactId>kafka-clients</artifactId>
  <version>3.0.0</version>
</dependency>
<dependency>
  <groupId>com.nimbusds</groupId>
  <artifactId>oauth2-oidc-sdk</artifactId>
  <version>9.4</version>
</dependency>
```

2. Configure OAuth in the Kafka client: specify the SASL mechanism and the OAuth token endpoint in the client configuration:

```java
Properties props = new Properties();
props.put("bootstrap.servers", "broker1:9092,broker2:9092");
props.put("security.protocol", "SASL_SSL");
props.put("sasl.mechanism", "OAUTHBEARER");
props.put("sasl.jaas.config",
    "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required " +
    "oauth.token.endpoint.uri=\"https://auth.example.com/token\" " +
    "oauth.client.id=\"kafka-client-id\" " +
    "oauth.client.secret=\"kafka-client-secret\";");
```

3. Implement token retrieval (optional): use an external tool or library to retrieve and manage tokens if you need a custom implementation:

```
curl -X POST -d "grant_type=client_credentials&client_id=kafka-client-id&client_secret=kafka-client-secret" \
  https://auth.example.com/token
```

4. Create the Kafka producer/consumer: use the above configuration to initialize a Kafka producer or consumer (a fuller sketch follows below):

```java
KafkaProducer<String, String> producer = new KafkaProducer<>(props);
```

Step 4: Test the Authentication

Produce and consume messages to verify OAuth-based authentication:

```
kafka-console-producer.sh --broker-list <broker-address> --topic <topic-name> --producer.config <client-config>
kafka-console-consumer.sh --bootstrap-server <broker-address> --topic <topic-name> --consumer.config <client-config>
```

Ensure the logs indicate successful authentication using SASL/OAUTHBEARER.

Step 5: Monitor and Debug

- Check the Kafka broker logs for errors related to OAuth authentication.
- Verify token expiration and renewal mechanisms.
- Ensure the OAuth provider is reachable from the Kafka brokers and clients.

Happy Hadooping! I hope the above steps help in the diagnosis and resolution of your Kafka OAuth issue.
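As a self-contained usage sketch building on the client configuration above (broker address, credential values, and the topic name are placeholders, not values from this thread):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class OAuthProducerDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker1:9092");
        props.put("security.protocol", "SASL_SSL");
        props.put("sasl.mechanism", "OAUTHBEARER");
        // Same JAAS config as in Step 3 above
        props.put("sasl.jaas.config",
            "org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required "
            + "oauth.token.endpoint.uri=\"https://auth.example.com/token\" "
            + "oauth.client.id=\"kafka-client-id\" "
            + "oauth.client.secret=\"kafka-client-secret\";");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // try-with-resources closes the producer, flushing pending records on exit
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // "demo-topic" is a placeholder topic name
            producer.send(new ProducerRecord<>("demo-topic", "key", "hello, OAuth"));
        }
    }
}
```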
11-04-2024
06:02 AM
The problem was that my StandardRestrictedSSLContextService didn't include a keystore. For some reason, I thought it was just one-way SSL communication. Once I added the keystore to the context service, it was authenticating correctly! Thanks @MattWho!
10-29-2024
11:48 AM
1 Kudo
@Mikkkk Has the reply helped resolve your issue? If so, please mark the appropriate reply as the solution, as it will make it easier for others to find the answer in the future. Thanks.
10-25-2024
05:50 AM
2 Kudos
Finally, I replaced the LookupRecord processor with a group of ForkEnrichment/JoinEnrichment processors.
10-23-2024
12:49 PM
2 Kudos
Hi @drewski7, Now I have another minor problem... how do I send you a six-pack to thank you for the help??? 😉 I've been banging my head on this for days, and I'm still going through the logic to learn from it. Really appreciate the detailed explanation. 👍 🙂 Cheers! 🍻
10-22-2024
10:40 AM
Thanks Matt, I was able to resolve the issue with your PutFile solution.