Member since
04-04-2022
188
Posts
5
Kudos Received
9
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 210 | 06-11-2025 02:18 AM |
| | 224 | 03-26-2025 01:54 PM |
| | 353 | 01-08-2025 02:51 AM |
| | 519 | 01-08-2025 02:46 AM |
| | 682 | 01-08-2025 02:40 AM |
08-23-2024
01:46 AM
1 Kudo
@eddy28 Have you configured this property properly?

```xml
<property>
  <name>ranger.plugin.hdfs.service.name</name>
  <value>hadoopdev</value> <!-- Replace with your Ranger service name -->
</property>
```

Do you see any exception after configuring the suggested property?
08-22-2024
05:31 AM
1 Kudo
@bigdatacm Can you try this? I hope it will work. First, download the JSON for the entity:

```
curl "https://URL/api/atlas/v2/entity/guid/7g5678h9-4003-407a-ac00-791c7c53e6d5" \
  -i -X GET --negotiate \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -u username:password -k >> test_oracle_tbl.json
```

Then modify test_oracle_tbl.json and make a partial update to the entity (note that the update is a PUT, not a GET):

```
curl "https://hostname:port/api/atlas/v2/entity/uniqueAttribute/type/datasetname?attr:qualifiedName=name" \
  -i -X PUT --negotiate \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -u username:password -d @test_oracle_tbl.json -k
```
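Between the GET and the PUT, the downloaded JSON has to be edited. A minimal Python sketch of that middle step, assuming the usual Atlas entity envelope with an `entity.attributes` map (the attribute name `description` and the sample payload are purely illustrative):

```python
import json

def set_entity_attribute(entity_json, attr, value):
    """Update one attribute in an Atlas entity payload (the shape returned
    by GET /api/atlas/v2/entity/guid/{guid}) and return the edited JSON."""
    doc = json.loads(entity_json)
    doc["entity"]["attributes"][attr] = value
    return json.dumps(doc)

# Simulated GET response body (heavily trimmed; real payloads carry many more fields).
downloaded = json.dumps({
    "entity": {
        "typeName": "hive_table",
        "attributes": {"qualifiedName": "db.tbl@cluster", "description": None},
    }
})

updated = set_entity_attribute(downloaded, "description", "curated table")
print(json.loads(updated)["entity"]["attributes"]["description"])  # curated table
```

The edited string is what you would save back to test_oracle_tbl.json before the PUT.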
08-22-2024
04:46 AM
1 Kudo
@eddy28 To enable audit logging for superuser actions, you need to update the HDFS configuration. Follow these steps:

1. Open the hdfs-site.xml file in the Hadoop configuration directory ($HADOOP_HOME/etc/hadoop).
2. Add the following properties to enable audit logging for superuser actions:

```xml
<property>
  <name>dfs.namenode.inode.attributes.provider.class</name>
  <value>org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer</value>
</property>
<property>
  <name>ranger.plugin.hdfs.service.name</name>
  <value>hadoopdev</value> <!-- Replace with your Ranger service name -->
</property>
```

3. Save the changes and restart the HDFS service for the new configuration to take effect.

With this configuration, superuser actions should generate audit logs, which will be visible in the Ranger UI alongside other HDFS actions. Note: please test this configuration in your UAT cluster first.
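As a quick sanity check after editing hdfs-site.xml, the two properties above can be verified programmatically. A minimal Python sketch (the inline XML fragment mirrors the properties suggested above; in practice you would read the real file):

```python
import xml.etree.ElementTree as ET

def read_hadoop_conf(xml_text):
    """Parse a Hadoop-style *-site.xml string into a {name: value} dict."""
    root = ET.fromstring(xml_text)
    return {p.findtext("name"): p.findtext("value") for p in root.iter("property")}

# Example hdfs-site.xml content matching the suggested configuration.
HDFS_SITE = """
<configuration>
  <property>
    <name>dfs.namenode.inode.attributes.provider.class</name>
    <value>org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer</value>
  </property>
  <property>
    <name>ranger.plugin.hdfs.service.name</name>
    <value>hadoopdev</value>
  </property>
</configuration>
"""

conf = read_hadoop_conf(HDFS_SITE)
print(conf["ranger.plugin.hdfs.service.name"])  # hadoopdev
```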
08-22-2024
04:33 AM
1 Kudo
@bigdatacm It may be that the Hive hook for Atlas is not properly set up. Ensure that the Atlas hook is correctly configured in the Hive configuration file (hive-site.xml), and check the Atlas and Hive logs for any errors or warnings related to the hook configuration.
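One concrete thing to check is that hive-site.xml registers the Atlas hook class (org.apache.atlas.hive.hook.HiveHook) in hive.exec.post.hooks. A small Python sketch of that check (the inline XML fragment is illustrative):

```python
import xml.etree.ElementTree as ET

def hive_hook_enabled(xml_text):
    """Return True if hive.exec.post.hooks includes the Atlas HiveHook class."""
    root = ET.fromstring(xml_text)
    for prop in root.iter("property"):
        if prop.findtext("name") == "hive.exec.post.hooks":
            hooks = (prop.findtext("value") or "").split(",")
            return any(h.strip() == "org.apache.atlas.hive.hook.HiveHook" for h in hooks)
    return False

# Example hive-site.xml fragment with the Atlas hook registered.
HIVE_SITE = """
<configuration>
  <property>
    <name>hive.exec.post.hooks</name>
    <value>org.apache.atlas.hive.hook.HiveHook</value>
  </property>
</configuration>
"""

print(hive_hook_enabled(HIVE_SITE))  # True
```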
08-21-2024
01:53 PM
When a table is dropped from Hive, the corresponding entity in Atlas should ideally be marked as deleted. However, there are a few potential reasons why the entity might still be shown as active:

1. Delayed synchronization
2. A configuration issue
08-21-2024
01:48 PM
@eddy28 If you are performing operations as the superuser (hdfs) and no audit logs are generated, it is likely because the superuser is bypassing the HDFS permissions and Ranger policies. The superuser has administrative privileges and can perform any action in HDFS without being subject to the policies defined in Ranger. By default, HDFS does not generate audit logs for actions performed by the superuser. If you want to track the activities of the superuser, you can enable audit logging specifically for the superuser.
08-21-2024
01:40 PM
@jhoney12 https://atlas.apache.org/api/v2/resource_EntityREST.html#resource_EntityREST_partialUpdateEntityAttrByGuid_PUT

Note: Entity Partial Update adds or updates an entity attribute identified by its GUID. It supports only primitive attribute types and entity references; it does not support updating complex types like arrays and maps, and null updates are not possible.
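Per the linked reference, this is a PUT to /api/atlas/v2/entity/guid/{guid}, with the attribute name passed as the `name` query parameter and the new value as the JSON request body. A sketch that just assembles the request pieces without sending anything (the base URL, GUID, and attribute name are placeholders):

```python
import json
from urllib.parse import urlencode

def build_partial_update(base_url, guid, attr, value):
    """Build (method, url, body) for Atlas 'partial update entity attribute
    by GUID': PUT /api/atlas/v2/entity/guid/{guid}?name={attr}."""
    url = f"{base_url}/api/atlas/v2/entity/guid/{guid}?" + urlencode({"name": attr})
    return "PUT", url, json.dumps(value)

method, url, body = build_partial_update(
    "https://atlas-host:21000", "some-guid", "description", "updated via REST")
print(method, url)
```

The tuple can then be handed to any HTTP client (curl, requests, urllib) to perform the actual call.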
08-21-2024
01:38 PM
@jhoney12 Yes, the Atlas REST API supports this, but you need to download the JSON for the entity first:

```
curl "https://URL/api/atlas/v2/entity/guid/7g5678h9-4003-407a-ac00-791c7c53e6d5" \
  -i -X GET --negotiate \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -u username:password -k >> test_oracle_tbl.json
```

Then modify test_oracle_tbl.json and make a partial update to the entity (note that the update is a PUT, not a GET):

```
curl "https://hostname:port/api/atlas/v2/entity/uniqueAttribute/type/datasetname?attr:qualifiedName=name" \
  -i -X PUT --negotiate \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -u username:password -d @test_oracle_tbl.json -k
```
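The update call targets the unique-attribute endpoint, whose URL is easy to get subtly wrong. A sketch that assembles it, including percent-encoding of the qualifiedName (the type name and qualifiedName values are placeholders):

```python
from urllib.parse import quote

def unique_attr_update_url(base_url, type_name, qualified_name):
    """URL for PUT /api/atlas/v2/entity/uniqueAttribute/type/{typeName}
    ?attr:qualifiedName={qualifiedName} (request body: the edited entity JSON)."""
    return (f"{base_url}/api/atlas/v2/entity/uniqueAttribute/type/{type_name}"
            f"?attr:qualifiedName={quote(qualified_name)}")

# Atlas qualifiedNames typically contain '@', which must be percent-encoded.
url = unique_attr_update_url("https://hostname:port", "hive_table", "db.tbl@cluster")
print(url)
```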
08-21-2024
01:14 PM
@bigdatacm One option is to delete the entity via the UI or REST API.
08-21-2024
01:03 PM
Currently, there is no option in Atlas to delete classifications from deleted entities through the UI. When an entity is deleted, its classification may still appear, but there is no direct method to remove it via the Atlas interface.

Workaround: To delete a classification from a deleted entity, you can use the following API command:

```
curl -iv -u {username} -X DELETE https://{atlashost:port}/api/atlas/v2/entity/guid/{guid}/classification/{Classification Name}
```

Example: if the username is admin, the Atlas host is cloudera.com, the port is 21000, the GUID is 95cdef42-8586-4933-b7b4-XXXXXXXXX, and the classification name is Test1, then the command will be:

```
curl -iv -u admin -X DELETE https://cloudera.com:21000/api/atlas/v2/entity/guid/95cdef42-8586-4933-b7b4-d4942cbd5afb/classification/Test1
```

The expected response should be:

```
HTTP/1.1 204 No Content
```

You can find the GUID of historical entities from the following resources:
- https://atlas.apache.org/api/v2/resource_EntityREST.html#resource_EntityREST_deleteClassificationByUniqueAttribute_DELETE
- https://community.cloudera.com/t5/Support-Questions/How-do-I-delete-an-Atlas-tag-which-are-associated-with/m-p/203798

Improvement suggestions:
- Investigate the cause of classifications persisting after the associated entity is deleted.
- Add a UI feature allowing users to delete classifications for deleted entities directly.
- Implement a confirmation prompt to avoid accidental deletions.

These improvements would streamline the process, ensure consistency, and provide a more user-friendly experience, reducing the reliance on API calls.
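The DELETE URL from the workaround can be assembled from its parts. Using the example values from the post, this sketch reproduces the same URL (only the URL is built; no request is sent):

```python
from urllib.parse import quote

def classification_delete_url(host, port, guid, classification):
    """URL for DELETE /api/atlas/v2/entity/guid/{guid}/classification/{name}."""
    return (f"https://{host}:{port}/api/atlas/v2/entity/guid/{guid}"
            f"/classification/{quote(classification)}")

url = classification_delete_url(
    "cloudera.com", 21000, "95cdef42-8586-4933-b7b4-d4942cbd5afb", "Test1")
print(url)
```

Quoting the classification name matters when it contains spaces or special characters.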