Member since: 04-04-2022
Posts: 134
Kudos Received: 5
Solutions: 3
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 106 | 01-08-2025 02:46 AM
 | 3059 | 08-22-2024 05:31 AM
 | 288 | 08-21-2024 12:49 PM
01-22-2025
05:25 AM
Hello @DreamDelerium Thanks for sharing this question with us. I checked both datasources, and the lineage data should be the same for both, because datasystem_datatransfer is part of datasystem_datasource, so the origin of the lineage data is the same. Now to your first question: why would creating this second lineage impact the first? No, it couldn't impact it. Please let me know if you need any clarification on the above.
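If it helps to verify this yourself, you can pull the lineage graph for each datasource and diff the output. A minimal sketch using the Atlas v2 lineage API, assuming a kerberized cluster; the host, port, and GUIDs below are placeholders:

# Fetch lineage for each datasource by GUID (host/port/GUIDs are placeholders)
curl -s --negotiate -u : -H 'Accept: application/json' \
  "https://atlas-host:31443/api/atlas/v2/lineage/<guid-of-first-datasource>" > lineage_1.json
curl -s --negotiate -u : -H 'Accept: application/json' \
  "https://atlas-host:31443/api/atlas/v2/lineage/<guid-of-second-datasource>" > lineage_2.json
# Compare the two lineage graphs key-by-key (jq assumed installed)
diff <(jq -S . lineage_1.json) <(jq -S . lineage_2.json)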
01-08-2025
02:51 AM
Error Code: ATLAS-404-00-007 "Invalid instance creation/updation parameters passed: type_name.entity_name: mandatory attribute value missing in type type_name."

This error indicates that when creating or updating an entity (likely in Apache Atlas or a similar system), a required attribute value for that entity is missing. Specifically, the entity's type (indicated as type_name.entity_name) is missing a mandatory attribute value defined for that type.

Error Code: ATLAS-400-00-08A

This error typically occurs when you're trying to upload or import a ZIP file that is either empty or does not contain any valid data. Verify that the ZIP file you're attempting to upload actually contains data. Check the contents of the file and ensure that it's not empty. If it should contain data, try recreating the ZIP file or ensure it's properly packaged before importing.
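For the first error, the fix is to populate every mandatory attribute for the type; for most built-in types that is at least qualifiedName and name. A minimal sketch of a valid create call against the Atlas v2 entity API, with placeholder host, credentials, and values, using hdfs_path only as an example type:

# Create an entity with all mandatory attributes populated (host and values are placeholders)
curl -s --negotiate -u : -X POST \
  -H 'Content-Type: application/json' \
  "https://atlas-host:31443/api/atlas/v2/entity" \
  -d '{
        "entity": {
          "typeName": "hdfs_path",
          "attributes": {
            "qualifiedName": "hdfs://namenode:8020/data/example@cluster",
            "name": "example",
            "path": "hdfs://namenode:8020/data/example"
          }
        }
      }'

You can check which attributes are mandatory for a given type via GET /api/atlas/v2/types/typedef/name/<type-name> and look for attributes with isOptional set to false.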
01-08-2025
02:46 AM
Hello @DreamDelerium Thanks for sharing this question with us. I checked both datasources, and the lineage data should be the same for both, because datasystem_datatransfer is part of datasystem_datasource, so the origin of the lineage data is the same. Now to your first question: why would creating this second lineage impact the first? No, it couldn't impact it. Please let me know if you need any clarification on the above.
01-08-2025
02:40 AM
@dhughes20 Please check this Jira: https://issues.apache.org/jira/browse/ATLAS-1729
01-08-2025
02:38 AM
@dhughes20 This looks like a bug; please check https://issues.apache.org/jira/browse/ATLAS-3958
08-28-2024
08:57 AM
@hadoopranger Is your HBase RegionServer healthy? Execute this command to get the HBase region health status: hbase hbck -details &> /tmp/hbck_details_$(date +"%Y_%m_%d_%H_%M_%S").txt
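Once that finishes, a quick way to pull the problem lines out of the report (the file path matches the command above):

# Scan the hbck report for errors and inconsistencies
grep -iE "inconsisten|ERROR" /tmp/hbck_details_*.txt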
08-23-2024
01:59 AM
1 Kudo
@Juanes The error message suggests that the system is not registered to Red Hat Subscription Management and recommends using subscription-manager to register. Additionally, it states that there is an issue with finding a valid baseurl for the cloudera-manager repository. To resolve this issue, you can follow these steps:

1. Register the system with Red Hat Subscription Management using the subscription-manager tool. Run the following command as root or with sudo privileges, then follow the prompts and provide the necessary information:
subscription-manager register

2. Once the system is registered, enable the required repository:
subscription-manager repos --enable=<repository-name>
Replace <repository-name> with the specific repository name you want to enable, such as 'rhel-7-server-rpms' or 'rhel-7-server-optional-rpms'. You may also need to enable additional repositories depending on your requirements.

3. After enabling the repository, try running the command again. If you still encounter the same error related to the cloudera-manager repository, check the repository configuration file for any issues. The configuration file is usually located in the '/etc/yum.repos.d/' directory; check that the 'baseurl' parameter is correctly set (see the sketch below).
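As a reference, here is a minimal sketch of what a working repo file might look like. The exact baseurl path, version, and GPG key depend on your Cloudera Manager release and credentials, so treat every value below as a placeholder:

# Recreate the Cloudera Manager repo file with a valid baseurl (all values are placeholders)
sudo tee /etc/yum.repos.d/cloudera-manager.repo > /dev/null <<'EOF'
[cloudera-manager]
name=Cloudera Manager
baseurl=https://archive.cloudera.com/p/cm7/<version>/redhat7/yum/
gpgkey=https://archive.cloudera.com/p/cm7/<version>/redhat7/yum/RPM-GPG-KEY-cloudera
gpgcheck=1
enabled=1
EOF
# Refresh metadata so yum picks up the new baseurl
sudo yum clean all && sudo yum makecache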
08-23-2024
01:46 AM
1 Kudo
@eddy28 Have you configured this property properly? "<name>ranger.plugin.hdfs.service.name</name> <value>hadoopdev</value> <!-- Replace with your Ranger service name -->" Did you see any exception after configuring the suggested property?
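If you want to double-check that the value actually landed in the active client configuration, a quick sketch (the config directory is an assumption; adjust for your deployment):

# Confirm the Ranger service name property is present in the active Hadoop config (path is an assumption)
grep -A1 "ranger.plugin.hdfs.service.name" /etc/hadoop/conf/*.xml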
08-22-2024
05:31 AM
1 Kudo
@bigdatacm Can you try this? I hope it will work. First, export the entity to a JSON file (drop -i here so response headers don't end up in the JSON):

curl "https://URL/api/atlas/v2/entity/guid/7g5678h9-4003-407a-ac00-791c7c53e6d5" \
-X GET --negotiate \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-u username:password -k >> test_oracle_tbl.json

Then modify test_oracle_tbl.json and make a partial update to the entity. Note this second call must be a PUT, not a GET, since it sends a request body:

curl "https://hostname:port/api/atlas/v2/entity/uniqueAttribute/type/<typeName>?attr:qualifiedName=<qualifiedName>" \
-i -X PUT --negotiate \
-H 'Content-Type: application/json' \
-H 'Accept: application/json' \
-u username:password -d @test_oracle_tbl.json -k
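If editing the exported JSON by hand is error-prone, here is a small sketch using jq to change a single attribute before the PUT (jq assumed available; the description attribute is just an illustration):

# Update one attribute in the exported entity JSON (attribute name is illustrative)
jq '.entity.attributes.description = "updated via REST"' test_oracle_tbl.json > tmp.json && mv tmp.json test_oracle_tbl.json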
08-22-2024
04:46 AM
1 Kudo
@eddy28 To enable audit logging for superuser actions, you need to update the HDFS configuration. Follow these steps:

1. Open the hdfs-site.xml file in the Hadoop configuration directory ($HADOOP_HOME/etc/hadoop).

2. Add the following properties to enable audit logging for superuser actions:

<property>
  <name>dfs.namenode.inode.attributes.provider.class</name>
  <value>org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer</value>
</property>
<property>
  <name>ranger.plugin.hdfs.service.name</name>
  <value>hadoopdev</value> <!-- Replace with your Ranger service name -->
</property>

3. Save the changes and restart the HDFS service for the new configuration to take effect.

With this configuration, superuser actions should generate audit logs, which will be visible in the Ranger UI alongside other HDFS actions. Note: please test this configuration in your UAT cluster first.
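After the restart, a quick smoke test you could run; the user and path below are assumptions, and any superuser filesystem action should do:

# Perform a superuser action that should appear in the Ranger audit UI (user/path are placeholders)
sudo -u hdfs hdfs dfs -mkdir -p /tmp/audit_smoke_test
sudo -u hdfs hdfs dfs -rm -r /tmp/audit_smoke_test
# Then check Ranger UI -> Audit -> Access for the corresponding hdfs events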