Member since: 04-04-2022
Posts: 112
Kudos Received: 5
Solutions: 2
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 2358 | 08-22-2024 05:31 AM
 | 174 | 08-21-2024 12:49 PM
08-28-2024
08:57 AM
@hadoopranger Is your HBase RegionServer healthy? Run this command to get a RegionServer health report:

hbase hbck -details &> /tmp/hbck_details_$(date +"%Y_%m_%d_%H_%M_%S").txt
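Once the report is written, you can skim it for problems. Below is a minimal sketch of that check; the report path and sample contents are illustrative assumptions, not real hbck output.

```shell
# Sketch: summarize an `hbase hbck -details` report file.
# The path and sample contents below are assumptions for illustration;
# a real report comes from running the hbck command above.
REPORT=/tmp/hbck_details_sample.txt

cat > "$REPORT" <<'EOF'
Version: 2.1.0
Number of live region servers: 4
Number of regions: 42
0 inconsistencies detected.
Status: OK
EOF

# Pull out the inconsistency count and overall status for a quick check.
grep -E 'inconsistencies detected|^Status:' "$REPORT"
```

Anything other than "0 inconsistencies detected." is worth investigating before restarting the RegionServer.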
08-23-2024
01:59 AM
1 Kudo
@Juanes The error message indicates that the system is not registered with Red Hat Subscription Management and that yum cannot find a valid baseurl for the cloudera-manager repository. To resolve this, follow these steps:

1. Register the system with Red Hat Subscription Management. Run the following command as root or with sudo privileges, then follow the prompts and provide the necessary information:
subscription-manager register

2. Once the system is registered, enable the required repository:
subscription-manager repos --enable=<repository-name>
Replace <repository-name> with the specific repository you want to enable, such as rhel-7-server-rpms or rhel-7-server-optional-rpms. You may need to enable additional repositories depending on your requirements.

3. After enabling the repository, try running the original command again. If you still encounter the same error about the cloudera-manager repository, check its repository configuration file, usually located in the /etc/yum.repos.d/ directory, and verify that the baseurl parameter is set correctly.
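As a quick way to do the check in step 3, here is a hedged sketch that verifies a repo file defines a baseurl; the file path and contents are made-up examples, not the real Cloudera repo definition.

```shell
# Sketch: verify a yum repo file defines a baseurl.
# The path and contents below are illustrative assumptions.
REPO=/tmp/cloudera-manager-sample.repo

cat > "$REPO" <<'EOF'
[cloudera-manager]
name=Cloudera Manager
baseurl=https://archive.example.com/cm/redhat7/yum/
gpgcheck=1
enabled=1
EOF

# grep -L prints files that LACK a baseurl line; no output means all good.
grep -L '^baseurl=' "$REPO" || true
# Show the configured baseurl for a quick eyeball check.
grep '^baseurl=' "$REPO"
```

On a real system you would point this at /etc/yum.repos.d/*.repo instead of the sample file.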
08-23-2024
01:46 AM
1 Kudo
@eddy28 Have you configured this property properly?

<property>
  <name>ranger.plugin.hdfs.service.name</name>
  <value>hadoopdev</value> <!-- Replace with your Ranger service name -->
</property>

Do you see any exception after configuring the suggested property?
08-22-2024
05:31 AM
1 Kudo
@bigdatacm Can you try this? I hope it will work. First download the entity JSON (dropping -i so that response headers do not end up in the JSON file):

curl "https://URL/api/atlas/v2/entity/guid/7g5678h9-4003-407a-ac00-791c7c53e6d5" \
  -X GET --negotiate \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -u username:password -k >> test_oracle_tbl.json

Then modify test_oracle_tbl.json and make a partial update to the entity (note that a partial update is a PUT, not a GET):

curl "https://hostname:port/api/atlas/v2/entity/uniqueAttribute/type/datasetname?attr:qualifiedName=name" \
  -X PUT --negotiate \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -u username:password -d @test_oracle_tbl.json -k
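The two curl calls can also be wrapped in a small script. This is only a sketch: the endpoint, GUID, credentials, and file names are placeholders taken from the example above, and it prints the commands to a file for review instead of executing them.

```shell
# Sketch of the fetch-then-update Atlas workflow; all values are placeholders.
ATLAS="https://URL"                           # placeholder Atlas endpoint
GUID="7g5678h9-4003-407a-ac00-791c7c53e6d5"   # example GUID from the post
OUT="test_oracle_tbl.json"

# Step 1: download the entity JSON by GUID.
fetch_cmd="curl -k --negotiate -u username:password -X GET \
  -H 'Accept: application/json' \
  '$ATLAS/api/atlas/v2/entity/guid/$GUID' -o $OUT"

# Step 2: after editing $OUT, push it back with PUT (not GET).
update_cmd="curl -k --negotiate -u username:password -X PUT \
  -H 'Content-Type: application/json' \
  -d @$OUT '$ATLAS/api/atlas/v2/entity/guid/$GUID'"

# Dry run: write the commands out so they can be reviewed before running.
echo "$fetch_cmd" > /tmp/atlas_cmds.txt
echo "$update_cmd" >> /tmp/atlas_cmds.txt
cat /tmp/atlas_cmds.txt
```

Remove the dry-run step and run the commands directly once the placeholders are filled in for your cluster.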
08-22-2024
04:46 AM
1 Kudo
@eddy28 To enable audit logging for superuser actions, you need to update the HDFS configuration. Follow these steps:

1. Open the hdfs-site.xml file in the Hadoop configuration directory ($HADOOP_HOME/etc/hadoop).
2. Add the following properties to enable audit logging for superuser actions:

<property>
  <name>dfs.namenode.inode.attributes.provider.class</name>
  <value>org.apache.ranger.authorization.hadoop.RangerHdfsAuthorizer</value>
</property>
<property>
  <name>ranger.plugin.hdfs.service.name</name>
  <value>hadoopdev</value> <!-- Replace with your Ranger service name -->
</property>

3. Save the changes and restart the HDFS service for the new configuration to take effect.

With this configuration, superuser actions should generate audit logs, which will be visible in the Ranger UI alongside other HDFS actions.

Note: please test this configuration on your UAT cluster first.
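Once the plugin is active, superuser operations should show up in the NameNode audit stream. Below is a hedged sketch of filtering such entries; the log path and the sample audit line are invented for illustration and will differ on a real cluster.

```shell
# Sketch: filter HDFS audit entries for the hdfs superuser.
# The path and the sample audit line below are assumptions for illustration.
LOG=/tmp/hdfs-audit-sample.log

cat > "$LOG" <<'EOF'
2024-08-22 04:46:01,123 INFO FSNamesystem.audit: allowed=true ugi=hdfs (auth:SIMPLE) ip=/10.0.0.5 cmd=delete src=/tmp/old dst=null perm=null proto=rpc
EOF

# Show only events performed by the hdfs superuser.
grep 'ugi=hdfs ' "$LOG"
```

On a real cluster you would grep the NameNode's hdfs-audit.log (or check the Ranger UI) rather than a sample file.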
08-22-2024
04:33 AM
1 Kudo
@bigdatacm The configuration for the Hive hook in Atlas may not be set up properly. Ensure that the Atlas hook is correctly configured in the Hive configuration file (hive-site.xml), and check the Atlas and Hive logs for any errors or warnings related to the hook configuration.
08-21-2024
01:53 PM
When a table is dropped from Hive, the corresponding entity in Atlas should ideally be marked as deleted. However, there are a few potential reasons why the entity might still be shown as active:
1. Delayed synchronization
2. A configuration issue
08-21-2024
01:48 PM
@eddy28 If you are performing operations as the superuser (hdfs) and no audit logs are generated, it is likely because the superuser is bypassing the HDFS permissions and Ranger policies. The superuser has administrative privileges and can perform any action in HDFS without being subject to the policies defined in Ranger. By default, HDFS does not generate audit logs for actions performed by the superuser. If you want to track the activities of the superuser, you can enable audit logging specifically for the superuser.
08-21-2024
01:40 PM
@jhoney12 https://atlas.apache.org/api/v2/resource_EntityREST.html#resource_EntityREST_partialUpdateEntityAttrByGuid_PUT
Note: Entity Partial Update adds or updates an entity attribute identified by its GUID. It supports only primitive attribute types and entity references; it does not support updating complex types such as arrays and maps, and null updates are not possible.
08-21-2024
01:38 PM
@jhoney12 Yes, the Atlas REST API supports this, but you need to download the JSON for the entity first (dropping -i so that response headers do not end up in the JSON file):

curl "https://URL/api/atlas/v2/entity/guid/7g5678h9-4003-407a-ac00-791c7c53e6d5" \
  -X GET --negotiate \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -u username:password -k >> test_oracle_tbl.json

Then modify test_oracle_tbl.json and make a partial update to the entity (note that a partial update is a PUT, not a GET):

curl "https://hostname:port/api/atlas/v2/entity/uniqueAttribute/type/datasetname?attr:qualifiedName=name" \
  -X PUT --negotiate \
  -H 'Content-Type: application/json' \
  -H 'Accept: application/json' \
  -u username:password -d @test_oracle_tbl.json -k