Member since: 04-04-2022
Posts: 112
Kudos Received: 5
Solutions: 2
My Accepted Solutions
Title | Views | Posted |
---|---|---|
 | 2358 | 08-22-2024 05:31 AM |
 | 175 | 08-21-2024 12:49 PM |
08-29-2024
06:58 PM
2 Kudos
The HBase Master was stuck in the initializing state because there were many server crash recovery processes running due to the HBase Master failure. After clearing the /hbase znode from zkCli, the issue was resolved.
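For reference, a minimal sketch of clearing the /hbase znode with the ZooKeeper CLI, assuming the default znode path /hbase and a placeholder ZooKeeper host/port; stop HBase first and adjust for your cluster:

# Connect to a ZooKeeper server in the quorum (host and port are placeholders)
zookeeper-client -server zk-host.example.com:2181

# Inside the zkCli shell, recursively delete the HBase znode
# (older ZooKeeper releases use "rmr /hbase" instead of "deleteall")
deleteall /hbase

# Restart the HBase Master so it recreates its znodes cleanly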
08-26-2024
02:15 AM
2 Kudos
Yes, it works with the PUT command in place of the GET command.
08-25-2024
11:46 PM
3 Kudos
Hi All, I'll clarify the workaround in case it helps anyone. The wizard installer is composed of two phases, defined in the "Select Repository" screen:

[Phase 1] The Cloudera Manager processes (JDK if needed, cloudera-scm-agent, Cloudera daemons)
[Phase 2] The Runtime (the roles like Impala, Kudu, Hive...)

The main issue was that the install attempt for Phase 1 did not take into account the proxy added for the Runtime repo, so it failed every time, even after setting the proxy in /etc/bashrc so that any user could use it. Here are two solutions (see the sketch after this post):

1. Manually install the packages required by Phase 1 using /etc/yum.repos.d/cloudera-manager.repo (I used yum): openjdk8-1.8.0_372_cloudera-1.x86_64 and cloudera-manager-agent.x86_64 on all servers. Then retry the wizard; it will detect that the packages are already installed and jump to the Runtime part.
2. Set the proxy in /etc/yum.conf (proxy=http://user:pass@proxy-IP:8080) and disable all other repos (I had to do this to keep yum from failing).

NOTE: The installer may fail because cloudera-manager-agent.x86_64 has many dependencies, and some of them might be in the OS repo (this is why I installed it manually).
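A minimal sketch of the two options above, assuming the package names quoted in the post, a placeholder proxy at proxy-IP:8080, and a repo id of cloudera-manager; adjust credentials, host, and repo id for your environment:

# Option 1: manually install the Phase 1 packages on every server,
# using the repo defined in /etc/yum.repos.d/cloudera-manager.repo
yum install -y openjdk8-1.8.0_372_cloudera-1.x86_64 cloudera-manager-agent.x86_64

# Option 2: make yum itself use the proxy and disable the other repos
echo 'proxy=http://user:pass@proxy-IP:8080' >> /etc/yum.conf
yum-config-manager --disable "*"
yum-config-manager --enable cloudera-manager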
08-23-2024
02:50 AM
1 Kudo
Hi @vats, Yes, I have added the configs properly. Kindly see the screenshot below. My service name is also hadoopdev. Regards, Aditya
08-22-2024
04:33 AM
1 Kudo
@bigdatacm Maybe the configuration for the Hive hook in Atlas is not properly set up. Ensure that the Atlas hook is correctly configured in the Hive configuration file (hive-site.xml). Check for any errors or warnings related to the hook configuration in the Atlas and Hive logs.
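For example, the Atlas Hive hook is registered through the hive.exec.post.hooks property; a minimal hive-site.xml sketch, assuming the standard Apache Atlas hook class (the remaining hook settings, such as the Atlas endpoint, live in atlas-application.properties):

<property>
  <name>hive.exec.post.hooks</name>
  <value>org.apache.atlas.hive.hook.HiveHook</value>
</property>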
08-21-2024
01:03 PM
Currently, there is no option in Atlas to delete classifications from deleted entities through the UI. When an entity is deleted, its classification may still appear, but there is no direct method to remove it via the Atlas interface.

Workaround: To delete a classification from a deleted entity, you can use the following API command:

curl -iv -u {username} -X DELETE https://{atlashost:port}/api/atlas/v2/entity/guid/{guid}/classification/{Classification Name}

Example:
If:
Username: admin
Atlas Host: cloudera.com
Port: 21000
GUID: 95cdef42-8586-4933-b7b4-XXXXXXXXX
Classification Name: Test1

Then, the command will be:

curl -iv -u admin -X DELETE https://cloudera.com:21000/api/atlas/v2/entity/guid/95cdef42-8586-4933-b7b4-d4942cbd5afb/classification/Test1

The expected response should be:
HTTP/1.1 204 No Content

You can find the GUID of historical entities from the following resources:
=> https://atlas.apache.org/api/v2/resource_EntityREST.html#resource_EntityREST_deleteClassificationByUniqueAttribute_DELETE
=> https://community.cloudera.com/t5/Support-Questions/How-do-I-delete-an-Atlas-tag-which-are-associated-with/m-p/203798

Improvement Suggestions:
- Investigate the cause of classifications persisting after the associated entity is deleted.
- Add a UI feature allowing users to delete classifications for deleted entities directly.
- Implement a confirmation prompt to avoid accidental deletions.

These improvements will streamline the process, ensure consistency, and provide a more user-friendly experience, reducing the reliance on API calls.
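As a supplement, one way to look up the GUID of a (possibly deleted) entity is the Atlas v2 basic search API with excludeDeletedEntities=false; this is a hedged sketch, and the host, port, credentials, type name, and query value below are placeholders:

curl -u admin -X GET 'https://cloudera.com:21000/api/atlas/v2/search/basic?typeName=hive_table&query=my_table&excludeDeletedEntities=false&limit=10'

Each entry in the returned entities array carries a guid and a status field (DELETED for removed entities), which can then be used in the DELETE call above.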
08-21-2024
12:49 PM
If you import the Hive table through a script, the lineage data will not be visible. To view the lineage data, the metadata must be synced automatically.
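For context, a hedged example of such a scripted import is the Atlas bulk-import utility, which loads existing Hive metadata but does not generate lineage; lineage is only captured by the Hive hook when queries actually run. The path below is a placeholder and varies by distribution:

# Bulk-import existing Hive metadata into Atlas; this does not create lineage
<atlas-home>/hook-bin/import-hive.sh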