Member since: 04-04-2022
Posts: 188
Kudos Received: 5
Solutions: 9
My Accepted Solutions
Title | Views | Posted
---|---|---
 | 210 | 06-11-2025 02:18 AM
 | 224 | 03-26-2025 01:54 PM
 | 353 | 01-08-2025 02:51 AM
 | 520 | 01-08-2025 02:46 AM
 | 682 | 01-08-2025 02:40 AM
06-11-2025
02:18 AM
Hello @Artem_Kuzin I looked into this issue and it appears to be a bug in CDP version 7.3.1, which has been resolved in version 7.3.2.0.
03-26-2025
01:54 PM
Hello @snm1523 Add a "Connection: close" header to your curl requests to ensure the connection is closed after each request. Alternatively, use --no-keepalive in the curl command to prevent persistent connections. If necessary, adjust the server settings to force connections to close after each request.
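A minimal sketch of both options, assuming a plain HTTP endpoint; the host and path are placeholders, not from the original thread:

```bash
# Option 1: send an explicit Connection: close header with each request
curl -H "Connection: close" "http://example-host:8080/api/endpoint"

# Option 2: disable connection reuse entirely for this invocation
curl --no-keepalive "http://example-host:8080/api/endpoint"
```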
03-26-2025
01:51 PM
Hello @nowy19 Thanks for posting your query; here is a detailed answer. It seems that Atlas checks whether the relationship exists within the uploaded batch, rather than between the batch and already uploaded entities. There are a few approaches you could consider to avoid timeouts during bulk uploads (see the sketch after this list):

- Upload related entities in separate batches: It is possible to upload related entities in separate batches, but you need to ensure that dependencies are respected between batches. If relationships between entities need to be established, upload them in an order that allows the relationships to be checked and linked after the entities themselves are uploaded.
- Batch size management: If timeouts are an issue, consider reducing the batch size for uploads. Smaller batches reduce the load on the system and help avoid timeouts; this may involve splitting larger datasets into smaller, more manageable chunks.
- Optimize Atlas configuration: Adjusting some Atlas configurations, such as increasing the batch size limit or optimizing the backing database (e.g., with indexing), may help handle larger uploads more efficiently.
- Asynchronous upload strategy: If possible, upload entities asynchronously to avoid long-running operations that can lead to timeouts. This lets the system handle multiple requests in parallel without being overwhelmed.
- Increase timeout settings: If you still encounter timeouts during bulk uploads, look into raising the timeout settings for the upload process, either at the Atlas server or API level, if that is a feasible option.

If you want to upload everything in one batch but avoid timeouts, breaking the process into smaller, logical steps while maintaining the required relationships is usually the most effective approach.
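As a rough illustration of the batch-size approach, here is a minimal sketch that posts pre-split batches to the Atlas bulk entity endpoint; the host, credentials, and batch file names are placeholders, and the split into batch-*.json files is assumed to have been done beforehand:

```bash
# Each batch-*.json file is assumed to be shaped as {"entities": [ ... ]},
# with related entities kept in the same batch (or ordered so that
# dependencies are uploaded before the entities that reference them).
for batch in batch-*.json; do
  curl -s -u admin:admin \
       -H "Content-Type: application/json" \
       -X POST "http://atlas-host:21000/api/atlas/v2/entity/bulk" \
       -d @"${batch}"
done
```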
01-22-2025
05:25 AM
Hello @DreamDelerium Thanks for sharing this question with us. I checked both datasources, and the lineage data should be the same for both, because datasystem_datatransfer is part of datasystem_datasource, so the origin of the lineage data will be the same. Now, to your first question of why creating this second lineage would impact the first: no, it would not impact it. Please let me know if you need any clarification on the above.
01-08-2025
02:51 AM
Error Code: ATLAS-404-00-007 ("Invalid instance creation/updation parameters passed: type_name.entity_name: mandatory attribute value missing in type type_name"). This error indicates that, when creating or updating an entity (likely in Apache Atlas or a similar system), a required attribute value for that entity is missing or was not provided. Specifically, the entity's type (indicated as type_name.entity_name) is missing a value for a mandatory attribute defined on that type.

Error Code: ATLAS-400-00-08A. This error typically occurs when you try to upload or import a ZIP file that is either empty or does not contain any valid data. Verify that the ZIP file you are attempting to upload actually contains data; check its contents and ensure it is not empty. If it should contain data, try recreating the ZIP file or make sure it is properly packaged before importing.
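For the first error, the fix is to include every mandatory attribute of the type in the create/update payload. A minimal sketch of such a request; the type name, attribute values, host, and credentials are placeholders, and for most built-in types the mandatory attributes are at least qualifiedName and name:

```bash
# Create an entity with its mandatory attributes populated.
curl -s -u admin:admin \
     -H "Content-Type: application/json" \
     -X POST "http://atlas-host:21000/api/atlas/v2/entity" \
     -d '{
           "entity": {
             "typeName": "hive_db",
             "attributes": {
               "qualifiedName": "example_db@cluster1",
               "name": "example_db",
               "clusterName": "cluster1"
             }
           }
         }'
```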
01-08-2025
02:40 AM
@dhughes20 Please check this JIRA: https://issues.apache.org/jira/browse/ATLAS-1729
01-08-2025
02:38 AM
@dhughes20 This looks like a bug; please check https://issues.apache.org/jira/browse/ATLAS-3958
08-29-2024
06:58 PM
2 Kudos
The HBase master was stuck in the initializing state, since many server crash (recovery) procedures were running due to the HBase master failure. After clearing the /hbase znode from zkCli, the issue was resolved (see the sketch below).
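A minimal sketch of that workaround; the ZooKeeper quorum is a placeholder, and clearing /hbase removes HBase's ZooKeeper state, so it should only be done with the HBase services stopped and with an understanding of the consequences:

```bash
# Open a ZooKeeper shell (hbase zkcli, or zkCli.sh -server zk-host:2181,
# depending on the distribution)
hbase zkcli

# Inside the ZooKeeper shell: remove the /hbase znode recursively.
# Older ZooKeeper CLIs use "rmr /hbase" instead of "deleteall /hbase".
deleteall /hbase
```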
08-26-2024
02:15 AM
2 Kudos
Yes, it works with the PUT command in place of the GET command.
08-25-2024
11:46 PM
3 Kudos
Hi All, I will clarify the WORKAROUND in case it helps anyone. The wizard installer is composed of 2 phases, defined in the "Select Repository" screen:

[Phase1] The Cloudera Manager processes (JDK if needed, cloudera-scm-agent, Cloudera daemons)
[Phase2] The Runtime (the roles such as Impala, Kudu, Hive...)

The main issue was that the install attempt for Phase1 did not take into account the proxy added for the Runtime repo, so it failed every time, even with the proxy set in /etc/bashrc so that any user could use it. Here are two solutions (see the sketch below):

1º Manually install the packages required by Phase1 using /etc/yum.repos.d/cloudera-manager.repo (I used yum): openjdk8-1.8.0_372_cloudera-1.x86_64 and cloudera-manager-agent.x86_64 on all servers, then retry the wizard (it will detect that the packages are already installed and jump to the Runtime part).

2º Set the proxy in /etc/yum.conf (proxy=http://user:pass@proxy-IP:8080) and disable all other repos (I had to do this to stop yum from failing).

NOTE: the installer may fail because cloudera-manager-agent.x86_64 has many dependencies, and some of them might be in the OS repo (this is why I installed it manually).
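A minimal sketch of both options on a RHEL-family host; the package versions and proxy address come from the post above, while the repo id in the last command is a placeholder:

```bash
# Option 1: manually install the Phase1 packages from
# /etc/yum.repos.d/cloudera-manager.repo on every server, then retry the wizard
sudo yum install -y openjdk8-1.8.0_372_cloudera-1.x86_64 cloudera-manager-agent.x86_64

# Option 2: make yum itself use the proxy and disable the other repos
echo 'proxy=http://user:pass@proxy-IP:8080' | sudo tee -a /etc/yum.conf
sudo yum-config-manager --disable "other-repo-id"
```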