Member since 01-16-2018
613 Posts
48 Kudos Received
109 Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 778 | 04-08-2025 06:48 AM |
| | 960 | 04-01-2025 07:20 AM |
| | 916 | 04-01-2025 07:15 AM |
| | 962 | 05-06-2024 06:09 AM |
| | 1504 | 05-06-2024 06:00 AM |
04-08-2022
12:51 AM
Greetings @stephen_obrien. Thanks for using Cloudera Community. We see your Team is working with our Support Team on the concerned issue. Based on the Support engagement, we shall update the Post accordingly. Regards, Smarak
04-08-2022
12:46 AM
Hello @MadhuNP. Thanks for using Cloudera Community. We see your Team is working with our Support Team on the concerned issue. Based on the Support engagement, we shall update the Post accordingly. Regards, Smarak
04-08-2022
12:33 AM
Hello @Neil_1992 & @maykiwogno. While we wait for our NiFi Guru @MattWho to review, we wish to provide a bit of information on the Lucene exception. The NiFi Provenance Repository uses Lucene for indexing, and the AlreadyClosedException means the Lucene core being accessed has already been closed, owing to a FileSystemException with "Too Many Open Files" for one of the core content files: "/provenance_repo/provenance_repository/lucene-8-index-1647749380623/_vd_Lucene80_0.dvd". Once AlreadyClosedException is reported, restarting the Lucene service would ensure the cores are initialized afresh. We wish to check whether your Team has attempted to increase the open file limit of the user running the NiFi process to address the "Too Many Open Files" FileSystemException, and then restarted NiFi, which I assume would restart the Lucene cores as well. Note that the above answer is provided from a Lucene perspective, as I am not a NiFi expert. My only intention is to get your Team unblocked if this issue is blocking your NiFi work. Regards, Smarak
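In case it helps while checking that limit, here is a minimal sketch (assuming a Linux host and permission to read the /proc entries of the NiFi JVM; the PID in the example is purely hypothetical) that compares a process's open file descriptor count against its "Max open files" soft limit:

```python
import os

def open_files_headroom(pid: int) -> None:
    """Print how many FDs a process holds versus its soft 'Max open files' limit."""
    # Each entry under /proc/<pid>/fd is one open file descriptor.
    open_fds = len(os.listdir(f"/proc/{pid}/fd"))
    soft_limit = None
    with open(f"/proc/{pid}/limits") as f:
        for line in f:
            # Line format: "Max open files   <soft>   <hard>   files"
            if line.startswith("Max open files"):
                soft_limit = int(line.split()[3])
                break
    print(f"PID {pid}: {open_fds} open FDs out of a soft limit of {soft_limit}")

# Hypothetical NiFi JVM PID:
# open_files_headroom(12345)
```

If the count sits close to the soft limit, raising the nofile limit for the NiFi user (for example via /etc/security/limits.conf) and then restarting NiFi, as suggested above, is the usual remediation.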
04-08-2022
12:22 AM
Hello @ISC. Thanks for using Cloudera Community. Based on the Post, you are experiencing the shared Error while using Python with Spark. We shall need the full trace of the Error along with the Operation that caused it (even though the Operation is apparent in the trace shared), plus the Client used (example: PySpark) & the CDP/CDH/HDP release in use. The above details would help us review your Issue & proceed accordingly. Regards, Smarak
04-08-2022
12:17 AM
Hello @AzfarB. We hope the above Post has helped answer your concerns & offered an Action Plan for further review. We are marking the Post as Resolved for now. For any concerns, feel free to post your ask in a new Post & we shall get back to you accordingly. Regards, Smarak
04-06-2022
11:51 PM
Thank you @RangaReddy for this detailed write-up. The level of detail is awesome 👏
03-29-2022
01:38 AM
Hello @Suresh_lakavath. Since we haven't heard from your side concerning the Post, we are marking it as Closed for now. Feel free to update the Post based on your Team's observations from the Action Plan shared on 03/09. Regards, Smarak
03-29-2022
01:17 AM
Hello @Moawad. Hope you are doing well. Kindly let us know if the Post on 03/20 documenting a few Links from CDH v6.x helped your Team. Regards, Smarak
03-29-2022
01:13 AM
1 Kudo
Hello @dutras. As the issue has been resolved via a Support Case, we are marking the Post as Resolved. For reference, this Case required a Repair & associated Steps, which are too verbose to document in the Community. For anyone facing similar issues, kindly submit a Support Case. Regards, Smarak
03-29-2022
01:00 AM
Hello @AzfarB. Thanks for using Cloudera Community. Based on the Post, your Team observed the Solr-Infra JVM reporting a WARNING for Swap Space usage above 200MB, and restarting the Solr-Infra JVM made the WARNING go away.

Note that swapping isn't bad in general; this has been discussed in detail by the community in [1] & [2]. Also, deleting RangerAudits documents won't change the swapping, as Solr relies on the JVM heap as documented in [3]; indexed documents aren't persisted in memory unless cached, so deletion isn't guaranteed to fix the swapping.

As your screenshot shows, the Host itself is running short on Memory (~99% utilised) & overall Swap usage is ~80% at ~47GB, of which Solr-Infra contributes <1GB. As documented in the Links below, your Team can focus on the Host-level usage & consider increasing the Swap threshold for the WARNING from 200MB to at least 10% of the Heap, i.e. 2GB.

One additional point concerns why the Solr-Infra restart resolved the WARNING. This needs to be looked at from the Host perspective: how much Memory was freed & whether the overall Swap usage at the Host level reduced after the Solr-Infra restart, as opposed to only the Solr-Infra WARNING being suppressed.

Regards, Smarak

[1] https://unix.stackexchange.com/questions/2658/why-use-swap-when-there-is-more-than-enough-free-space-in-ram
[2] https://chrisdown.name/2018/01/02/in-defence-of-swap.html
[3] https://blog.cloudera.com/apache-solr-memory-tuning-for-production/
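To make the Host-level comparison above concrete, here is a minimal sketch (assuming a Linux host; the PID in the example is purely hypothetical) that reads one process's swap contribution from /proc/<pid>/status and compares it with overall swap usage from /proc/meminfo:

```python
def swap_usage_kb(pid: int) -> int:
    """Swap used by a single process, in kB (VmSwap from /proc/<pid>/status)."""
    with open(f"/proc/{pid}/status") as f:
        for line in f:
            if line.startswith("VmSwap:"):
                return int(line.split()[1])
    return 0  # Fall back to 0 if no VmSwap line is present.

def host_swap_kb() -> tuple[int, int]:
    """Return (total, used) swap on the host, in kB, from /proc/meminfo."""
    info = {}
    with open("/proc/meminfo") as f:
        for line in f:
            key, rest = line.split(":", 1)
            info[key] = int(rest.split()[0])
    total = info["SwapTotal"]
    return total, total - info["SwapFree"]

# Hypothetical Solr-Infra JVM PID:
# total, used = host_swap_kb()
# print(f"Host swap: {used}/{total} kB; Solr-Infra: {swap_usage_kb(24680)} kB")
```

If the host-level figure stays high while the process's VmSwap remains small, the pressure is coming from the host as a whole rather than from Solr-Infra, which matches the observation above.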