Member since: 01-16-2018
Posts: 613
Kudos Received: 48
Solutions: 109
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1443 | 04-08-2025 06:48 AM |
| | 1714 | 04-01-2025 07:20 AM |
| | 1714 | 04-01-2025 07:15 AM |
| | 1358 | 05-06-2024 06:09 AM |
| | 2085 | 05-06-2024 06:00 AM |
03-08-2022
01:37 AM
Hello @corestack, We hope the post by @Azhar_Shaikh pointing to link [1] helps your team. Since there has been no further response from your side, we shall mark this post as resolved. Feel free to raise any further concerns with your team's CDP adoption via a post in the Community and we shall help. Regards, Smarak [1] https://community.cloudera.com/t5/Community-Articles/How-to-configure-Single-Sign-On-SSO-for-CDP-Public-Cloud-the/ta-p/300222
03-08-2022
01:34 AM
Hello @nikrahu, We believe the post by @ggangadharan answers your queries, so we shall mark this post as resolved. If you have any further concerns, feel free to engage the Cloudera Community via a new post. Thanks @ggangadharan for the detailed examples! Regards, Smarak
03-07-2022
11:54 PM
Hello @OmarElSeihy, Thanks for using the Cloudera Community. Based on the post, your team is receiving the warning quoted at [1]. By default, the role uses "/tmp" for -XX:HeapDumpPath. Your team needs to change the value of -XX:HeapDumpPath to a mount with sufficient free space, or alternatively reduce the filesystem usage on the affected mount point. This warning is an alert for the cluster admin and has no impact on any service in the cluster. Sharing a screenshot of the configuration changes required, should your team wish to modify the heap dump path and the warning threshold (the example uses the Solr service, since Solr appears in the post tags): Regards, Smarak [1] This role's Heap Dump Directory is on a filesystem with less than 10.0 GiB of its space free. /tmp
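For reference, the Cloudera Manager "Heap Dump Directory" setting ultimately controls the JVM's -XX:HeapDumpPath flag. A minimal sketch of the relevant options is below; the path /var/log/solr_heapdumps is only an illustrative placeholder for any mount with sufficient free space:

```shell
# Illustrative only: the JVM flags behind the "Heap Dump Directory" setting.
# /var/log/solr_heapdumps is a placeholder; pick any mount with enough
# free space to clear the 10 GiB warning threshold.
HEAP_DUMP_OPTS="-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/log/solr_heapdumps"
echo "$HEAP_DUMP_OPTS"
```

In Cloudera Manager these values are set per role via the role's configuration page rather than by editing JVM options by hand.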
03-07-2022
11:46 PM
Hello @Almolki, Thanks for using the Cloudera Community. Based on the post, your team is hitting the error [1] on Solr 8.4.1. This is a known limitation of Solr: each shard can host a maximum of roughly 2 billion documents [2]. Your team has the following choices: (1) use SPLITSHARD to split the shard holding ~2B documents into two daughter shards (refer to [3] for the SPLITSHARD API); or (2) if your team still has the source data, create a fresh collection with a higher shard count and reindex the source data. Kindly review and let us know whether your queries have been addressed. Regards, Smarak [1] Caused by: java.lang.IllegalArgumentException: number of documents in the index cannot exceed 2147483519 [2] https://issues.apache.org/jira/browse/SOLR-3504 [3] https://solr.apache.org/guide/8_4/shard-management.html#shard-management
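A sketch of the Collections API SPLITSHARD call described in option (1); the host, collection name ("mycollection"), and shard id ("shard1") are placeholders for the actual cluster values:

```shell
# Build the SPLITSHARD request URL (Solr 8.x Collections API).
# SOLR_HOST, mycollection, and shard1 are placeholders for real values.
SOLR_HOST="localhost:8983"
SPLIT_URL="http://${SOLR_HOST}/solr/admin/collections?action=SPLITSHARD&collection=mycollection&shard=shard1&async=split-1"
# curl "$SPLIT_URL"   # run against the live cluster; poll with action=REQUESTSTATUS
echo "$SPLIT_URL"
```

Running the split asynchronously (the `async` parameter) is advisable for a shard this large, since the operation can take a long time and the request would otherwise risk timing out.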
03-07-2022
01:37 AM
Hello @ganeshkumarj, Thanks for using the Cloudera Community. Based on the post, you are migrating from Cloudera Search (CDH 5.9.3) to standalone Solr (Apache 4.10.3). As your team noted, the error indicates that the manually copied index was written with a Lucene version higher than the target expects [1]. Your team can confirm the Lucene version via "solrconfig.xml" for the collection "sample_collection" on CDH. If matching the Lucene versions is not feasible, reindexing is the only way forward. That said, there are a few areas where our help in this post will be limited: (I) CDH 5.9.3 has been end-of-support for a long time, and internally we have an extremely limited setup for investigating your team's concerns further. (II) Your team is performing the migration on standalone Solr (Apache 4.10.3); Cloudera packages Solr as Search (in CDH) and Solr (in CDP), so we have limited input on open-source deployments outside Cloudera products. Our team would be happy to assist your team in migrating from CDH 5.9.3 to CDP if required; we have internally tested documentation for migrating from CDH Search to CDP Solr, and your team would receive support assistance for any issues as well. Regards, Smarak [1] https://lucene.apache.org/core/7_1_0/core/org/apache/lucene/index/IndexFormatTooNewException.html
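The Lucene version check mentioned above is done in the collection's solrconfig.xml. An illustrative fragment is below; 4.10.3 is shown because CDH 5.9.3 ships a Solr 4.10.3-based build, but the actual value in your file is what matters:

```xml
<!-- solrconfig.xml fragment (illustrative). The index format Solr writes is
     governed by luceneMatchVersion; a standalone Apache Solr 4.10.3 target
     can only read indexes written at or below its own Lucene version. -->
<luceneMatchVersion>4.10.3</luceneMatchVersion>
```

If the CDH collection's value is higher than what the standalone target supports, the copied index will raise IndexFormatTooNewException and reindexing is required.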
03-04-2022
06:44 AM
Greetings @Sayed016, Thanks for using the Cloudera Community. HBase has a client scan timeout, "hbase.client.scanner.timeout.period" (default 60 seconds), along with a server RPC timeout, "hbase.rpc.timeout" (default 60 seconds). I believe the timeout your team is experiencing is the first one. Kindly set both parameters to 90 seconds and ensure the Hive client (where your team is accessing the Hive-on-HBase tables) picks up the updated HBase configuration. Regards, Smarak
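The two settings above live in hbase-site.xml on the client side and take values in milliseconds. A minimal sketch of the suggested 90-second configuration:

```xml
<!-- hbase-site.xml (client side) -- illustrative values per the advice above.
     Both properties are specified in milliseconds. -->
<property>
  <name>hbase.client.scanner.timeout.period</name>
  <value>90000</value> <!-- 90 seconds -->
</property>
<property>
  <name>hbase.rpc.timeout</name>
  <value>90000</value> <!-- 90 seconds -->
</property>
```

In a managed cluster these would normally be set via Cloudera Manager's HBase client safety valve so that Hive gateways pick up the change.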
12-07-2021
09:58 PM
Hello @AWlodarczyk, Thanks for using the Cloudera Community. Link [1] covers the minimum requirement for both rows within their respective families; in other words, the Azul JDK 8 family (8.56.0.21 and above) and the Azul JDK 11 family (11.50.19 and above). These requirements do not apply to the Azul JDK 13, 15, or 17 families [2] for now. In [3], your team can see the JDK and the respective JDK family version being referenced. Let us know if the above answers your queries. Regards, Smarak [1] https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/release-guide/topics/cdpdc-java-requirements.html [2] https://www.azul.com/downloads/?package=jdk [3] https://supportmatrix.cloudera.com/
12-07-2021
01:21 PM
Hello @lbourgeois, Apologies for the delayed response, and thank you for the details. Internally, I could reproduce "HTTP ERROR 403 Forbidden" by removing the DEAdmin and DEUser privileges from the environment associated with the CDE service for the user whose [User]:[Pass] is being passed. Once those privileges were added back to the user at the environment level and the "Synchronize Users" operation completed successfully, the token was available (wait ~5 minutes before retrying the curl command). Kindly review and let us know if the above steps work for you. Regards, Smarak
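For context, the curl token request referenced above typically takes the shape sketched here; the host is a placeholder (it was not included in the thread) and the exact gateway path may differ by CDE version:

```shell
# Illustrative CDE access-token request; host and user are placeholders.
# The authenticating user must hold DEAdmin or DEUser on the environment.
JOBS_API_HOST="https://<vc-jobs-api-host>"   # placeholder, not from the post
# curl -u "<workload-user>" "${JOBS_API_HOST}/gateway/authtkn/knoxtoken/api/v1/token"
```

A 403 from this endpoint is consistent with the missing environment-level privileges described above, rather than with bad credentials.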
12-07-2021
12:59 PM
Hello @rootuser, Thanks for using the Cloudera Community. Based on the post, you are using CopyTable to copy HBase table(s) from one cluster to another, and only one mapper is being observed. Please confirm whether the source table has only one region, and whether CopyTable on a table with more than one region (say, five regions) creates one mapper or five mappers. Also, please share the HBase version your team is using and the timeout being observed. As far as I recall, HBase uses one mapper per region, so it is likely the source table has a single region. In that case, increasing the region count via a pre-split, or increasing the timeout, should help. Regards, Smarak
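A sketch of the CopyTable invocation under discussion; the ZooKeeper quorum and table name are placeholders. Since CopyTable runs one map task per region of the source table, a single-region source yields a single mapper:

```shell
# Illustrative CopyTable run from the source cluster; --peer.adr points at the
# destination cluster's ZooKeeper quorum (hosts:port:znode). Placeholder values.
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --peer.adr=dest-zk1,dest-zk2,dest-zk3:2181:/hbase \
  my_table
```

If the source table does turn out to have one region, pre-splitting a recreated table (or splitting the existing region) before copying will parallelize the job across mappers.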
12-01-2021
12:46 AM
Hello @RafaelDiaz, We hope the post was helpful for you and are marking it as resolved. If your team continues to face the issue with HiveAccessControlException, do update the post and we can check accordingly. Regards, Smarak