Member since: 01-16-2018
Posts: 613
Kudos Received: 48
Solutions: 109
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 784 | 04-08-2025 06:48 AM |
| | 965 | 04-01-2025 07:20 AM |
| | 916 | 04-01-2025 07:15 AM |
| | 964 | 05-06-2024 06:09 AM |
| | 1506 | 05-06-2024 06:00 AM |
03-07-2022
11:46 PM
Hello @Almolki Thanks for using Cloudera Community. Based on the Post, your Team is experiencing [1] on Solr 8.4.1. The Issue being faced is a Known Limitation of Solr, wherein each Shard can host a Maximum of ~2B Documents [2]. Your Team has the following Choices:

1. Use SplitShard to split the Shard holding ~2B Documents into 2 Daughter Shards. Refer to [3] for using the SplitShard API (see the Sketch below).
2. If your Team has the Source Data, create a new Collection with a Higher Shard Count & reindex the Source Data.

Kindly review & let us know if your Queries have been addressed. Regards, Smarak

[1] Caused by: java.lang.IllegalArgumentException: number of documents in the index cannot exceed 2147483519
[2] https://issues.apache.org/jira/browse/SOLR-3504
[3] https://solr.apache.org/guide/8_4/shard-management.html#shard-management
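A minimal sketch of the SPLITSHARD call via the Collections API, assuming a hypothetical Collection "myCollection" whose Shard "shard1" holds the ~2B Documents; adjust the Host, Port & Names for your Cluster:

```bash
# Split shard1 of myCollection into two daughter shards (shard1_0, shard1_1).
# Run asynchronously so the long-running split can be tracked via REQUESTSTATUS.
curl "http://solr-host:8983/solr/admin/collections?action=SPLITSHARD&collection=myCollection&shard=shard1&async=split-shard1"

# Poll the async request until it reports completion.
curl "http://solr-host:8983/solr/admin/collections?action=REQUESTSTATUS&requestid=split-shard1"
```

After a successful Split, the Parent Shard is marked Inactive & can later be removed via the DELETESHARD action.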
03-07-2022
01:37 AM
Hello @ganeshkumarj Thanks for using Cloudera Community. Based on the Post, you are migrating from Cloudera Search (CDH 5.9.3) to Standalone Solr (Apache v4.10.3). As your Team mentioned, the Error points to the Index (copied manually) being on a Lucene Version higher than anticipated [1]. Your Team can confirm the LuceneVersion via "solrconfig.xml" for the Collection "sample_collection" on CDH (a quick Check is sketched below). If a LuceneVersion Match isn't feasible, ReIndexing is the only Way forward. Yet, there are a few areas wherein our help in this Post would be limited:

(I) CDH v5.9.3 has been EoS for a long time. Internally, we have an extremely limited Setup for checking further on your Team's concerns.
(II) Your Team is implementing the Migration on Standalone Solr (Apache v4.10.3). The Cloudera Product Offerings package Solr as Search (in CDH) & Solr (in CDP). Unfortunately, we have limited input on any Open Source Implementation outside of the Cloudera Products.

Our Team would be happy to assist your Team in migrating from CDH v5.9.3 to CDP, if required. We have Documentation (tested internally) to migrate from CDH Search to CDP Solr & your Team would get Support assistance for any issues as well. Regards, Smarak

[1] https://lucene.apache.org/core/7_1_0/core/org/apache/lucene/index/IndexFormatTooNewException.html
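A quick way to check the configured Lucene Version is to read the luceneMatchVersion element in solrconfig.xml. A minimal sketch, assuming the Instance Directory is named "sample_collection" & is managed via solrctl on the CDH side (paths & names are illustrative):

```bash
# Download the instance directory (including solrconfig.xml) from ZooKeeper on the CDH cluster.
solrctl instancedir --get sample_collection /tmp/sample_collection_conf

# Print the Lucene version the collection was configured against.
grep -i "luceneMatchVersion" /tmp/sample_collection_conf/conf/solrconfig.xml
```

If the Version printed is higher than what Lucene in the Standalone Solr (v4.10.3) can read, the copied Index Files won't be usable there & Reindexing is required.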
03-04-2022
06:44 AM
Greetings @Sayed016 Thanks for using Cloudera Community. HBase has a Client Scan Timeout "hbase.client.scanner.timeout.period" (60 Seconds by Default) along with a Server RPC Timeout "hbase.rpc.timeout" (60 Seconds by Default). I believe the Timeout being experienced by your Team is the 1st one. Kindly set the 2 Parameters to 90 Seconds & ensure the Hive Client (wherein your Team is accessing the Hive-On-HBase Tables) is picking up the Updated HBase Configurations (a sketch follows below). Regards, Smarak
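A minimal sketch of applying the 2 Timeouts (values in Milliseconds) from a Beeline Session before querying the Hive-On-HBase Table; this assumes a hypothetical Table "hbase_table" & that HiveServer2 permits these Properties at the Session level (otherwise, set them in the HBase Client Configuration, i.e. hbase-site.xml, used by Hive):

```bash
# Raise both timeouts to 90 seconds (90000 ms) for this Hive session only,
# then run the scan-heavy query against the Hive-on-HBase table.
beeline -u "jdbc:hive2://hs2-host:10000/default" -e "
SET hbase.rpc.timeout=90000;
SET hbase.client.scanner.timeout.period=90000;
SELECT COUNT(*) FROM hbase_table;
"
```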
12-07-2021
09:58 PM
Hello @AWlodarczyk Thanks for using Cloudera Community. The Link [1] covers the Minimum Requirement for both rows in their respective Families, in other words, the Azul JDK8 Family (8.56.0.21 & above) & the Azul JDK11 Family (11.50.19 & above). They don't apply to the Azul JDK13, JDK15 & JDK17 Families [2] for now. In [3], your Team shall observe the JDK & the respective JDK Family Version being referred to. Let us know if the above Post answers your queries. Regards, Smarak

[1] https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/release-guide/topics/cdpdc-java-requirements.html
[2] https://www.azul.com/downloads/?package=jdk
[3] https://supportmatrix.cloudera.com/
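To confirm which Azul JDK Family & Update Version a Host is actually running (and hence which row of the Support Matrix applies), a quick check such as the below is sufficient; the path & sample output are illustrative:

```bash
# Print the vendor, family (8/11) and update version of the JDK used by the services.
/usr/java/default/bin/java -version

# Illustrative output for an Azul Zulu build from the JDK11 family:
# openjdk version "11.0.12" 2021-07-20 LTS
# OpenJDK Runtime Environment Zulu11.50+19-CA (build 11.0.12+7-LTS)
```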
12-07-2021
01:21 PM
Hello @lbourgeois Apologies for the delayed response & Thank You for the details. Internally, I could reproduce "HTTP ERROR 403 Forbidden" by removing the DEAdmin & DEUser Roles from the Environment associated with the CDE Service for the User whose [User]:[Pass] is being passed. Once the above Privileges were added back to the User at the Environment Level & the "Synchronize Users" Operation completed successfully, the Token was available (wait for ~5 Minutes before retrying the Curl Command; a retry sketch follows below). Kindly review & let us know if the above Steps work for you. Regards, Smarak
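A minimal retry sketch, assuming a hypothetical Workload User "csso_user"; substitute your own CDE Base URL. Printing only the HTTP Status Code makes it easy to confirm the 403 has turned into a 200 after the User Sync:

```bash
# Print only the HTTP status code: 403 before the roles/sync are in place, 200 afterwards.
curl -s -o /dev/null -w "%{http_code}\n" \
  -u csso_user \
  "<Your-CDE-Base-URL>/gateway/authtkn/knoxtoken/api/v1/token"
```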
12-07-2021
12:59 PM
Hello @rootuser, Thanks for using Cloudera Community. Based on the Post, you are trying to use CopyTable to copy HBase Table(s) from 1 Cluster to another Cluster, wherein only 1 Mapper is being observed. Please confirm the following:

1. Whether the Source Table has 1 Region only.
2. Whether CopyTable on a Table with >1 Regions (say, 5 Regions) creates 1 Mapper or 5 Mappers.
3. The HBase Version being used by your Team.
4. The Timeout being observed by your Team.

As far as I recall, CopyTable uses 1 Mapper per Region. As such, it's likely the Source Table has 1 Region only. In such a case, increasing the Region Count via Pre-Splitting or increasing the Timeout should help (a sketch follows below). Regards, Smarak
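A minimal sketch of checking the Region Count, Pre-Splitting at Table Creation & the standard CopyTable driver; the Table Name "my_table", Column Family "cf", Split Keys & ZooKeeper Quorum are illustrative:

```bash
# Inside the HBase shell: check how many regions the source table has
#   list_regions 'my_table'                                  # available in recent HBase shell versions
# and, for a freshly created table, pre-split it so CopyTable gets more than one mapper:
#   create 'my_table_presplit', 'cf', SPLITS => ['g', 'n', 't']

# Standard CopyTable driver; roughly one mapper is launched per region of the source table.
hbase org.apache.hadoop.hbase.mapreduce.CopyTable \
  --peer.adr=dest-zk1,dest-zk2,dest-zk3:2181:/hbase \
  my_table
```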
12-01-2021
12:46 AM
Hello @RafaelDiaz We hope the Post was helpful for you & are marking the same as Resolved. If your Team continues to face the issue with HiveAccessControlException, do update the Post & we can check accordingly. Regards, Smarak
11-30-2021
08:19 AM
Hello @lbourgeois Thanks for using Cloudera Community. Based on the Post, you are following [1] to get the CDE API Access Token & the Command just hangs. In short, you entered the Workload Password after using your Environment's CDE Base URL followed by the KnoxToken Endpoint:

curl -u <Your-Workload-User> <Your-CDE-Base-URL>/gateway/authtkn/knoxtoken/api/v1/token

Kindly confirm whether the behavior is consistent across all CDE Services & all Users, and whether the DataLake (FreeIPA & IDBroker) is Up & Running. Additionally, confirm the CDE Version being used. Regards, Smarak

[1] https://docs.cloudera.com/data-engineering/cloud/api-access/topics/cde-api-get-access-token.html
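For reference, a minimal sketch of capturing the Token for subsequent CDE API Calls once the Endpoint responds; the jq usage, Variable name & Jobs API URL are illustrative placeholders:

```bash
# Fetch the access token (prompts for the workload password) and keep it in a variable.
export CDE_TOKEN=$(curl -s -u <Your-Workload-User> \
  "<Your-CDE-Base-URL>/gateway/authtkn/knoxtoken/api/v1/token" | jq -r '.access_token')

# Use the token against a CDE Jobs API endpoint (URL is illustrative).
curl -s -H "Authorization: Bearer ${CDE_TOKEN}" \
  "<Your-CDE-Jobs-API-URL>/dex/api/v1/jobs"
```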
11-16-2021
10:36 PM
Hello @xgxshtc Thanks for the Update. If you try to access the WAL File (for which the DFSOutputStream reports a Premature EOF) via "hdfs dfs -cat/-head", does the Command run successfully? The FSCK Output can be modified to include "-openforwrite" to show the details of the 10 Files currently Open For Write (a sketch follows below). Regards, Smarak
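A minimal sketch of the 2 Checks, with an illustrative WAL Path:

```bash
# Try reading the first KB of the suspect WAL file (path is illustrative).
hdfs dfs -head /hbase/WALs/<regionserver-dir>/<wal-file>

# List files under the WAL directory that are still open for write, with block details.
hdfs fsck /hbase/WALs -openforwrite -files -blocks -locations
```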
11-15-2021
12:05 AM
Hello @xgxshtc Thanks for the Update. By Sideline, we meant ensuring the HBase WAL Directory "/hbase/WALs" is Empty. Let us know if the below Steps help (a sketch of the Sideline & FSCK Check follows below):

1. Stop the HBase RegionServers.
2. Sideline the WAL Directory Contents, i.e. there shouldn't be any Directories within "/hbase/WALs".
3. Restart the HBase RegionServers.

Additionally, you reinstalled the Cluster & yet observed the concerned Issue again. This likely indicates the HDFS State may be Unhealthy. Any chance you can review the HDFS FSCK on the HBase WAL Directory [1] to confirm whether the Blocks associated with the HBase WAL Files are Healthy? Regards, Smarak

[1] https://hadoop.apache.org/docs/r1.2.1/commands_manual.html#fsck
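A minimal sketch of the Sideline & the FSCK Check, with an illustrative Sideline Path; run this only while the RegionServers are stopped:

```bash
# Move (sideline) everything under the WAL directory instead of deleting it outright.
hdfs dfs -mkdir -p /hbase/WALs_sideline
hdfs dfs -mv '/hbase/WALs/*' /hbase/WALs_sideline/

# Verify the block health of the (sidelined) WAL files.
hdfs fsck /hbase/WALs_sideline -files -blocks -locations
```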