Member since: 03-06-2020
Posts: 406
Kudos Received: 56
Solutions: 37
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1089 | 08-29-2025 12:27 AM |
| | 1630 | 11-21-2024 10:40 PM |
| | 1539 | 11-21-2024 10:12 PM |
| | 5281 | 07-23-2024 10:52 PM |
| | 3017 | 05-16-2024 12:27 AM |
04-07-2022
01:48 AM
Hi @swapko,

- Please provide the error stacktrace that you see when you refresh the Impala tables.
- Was it working earlier?

Regards, Chethan YM
04-04-2022
03:32 AM
Hi @dmharshit,

This error can occur if you have deployed the "Hive Server 2" and "WebHCat" roles under the Hive service. Can you stop and delete the "Hive Server 2" and "WebHCat" instances from Cloudera Manager -> Hive -> Instances? Restart any services with stale configurations in the cluster and retry the Hive query.

Regards, Chethan YM
04-04-2022
03:19 AM
Hi,

As per the Jira above (HUE-2782), the resolution is "Won't Fix", so this issue will not be fixed upstream. We cannot tell whether you are facing the same issue or not, so please attach an error screenshot and the error stacktrace so we can look into it.

Regards, Chethan YM
03-23-2022
05:26 AM
1 Kudo
Hi @PratCloudDev,

A "spill to disk" happens when there is not enough memory available for a running query. For example, suppose a query is using 10 GB of memory (per-node peak memory) but actually needs 12 GB; in that case the extra data spills to the configured scratch directories. You can find these directories by searching for the "Impala Daemon Scratch Directories" property in the Impala configuration.

If you do not want the query to fail, make sure the configured scratch directories/disks have enough space to hold the spilled data; this can potentially be a large amount.

Check the query profile for "per-node peak memory" — it is the actual memory that query used on each daemon. If it is, say, 15 GB, set MEM_LIMIT to 10 GB or 12 GB to observe the spill-to-disk behaviour.

To understand why you are seeing the error [1], I need a few details from your side:
1. A screenshot of the Impala admission control pool settings.
2. How much memory are you setting when you see the error [1]?
3. Which pool are you using to run the query?
4. If possible, provide the query profile.

Regards, Chethan YM

[1] Rejected query from pool root.centos: minimum memory reservation is greater than memory available to the query for buffer reservations.
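As a minimal illustrative sketch of the steps above (the database, table, and limit values are hypothetical, not taken from the original question), you can force spilling in impala-shell by capping MEM_LIMIT below the query's per-node peak memory:

```sql
-- Hypothetical example: cap query memory below its per-node peak
-- so memory-hungry operators spill to the scratch directories.
SET MEM_LIMIT=10g;            -- per-node cap; assume the real peak is ~15 GB

SELECT customer_id, COUNT(*)  -- a large aggregation likely to exceed the cap
FROM my_db.sales_fact         -- hypothetical database/table
GROUP BY customer_id
ORDER BY COUNT(*) DESC;

-- The query profile afterwards will indicate whether spilling occurred
-- and how much scratch space was written.

SET MEM_LIMIT=0;              -- 0 removes the session cap again
```

Make sure the scratch directories have free space before trying this, since the spilled data can be large.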
02-28-2022
04:50 AM
1 Kudo
Hi,

A similar exception can be seen if we do not add a credentials block within the workflow, something like the one below. Do check that you have the correct credentials if it is a kerberized cluster:

<credentials>
  <credential name="hcatauth_creds" type="hcat">
    <property>
      <name>hcat.metastore.uri</name>
      <value>thrift://<metastore-fqdn>:9083</value>
    </property>
    <property>
      <name>hcat.metastore.principal</name>
      <value>hive/_HOST@<REALM></value>
    </property>
  </credential>
</credentials>

Regards, Chethan YM
02-04-2022
05:00 AM
Hi,

Yes, Impala daemons will use the memory during execution; your understanding is correct. In the attached screenshot I can see corrupted stats for the tables involved in the query. We recommend running "compute stats" on the tables that have partial stats and then rerunning the queries; otherwise Impala may generate a bad execution plan and use more memory than expected.

Regards, Chethan YM
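A minimal sketch of the recommended fix (the database and table names are hypothetical):

```sql
-- Check whether statistics are missing or partial for a table.
SHOW TABLE STATS my_db.my_table;   -- hypothetical database/table
SHOW COLUMN STATS my_db.my_table;  -- -1 values indicate missing stats

-- Recompute statistics, then rerun the query so the planner
-- can produce a better execution plan.
COMPUTE STATS my_db.my_table;

-- For large partitioned tables, incremental stats can be cheaper:
-- COMPUTE INCREMENTAL STATS my_db.my_table;
```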
01-25-2022
04:08 AM
1 Kudo
Hi,

Please try to rerun the Sqoop import outside of Oozie and confirm whether it works or not. If it works, then we need to check it from the Oozie side. We also need the complete stacktrace of the error.

Regards, Chethan YM
01-25-2022
02:51 AM
1 Kudo
Hi,

You can restrict the amount of memory Impala reserves during query execution by specifying the -mem_limit option. If you set mem_limit=2gb, the query will not use more than 2 GB even if it needs more.

If you cannot set the memory limit at execution time every time, you can create a new resource pool under Impala admission control. While creating the resource pool you can set the Minimum and Maximum Query Memory Limit; do not use this resource pool for production queries. Then run the test queries with:

set request_pool="pool-name"

Regards, Chethan YM
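Putting the two suggestions together, a hypothetical impala-shell test session might look like this (the pool and table names are made up for illustration):

```sql
-- Route the session to a non-production test pool and cap its memory.
SET REQUEST_POOL=root.testpool;       -- hypothetical test pool
SET MEM_LIMIT=2g;                     -- hard per-node cap for this session

SELECT COUNT(*) FROM my_db.my_table;  -- hypothetical table

SET MEM_LIMIT=0;                      -- 0 removes the cap; pool defaults apply
```

With the cap in place, a query that needs more than 2 GB per node will either spill to disk (if the operators support it) or fail with a memory-limit error, which makes it a convenient way to test memory behaviour without touching production pools.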
01-03-2022
04:37 AM
Hi,

- Have you tried the same query via Hive? Does it work?
- How was this table created, via Hive or Impala?
- Try "invalidate metadata" in Impala and retry.

Regards, Chethan YM
11-04-2021
02:28 AM
Hello @ighack,

You are welcome! Was your question answered? If so, please mark the answer as the accepted solution. If you find a reply useful, say thanks by clicking the thumbs-up button.

Regards, Chethan YM