Member since
03-06-2020
398
Posts
54
Kudos Received
35
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 136 | 11-21-2024 10:12 PM |
| | 996 | 07-23-2024 10:52 PM |
| | 1138 | 05-16-2024 12:27 AM |
| | 3237 | 05-01-2024 04:50 AM |
| | 1408 | 03-19-2024 09:23 AM |
03-23-2022
05:26 AM
1 Kudo
Hi @PratCloudDev,

"Spill to disk" happens when there is not enough memory available for a running query. For example, suppose a query is using 10 GB of memory (per-node peak memory) but actually needs 12 GB; in that situation the excess is spilled to the configured scratch directories. You can find these by searching for the "Impala Daemon Scratch Directories" property in the Impala configuration. If you do not want the query to fail, make sure the configured scratch directories/disks have enough space to hold the spilled data; this can potentially be a large amount. Check the query profile for "per node peak memory": that is the actual memory the query used on each daemon. If it is, say, 15 GB, set MEM_LIMIT to 10 GB or 12 GB to see the spill-to-disk behaviour.

To understand why you are seeing the error [1], I need a few details from your side:
1. A screenshot of the Impala admission control pool settings.
2. How much memory are you setting when you see the error [1]?
3. Which pool are you using to run the query?
4. If possible, provide the query profile.

Regards,
Chethan YM

[1] Rejected query from pool root.centos: minimum memory reservation is greater than memory available to the query for buffer reservations.
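As a minimal sketch of forcing the spill described above from impala-shell (the 10 GB value is just the figure from the example; adjust to sit below your query's actual per-node peak memory):

```sql
-- Cap per-node memory below the query's peak so it must spill to
-- the configured scratch directories instead of failing outright.
SET MEM_LIMIT=10gb;

-- Rerun the query, then inspect its profile for spill-related counters.
```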
02-28-2022
05:25 AM
Hi,

By "previous execution", do you mean a previous workflow execution, or a previous action execution within the same workflow? As far as I know, when a workflow has multiple actions, the next action starts only if the previous action succeeded; otherwise the workflow fails. If it is not working like this, can you share the info output for that workflow ID?

oozie job -oozie http://<oozie-server-host>:11000/oozie -info <workflow-id>

Regards,
Chethan YM
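For reference, that sequencing comes from each action's own transitions in workflow.xml; a minimal sketch, with placeholder action and node names:

```xml
<!-- action-2 runs only when action-1 takes its <ok> transition;
     on failure control jumps to the kill node instead. -->
<action name="action-1">
    <!-- action body (shell, hive, sqoop, ...) goes here -->
    <ok to="action-2"/>
    <error to="kill-node"/>
</action>
```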
02-28-2022
04:50 AM
1 Kudo
Hi,

A similar exception can be seen if the credentials block is missing from the workflow. If it is a Kerberized cluster, check that you have the correct credentials defined, something like the below:

<credentials>
    <credential name="hcatauth_creds" type="hcat">
        <property>
            <name>hcat.metastore.uri</name>
            <value>thrift://<metastore-fqdn>:9083</value>
        </property>
        <property>
            <name>hcat.metastore.principal</name>
            <value>hive/_HOST@<REALM></value>
        </property>
    </credential>
</credentials>

Regards,
Chethan YM
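Note that defining the credentials alone is not enough: the action must also reference them via its cred attribute. A minimal sketch, assuming the credential name above and a placeholder action name:

```xml
<!-- The cred attribute ties this action to the <credential> entry
     named "hcatauth_creds" defined in the <credentials> block. -->
<action name="hive-action" cred="hcatauth_creds">
    <!-- action body goes here -->
    <ok to="end"/>
    <error to="kill-node"/>
</action>
```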
02-16-2022
06:21 AM
Hi,

As far as I have seen, there is no built-in way to rerun the job automatically when it fails; you have to rerun it manually as you always do. If you want this type of behaviour, you may need to create custom scripts as per your requirements.

Regards,
Chethan YM
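As a rough illustration of such a custom script, here is a minimal retry wrapper. The retry count, sleep interval, and the `true` example command are placeholders; in practice you would substitute the appropriate oozie CLI rerun command:

```shell
#!/bin/sh
# retry <max_tries> <sleep_secs> <command...>:
# rerun the command until it succeeds or the attempt budget runs out.
retry() {
    max=$1; pause=$2; shift 2
    n=1
    until "$@"; do
        [ "$n" -ge "$max" ] && return 1
        n=$((n + 1))
        sleep "$pause"
    done
}

# Example with a trivially succeeding command:
retry 3 1 true && echo "command succeeded"
```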
02-16-2022
05:46 AM
Hi @mala_etl,

It looks like the known issue below:

https://issues.cloudera.org/browse/HUE-8717

Please work with Cloudera support to get a patch.

Regards,
Chethan YM
02-16-2022
05:32 AM
Hi,

To look into the issue we need the complete error stack trace; can you please attach it? Also, does this happen every time after an Impala restart?

Regards,
Chethan YM
02-04-2022
05:00 AM
Hi,

Yes, Impala daemons use memory during execution; your understanding is correct. In the attached screenshot I can see corrupted stats for the tables involved in the query. We recommend running "COMPUTE STATS" on the tables that have partial stats and then rerunning the queries; otherwise Impala generates a bad execution plan and uses more memory than expected.

Regards,
Chethan YM
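A minimal sketch of checking and refreshing the stats from impala-shell (the database and table names are placeholders):

```sql
-- Inspect current statistics; a #Rows value of -1 indicates missing stats.
SHOW TABLE STATS my_db.my_table;

-- Recompute table and column statistics, then rerun the affected query.
COMPUTE STATS my_db.my_table;
```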
01-25-2022
04:08 AM
1 Kudo
Hi,

Please try to run the Sqoop import outside of Oozie and confirm whether it works or not. If it works, then we need to check from the Oozie side. We need the complete stack trace of the error.

Regards,
Chethan YM
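For the standalone test, a sketch of a plain Sqoop import from the command line; every value here (JDBC URL, credentials file, table, and target directory) is a placeholder to be replaced with the ones from your workflow:

```shell
sqoop import \
  --connect jdbc:mysql://<db-host>/<database> \
  --username <user> \
  --password-file hdfs:///user/<user>/.password \
  --table <table> \
  --target-dir /tmp/sqoop_test
```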
01-25-2022
02:51 AM
1 Kudo
Hi,

You can restrict the amount of memory Impala reserves during query execution by specifying the -mem_limit option. If you set MEM_LIMIT=2gb, the query will not use more than 2 GB even if it needs more. If you cannot set the memory at execution time every time, you can create a new resource pool under Impala admission control. While creating the resource pool you can set the Minimum and Maximum Query Memory Limit; do not use this resource pool for production queries. Then set REQUEST_POOL="pool-name" and run the test queries.

Regards,
Chethan YM
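For example, from impala-shell (the pool name is a placeholder for whatever test pool you create under admission control):

```sql
-- Cap per-query memory for this session.
SET MEM_LIMIT=2gb;

-- Route this session's queries to the dedicated, non-production pool.
SET REQUEST_POOL='test_pool';

-- Now run the test queries in this session.
```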
01-03-2022
04:37 AM
Hi,

-> Have you tried the same query via Hive? Does it work?
-> How was this table created, via Hive or Impala?
-> Try "INVALIDATE METADATA" in Impala and retry.

Regards,
Chethan YM
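The metadata refresh in the last step can be scoped to the one problem table instead of the whole catalog (table name is a placeholder):

```sql
-- Discard and reload cached metadata for a single table;
-- a bare INVALIDATE METADATA does this for every table, which is costlier.
INVALIDATE METADATA my_db.my_table;
```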