Member since
03-06-2020
406
Posts
56
Kudos Received
37
Solutions
My Accepted Solutions
| Title | Views | Posted |
|---|---|---|
| | 1120 | 08-29-2025 12:27 AM |
| | 1653 | 11-21-2024 10:40 PM |
| | 1557 | 11-21-2024 10:12 PM |
| | 5333 | 07-23-2024 10:52 PM |
| | 3029 | 05-16-2024 12:27 AM |
04-07-2022
01:48 AM
Hi @swapko,
- Please provide the error stack trace you are seeing when you refresh the Impala tables.
- Was it working earlier?
Regards, Chethan YM
04-04-2022
06:48 AM
@hbenner89 ,
In addition to what @ChethanYM wrote above, you should also share the file size that you are attempting to upload.
As a general matter, you can't expect a web browser to upload a file of arbitrarily large size, so the (perhaps unstated) reason the Jira issue you pointed to was resolved as "Won't Fix" is that this limitation is not specific to Hue.
03-23-2022
05:26 AM
1 Kudo
Hi @PratCloudDev,

"Spill to disk" happens when there is not enough memory available for a running query. For example, suppose a query is using 10 GB of memory (per-node peak memory) but actually needs 12 GB; in that situation the excess spills to the configured scratch directories. You can find these directories by searching for the "Impala Daemon Scratch Directories" property in the Impala configuration.

If you do not want the query to fail, make sure the configured scratch directories/disks have enough space to store the spilled data; this can potentially be a large amount. Check the query profile for "Per Node Peak Memory" — it is the actual memory used by that query on each daemon. If it is, say, 15 GB, set MEM_LIMIT to 10 GB or 12 GB to observe the spill-to-disk behaviour.

To understand why you are seeing the error [1], I need a few details from your side:
1. A screenshot of the Impala admission control pool settings.
2. How much memory are you setting when you see the error [1]?
3. Which pool are you using to run the query?
4. If possible, provide the query profile.

Regards, Chethan YM

[1] Rejected query from pool root.centos: minimum memory reservation is greater than memory available to the query for buffer reservations.
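As a minimal sketch of exercising the spill path in impala-shell (the table name `db1.big_table` is hypothetical; the source does not name a table):

```sql
-- Cap per-node memory for this session so a memory-hungry query
-- exceeds the limit and spills to the scratch directories.
SET MEM_LIMIT=10g;

-- Hypothetical query; any large aggregation or join will do.
SELECT col1, COUNT(*) FROM db1.big_table GROUP BY col1;

-- In impala-shell, inspect the last query's profile afterwards:
-- look for "Per Node Peak Memory" and spilled-partition counters.
PROFILE;
```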
03-20-2022
03:00 AM
Yes, before I tried running the sqoop import manually, I restarted the Oozie sqoop service multiple times until I could see in the logs that the Oozie workflow was initialized. The workflows then started working as expected and the job completed.
02-28-2022
07:21 AM
Hello Chetan, It seems to be related to the missing hive-site.xml under the /user/oozie/share/lib/lib../ path. After the Oozie share libs are created, they contain all the jars but not the other files, such as the XML files. Can you confirm this behaviour? Thanks, BR
02-04-2022
05:00 AM
Hi, Yes, Impala daemons will use memory during execution; your understanding is correct. In the attached screenshot I can see corrupted stats for the tables involved in the query. We recommend running "COMPUTE STATS" on the tables that have partial stats and then rerunning the queries; otherwise Impala generates a bad execution plan and uses more memory than expected. Regards, Chethan YM
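A sketch of the recommended fix, assuming a hypothetical table name `db1.table1` (the source does not name the tables):

```sql
-- Recompute table and column statistics for each table in the query.
COMPUTE STATS db1.table1;

-- Verify the result: #Rows should no longer be -1 and the stats
-- should no longer be flagged as corrupt or partial.
SHOW TABLE STATS db1.table1;
SHOW COLUMN STATS db1.table1;
```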
02-03-2022
04:07 AM
Hi, Could you elaborate on how to write the queries with the idle_session_timeout value? I am facing the same error. According to the GitHub issue https://github.com/cloudera/impyla/issues/278, there is no way to set it on the connection; it would need to be put in the query. Thanks.
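As a sketch of what "putting it in the query" could look like: Impala query options can be issued as ordinary SET statements at the start of a session, so (assuming your client keeps subsequent statements on the same session) something like this may work:

```sql
-- Set the query option for this session; the value is in seconds
-- (3600 here is only an example, not a recommended value).
SET IDLE_SESSION_TIMEOUT=3600;

-- Later statements on the same session inherit the option.
-- db1.table1 is a hypothetical table name.
SELECT COUNT(*) FROM db1.table1;
```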
01-03-2022
04:37 AM
Hi,
- Have you tried the same query via Hive? Does it work?
- How was this table created, via Hive or Impala?
- Try "INVALIDATE METADATA" in Impala and retry.
Regards, Chethan YM
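For the last step, a minimal sketch (the table name `db1.table1` is hypothetical):

```sql
-- Discard and reload Impala's cached metadata for the table,
-- picking up schema or partition changes made outside Impala.
INVALIDATE METADATA db1.table1;

-- If only new data files were added to an existing table,
-- the lighter-weight REFRESH is usually sufficient instead.
REFRESH db1.table1;
```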
12-21-2021
09:51 AM
UPDATE: One possible workaround to suppress these quotes in SELECT * output is to create a view like the one below in Impala:

CREATE VIEW db1.view1 AS
SELECT replace(table1.quotedcol1, '"', '') AS quotedcol1,
       replace(table1.quotedcol2, '"', '') AS quotedcol2
FROM db1.table1;
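Reads can then go through the view instead of the base table, so the double quotes are stripped on the fly:

```sql
-- Same column names as the base table, but with the quotes removed.
SELECT quotedcol1, quotedcol2 FROM db1.view1 LIMIT 10;
```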
11-08-2021
05:29 AM
@ighack Did you resolve your issue? If so, please mark the appropriate reply as the solution; it will make it easier for others to find the answer in the future.