Did you use Hue to query Impala? Did the query actually take 4.8 hours to return results, or did you just notice a long span between the start and end times in the query profile?
Hue does not close the query handle after a query finishes, so that users can still retrieve results later on. The time the handle stays open until it is closed counts toward the interval between the query's start and end times in the profile.
It's a bit late in the game, but I'm running into the same problem: the query appears to be running for hours even though the first row is fetched in seconds. It is not actually running, although the list of Impala queries says it is. As a previous poster stated, the user did not close the session. I have just noticed that the last query executed holds the query in a running state. Once another query is executed or the session is closed, the resources are released and the query is marked as finished.
I have another post about this same issue. Neither of these two parameters has helped:
@pollard the documentation is accurate; many people use those flags successfully. I wouldn't want to speculate about what's happening in your case. If you include a query profile, that can help with diagnosis.
We've seen things like this happen when a client is polling the query for status and keeping it alive (the timeout is measured from the last time the client performed an operation on the query or session).
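To make that concrete, here is a toy model of the idle-timeout semantics described above (a sketch, not Impala's actual implementation; the class name and numbers are invented for illustration): any client operation resets the idle clock, so a client that keeps polling for status never lets the timeout fire.

```python
class SessionIdleTracker:
    """Toy model of an idle timeout that resets on client activity."""

    def __init__(self, idle_timeout_s):
        self.idle_timeout_s = idle_timeout_s
        self.last_activity = 0.0

    def record_operation(self, now):
        # Any client RPC (status poll, fetch, etc.) counts as activity.
        self.last_activity = now

    def is_expired(self, now):
        # Measured from the *last* operation, not from query start.
        return (now - self.last_activity) > self.idle_timeout_s


tracker = SessionIdleTracker(idle_timeout_s=600)
tracker.record_operation(now=0)
print(tracker.is_expired(now=700))   # True: idle for 700s > 600s
tracker.record_operation(now=650)    # a status poll resets the clock
print(tracker.is_expired(now=700))   # False: only 50s idle
```

This is why a query can outlive its configured timeout indefinitely as long as something (Hue, a dashboard, a monitoring job) keeps touching the session.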
This would be a good time to mention we are on this version of CM: CDH-5.14.2-1.cdh5.14.2.p0.3
Are you telling me that when people execute a query in Hue and monitor it in CM > Impala > Queries, the query shows as no longer running once the idle timeouts are up?
Execute a query in Hue, such as a count on a table, and monitor the status of the Impala query in CM. Let me know if it actually times out.
I discovered that the last statement executed in Hue will stay there forever, or until you close the session or execute another statement. So neither of these parameters stops the session after the timeout period, as they are documented to do:
Look at the attachment (apparently I can't paste images in, as I exceeded 100,000 characters?).
This is actually how fast the results were returned and the query finished:
First row fetched: 838ms (743ms)
I assume you have a test bed where you can set the setting to a much lower number and see whether the query still shows as running in the Impala query monitoring tool in CM. We are having sporadic resource issues because these resources are not being released. The more people who run a query and don't purposely close out the session, the more resources are consumed.
So, do none of the parameters I mentioned before help at all? Is it all up to these two parameters? Maybe they are not being loaded? I was looking for a way to hit the site and get all current parameter settings. I know Hue does not have an API, but is there a way to see whether it is actually loading those parameters?
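On checking whether the flags are actually loaded: each impalad exposes the flags it was started with on its debug web UI (typically port 25000; the /varz page lists them). Below is a small sketch that scans text copied from that page for the two timeout flags; the port, the `--flag=value` layout, and the sample values are assumptions to verify against your deployment.

```python
def find_flags(varz_text, names):
    """Scan --flag=value lines (as copied from impalad's /varz page)
    for the given flag names; returns {name: value} for those found."""
    found = {}
    for line in varz_text.splitlines():
        line = line.strip()
        for name in names:
            prefix = "--" + name + "="
            if line.startswith(prefix):
                found[name] = line[len(prefix):]
    return found


# Hypothetical sample of what the flag listing might contain:
sample = """\
--idle_session_timeout=600
--idle_query_timeout=300
--mem_limit=80%
"""
print(find_flags(sample, ["idle_session_timeout", "idle_query_timeout"]))
# → {'idle_session_timeout': '600', 'idle_query_timeout': '300'}
```

If a flag is missing from the daemon's flag listing entirely, it was never passed at startup, which would explain the timeouts not taking effect.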
I apologize for missing your request for the query profile. I can send you a new one, as this is easily reproducible. I have already re-executed a query to see if the profile was much different...
Now, I re-executed the same query, a count that returns one row. This query shows in the Impala query monitoring screen as Executing indefinitely. If I close my session or execute another query, it is marked as Finished.
Here's the same one, 45 minutes into waiting on it to close. It will not close unless I do something to make it close. I waited over two hours the last time I tried, and it still said it was Executing:
select count(*) from web_logs
Query Info
  Query ID: 184e512db38f7215:e361d04d00000000
  User: <user>
  Database: default
  Coordinator: <host>
  Query Type: QUERY
  Query State: FINISHED
  Start Time: Aug 16, 2019 1:24:05 PM
  Duration: 47m, 43s
  Rows Produced: 1
  Admission Result: Admitted immediately
  Admission Wait Time: 0ms
  Bytes Streamed: 64 B
  Client Fetch Wait Time: 6.59s
  Client Fetch Wait Time Percentage: 0
  Connected User: hue/<host>@<domain>
  Delegated User: <user>
  Estimated per Node Peak Memory: 52.0 MiB
  File Formats: TEXT/NONE
  HDFS Average Scan Range: 101.1 KiB
  HDFS Bytes Read: 404.5 KiB
  HDFS Scanner Average Read Throughput: 1.4 GiB/s
  Impala Version: impalad version 2.11.0-cdh5.14.2 RELEASE (build ed85dce709da9557aeb28be89e8044947708876c)
  Network Address: <IP>:<Port>
  Node with Peak Memory Usage: <server>:<port>
  Out of Memory: false
  Per Node Peak Memory Usage: 196.4 KiB
  Planning Wait Time: 18ms
  Planning Wait Time Percentage: 0
  Pool: root.<user>
  Query Status: OK
  Session ID: 7744f8b1bd92eb67:c892fbb3fbb81293
  Session Type: HIVESERVER2
  Statistics Corrupt: false
  Statistics Missing: true
  Threads: CPU Time: 75ms
  Threads: CPU Time Percentage: 50
  Threads: Network Receive Wait Time: 32ms
  Threads: Network Receive Wait Time Percentage: 22
  Threads: Network Send Wait Time: 1ms
  Threads: Network Send Wait Time Percentage: 1
  Threads: Storage Wait Time: 40ms
  Threads: Storage Wait Time Percentage: 27
  Threads: Total Time: 150ms
Query Timeline
  Query submitted: 49.51us (49.51us)
  Planning finished: 18ms (18ms)
  Submit for admission: 19ms (733.33us)
  Completed admission: 19ms (134.25us)
  Ready to start on 4 backends: 19ms (521.71us)
  All 4 execution backends (5 fragment instances) started: 22ms (2ms)
  Rows available: 72ms (50ms)
  First row fetched: 6.66s (6.59s)
Planner Timeline
  Analysis finished: 16ms (16ms)
  Value transfer graph computed: 16ms (27.31us)
  Single node plan created: 16ms (385.49us)
  Runtime filters computed: 16ms (54.11us)
  Distributed plan created: 16ms (24.85us)
  Lineage info computed: 16ms (26.54us)
  Planning finished: 17ms (467.37us)
After the IMPALA-1575 fixes (https://issues.apache.org/jira/browse/IMPALA-1575), which are in CDH5.14, resources will be released once the last row is fetched or the query is cancelled. It looks like that isn't happening for some reason here - either the query is just taking a while to compute the count or the client is slow to fetch the results (can't tell from the profile fragment). "Released admission control resources" will show up in the query timeline when the resources are released. After that point it shouldn't hold up other queries getting admitted.
Side-note - there's a monitoring issue here where the query does show as executing until the client closes the query, even though it isn't holding onto significant resources. Hue keeps the queries open in case the user reloads the page and it wants to re-fetch the results. We fixed this in CDH6.2 with https://issues.apache.org/jira/browse/IMPALA-5397.
That profile is confusing me a bit. count(*) only returns one row so I would think that it would return quickly after the first row was fetched (one quirk of the "first row fetched" event is that it tracks when the row was requested, not when it was returned).
The best theory I have, based on the profile fragment, is that the count(*) hasn't actually been returned yet and Hue is blocked waiting to fetch that row, either because it's still being computed or because something is hung. The full profile might help here. But it seems something slightly odd is happening.
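A tiny illustration of the quirk mentioned above (a sketch; the delay is simulated, not a real server call): the timeline event is stamped when the client *requests* the first batch, so any remaining server compute time hides inside the fetch call.

```python
import time

def fetch_first_batch(server_delay_s):
    requested_at = time.monotonic()   # <-- what "first row fetched" records
    time.sleep(server_delay_s)        # server still computing the result
    returned_at = time.monotonic()    # <-- when the row actually arrives
    return requested_at, returned_at

req, ret = fetch_first_batch(0.05)
print(ret - req >= 0.05)  # True: request and return can be far apart
```

So a "first row fetched" of 6.66s only means the client asked at that point; the profile fragment alone can't say when (or whether) the row came back.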
It's hard to believe a count of 1000 records is taking 2.2 hours. So, I closed the session and did not see the mentioned "Released admission control resources" value.
It would be very helpful to know why this did not show up and what needs to be done to release the resources...
BTW, it's not holding up queries from getting admitted as far as I can tell. We ran into a problem where it did not have enough memory to allocate to a query and returned an error. That's what got me started down this road in the first place.
When you say "after the first row was fetched," what constitutes a fetch? When I execute a query, I get results back and I see the records. I assume the fetch is automatic up to a certain number, as it pages the rest of them. To me, when I see records, a fetch has taken place?
This makes zero sense:
"one quirk of the "first row fetched" event is that it tracks when the row was requested, not when it was returned)"
as the very word fetch means it got a result and pulled it back for viewing. Why would the word "fetch" be used in place of the word "requested?"
I get a result back of 1000. There is no more to fetch from the query. That's it. It appears that the fetch occurred, I got my result and I'm completely done with the query. That's when the resources should be released.
It sounds like you're saying the interface in Impala is showing incorrect information and that this has been fixed? This would be the whole problem, correct?
> It's hard to believe a count of 1000 records is taking 2.2 hours. So, I closed the session and did not see the mentioned "Released admission control resources" value.
Yeah, I agree, something is weird here. We've seen symptoms like this when dropped connections in the network layer caused hangs or similar.
> BTW, it's not holding up queries from getting admitted as far as I can tell. We ran into a problem where it did not have enough memory to allocate to a query and returned an error. That's what got me started down this road in the first place.
Thanks for clarifying, that makes sense. Your cluster does sound unhappy; it sounds an awful lot like some fragments of the query have gotten stuck. We've seen this happen because of issues communicating with HDFS (e.g. because of a heavily loaded namenode), and we've also seen hangs in the JVM - https://issues.apache.org/jira/browse/IMPALA-7482.
If it's a JVM issue, we've seen cases where increasing heap sizes helps. Setting ipc.client.rpc-timeout.ms to 60000 (i.e. 60 seconds) under CM > Impala > Configuration > Impala Daemon HDFS Advanced Configuration Snippet (Safety Valve) might help if it's a namenode issue. We've also seen the file handle cache, enabled by default in CDH5.15, help a lot in reducing namenode load; some customers upgraded and saw pretty dramatic improvements from that (and from various other improvements in that release). We've done a lot of work in this space over the last year or two, so I wouldn't be surprised if an upgrade fixed things, even without knowing exactly what you're running into.
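For reference, the suggested safety-valve entry would look something like this as an hdfs-site.xml-style property (the property name and value come straight from the suggestion above; which exact snippet field accepts it is something to confirm in your CM version):

```xml
<property>
  <!-- Give up on a hung HDFS RPC after 60s instead of blocking indefinitely -->
  <name>ipc.client.rpc-timeout.ms</name>
  <value>60000</value>
</property>
```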
> as the very word fetch means it got a result and pulled it back for viewing. Why would the word "fetch" be used in place of the word "requested?"
I agree 100%. I think whoever named it was either overly optimistic and assumed there wouldn't be a significant gap in time, or it was named from the point of view of the code rather than the external system.
> If it's a JVM issue, we've seen in some cases that increasing heap sizes helps. Setting ipc.client.rpc-timeout.ms to 60000
I'd say it's more like a set of overloaded namenodes. And, according to the research I've done so far for another problem, I have a theory that using a more up-to-date GC should improve performance and reduce the number of ZooKeeper failures we have. Our ZK failures are sporadic and happen every two to three days; sometimes more, sometimes less. Moving ZK to separate nodes is not an option at this point, and I'm doing all I can to reduce the number of failures short of moving the service.
I'll check our settings on this and see if we can do one or both. I suspect we have already increased the JVM heap, but I'm not sure.
>We've also seen the file handle cache that got enabled by default in CDH5.15 help a lot in reducing namenode load,
I assume this was available before that version but not enabled by default? I'll look it up and see...
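For what it's worth, the file handle cache is controlled by an impalad startup flag; I believe the flag is `max_cached_file_handles` (0 disables the cache), but treat the name and the value below as assumptions to verify for your exact CDH release:

```
# impalad startup flag (sketch; verify the flag name for your version)
--max_cached_file_handles=20000   # nonzero enables the cache; 0 disables it
```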
> I agree 100%. I think whoever named it was either overly optimistic and assumed there wouldn't be a significant gap in time, or it was named from the point of view of the code rather than the external system
So, my question is: is there an indicator in the Query Details that something was actually returned? I know I get an initial set of results back. Without that "fetch" metric meaning what the word actually says, I don't know what indicates how long it took to get the first set of records back.
Back to the original issue...
The issue appears to be that the last query issued in Hue tends to show as still executing 2.2 hours later, even though it returned the count almost immediately. Obviously, the idle timeout parameters for sessions and queries are not marking the session as closed, so the query appears to still be executing:
Is this causing resource issues because the session is held open and appears to still be executing? I would assume so, as it is waiting on a fetch of subsequent sets of records?
What parameter(s) will close the session from the last query executed?
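On the Hue side specifically, there is reportedly a setting that tells Hue to close query handles instead of holding them open for result re-fetch; I believe it lives in the [impala] section of hue.ini as close_queries, but verify the name and behavior against your Hue version:

```ini
[impala]
  # Close the query handle when the user leaves the editor, instead of
  # keeping it open so results can be re-fetched later.
  close_queries=true
```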
Just to let you know, I've come in late to the game and am still learning CDH and Cloudera Manager. I understand a lot, but with thousands of parameters across all the apps and an ample supply of problems, it'll take a while. 🙂
Thanks for all your help. It is nice to get a response on this forum; the last couple of posts were not very useful. We do have a service contract, and although I am one of the three admins, they are still working on adding me to the support contract so I can put in tickets and get support directly.