
Very slow CodeGen taking 80% of runtime

Rising Star

We recently enabled HDFS caching for two tables to try to speed up a whole class of very similar queries, generally following this pattern:

 

SELECT x, y, z FROM (
  SELECT x, y, z FROM table1 WHERE blah
  UNION ALL
  SELECT x, y, z FROM table2 WHERE blah
) x
ORDER BY x DESC, y DESC
LIMIT 20001 OFFSET 0


... but we didn't get much runtime improvement. Digging in, it looks like 80% of the time is spent on CodeGen: 5.25s, of which CompileTime is 1.67s and OptimizationTime is 3.51s (see the profile fragment below from this sample run).

 

With SET DISABLE_CODEGEN=true the query goes from ~6 seconds to ~1 second, but the docs state this option should not be used in general, so I'm hesitant to add it to actual live production reports and would rather understand the root cause.

 

Both tables are Parquet and fully HDFS-cached. Both are wide-ish, 253 and 126 columns respectively, but the inner queries project only 20 columns to the outer query.

 

CDH 5.13 / Impala 2.10. Happy to send the full profile file by direct mail.

 

Thanks in advance,

 

-mauricio

 


Operator            #Hosts  Avg Time   Max Time   #Rows  Est. #Rows  Peak Mem   Est. Peak Mem  Detail
78:MERGING-EXCHANGE  1      5s307ms    5s307ms    73     101         0          0              UNPARTITIONED
49:TOP-N             30     341.689us  880.634us  73     101         873.00 KB  39.28 KB
00:UNION             30     240.707us  3.190ms    73     1.61K       8.81 MB    0

...


F35:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
| Per-Host Resources: mem-estimate=0B mem-reservation=0B
PLAN-ROOT SINK
| mem-estimate=0B mem-reservation=0B
|
78:MERGING-EXCHANGE [UNPARTITIONED]
| order by: action_date DESC, action_id ASC
| limit: 101
| mem-estimate=0B mem-reservation=0B
| tuple-ids=47 row-size=398B cardinality=101
|
F34:PLAN FRAGMENT [RANDOM] hosts=18 instances=18
Per-Host Resources: mem-estimate=206.48MB mem-reservation=14.44MB
49:TOP-N [LIMIT=101]
| order by: action_date DESC, action_id ASC
| mem-estimate=39.28KB mem-reservation=0B
| tuple-ids=47 row-size=398B cardinality=101
|
00:UNION

 ...

 

 

>>>>> F34 Fragment for a sample node (all very similar):
Hdfs split stats (<volume id>:<# splits>/<split lengths>): 8:1/38.32 MB
Filter 4 arrival: 5s339ms
AverageThreadTokens: 1.00
BloomFilterBytes: 3.0 MiB
InactiveTotalTime: 0ns
PeakMemoryUsage: 27.4 MiB
PeakReservation: 14.4 MiB
PeakUsedReservation: 0 B
PerHostPeakMemUsage: 58.6 MiB
RowsProduced: 1
TotalNetworkReceiveTime: 261.74us
TotalNetworkSendTime: 313.68us
TotalStorageWaitTime: 4.96us
TotalThreadsInvoluntaryContextSwitches: 583
TotalThreadsTotalWallClockTime: 5.37s
TotalThreadsSysTime: 53ms
TotalThreadsUserTime: 5.20s
TotalThreadsVoluntaryContextSwitches: 169
TotalTime: 5.43s
>> Fragment Instance Lifecycle Timings (0ns)
>> DataStreamSender (dst_id=78) (1ms)
>> CodeGen (5.25s)
CodegenTime: 26ms
CompileTime: 1.67s <<<<<<<<<<<<< ????
InactiveTotalTime: 0ns
LoadTime: 0ns
ModuleBitcodeSize: 1.9 MiB
NumFunctions: 729
NumInstructions: 35,078
OptimizationTime: 3.51s <<<<<<<<<<<<< ????
PeakMemoryUsage: 17.1 MiB
PrepareTime: 66ms
TotalTime: 5.25s
>> SORT_NODE (id=49) (94ms)
>> UNION_NODE (id=0) (93ms)
>> HASH_JOIN_NODE (id=48) (9ms)


11 REPLIES

Master Collaborator

Hi @mauricio

I think the solution is simple: you must SET DISABLE_CODEGEN to true! It's also advised by the Cloudera documentation!

source: https://www.cloudera.com/documentation/enterprise/latest/topics/impala_disable_codegen.html

Good luck.

Rising Star
Thanks, right, I know I can do that, but I'm hoping to figure out the root cause rather than paper over it. Plus it makes me nervous to do so for a whole class of queries/reports. That doc page does say: "... Do not otherwise run with this setting turned on, because it results in lower overall performance."


@mauricio I agree it's not great to turn it on globally. I'd be interested in seeing the query profile to understand what happened. We've made some improvements to codegen time, but there are still remaining issues, so it would be good to see whether this is something we've already fixed.

Rising Star

Yeah, we definitely wouldn't want to do that globally.

 

We tried to do

set DISABLE_CODEGEN=true;

right before our SQL in the report, but the driver fails with

[Simba][JDBC](11300) A ResultSet was expected but not generated

which is really sad; I had thought we could specify any of these options right in the SQL. Doing so in the JDBC URL is not an option because the same connection is shared by all of our thousands of reports, and only ~10% of them are affected by this.
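
If it helps anyone hitting the same error: the workaround we're looking at is to send the SET as its own statement on the connection before executing the query, instead of bundling both into one statement string. A rough sketch with plain JDBC (the URL, column names, and query are placeholders, and I haven't verified this against the Simba driver specifically):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SetOptionBeforeQuery {
    public static void main(String[] args) throws Exception {
        // Placeholder URL -- real host/port/auth settings will differ.
        String url = "jdbc:impala://impalad-host:21050";
        try (Connection conn = DriverManager.getConnection(url);
             Statement stmt = conn.createStatement()) {
            // Issue the query option as its own statement. Bundling
            // "SET ...; SELECT ..." into one statement string is what
            // triggers the Simba (11300) error above.
            stmt.execute("SET DISABLE_CODEGEN=true");

            // The option is session-scoped, so the report query can now
            // run normally on the same connection.
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT x, y, z FROM table1 WHERE blah LIMIT 20")) {
                while (rs.next()) {
                    System.out.println(rs.getString("x"));
                }
            }

            // On a shared/pooled connection, reset the option afterwards
            // so other reports keep codegen enabled.
            stmt.execute("SET DISABLE_CODEGEN=false");
        }
    }
}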

 

@Tim Armstrong I tried to guess your Cloudera email and sent you the profile directly. 


@mauricio thanks for the profile. I think you might be better off tweaking DISABLE_CODEGEN_ROWS_THRESHOLD instead of using the big hammer of DISABLE_CODEGEN.

 

The way the option works: codegen is disabled automatically if the planner detects that no point in the query plan processes more than that many rows per backend. The default is 50,000. E.g. if your query scans 100,000 rows split across three backends (33,333 per backend), codegen is disabled automatically.

 

Instead of setting DISABLE_CODEGEN, I'd suggest increasing that threshold first. Based on the profile you sent me, something like 400000 might be sufficient, for that query at least.
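
To make the row-count rule concrete, here's a toy sketch of the decision as just described (a paraphrase for illustration, not Impala's actual planner code; the second pair of node sizes is made up):

public class CodegenThresholdSketch {
    // Paraphrase of the rule above, not Impala's planner code:
    // codegen is disabled when no plan node processes more than
    // `threshold` rows per backend.
    static boolean codegenDisabled(long[] estRowsPerNode, int numBackends, long threshold) {
        for (long totalRows : estRowsPerNode) {
            if (totalRows / numBackends > threshold) {
                return false; // this node is big enough to justify codegen
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // The example above: 100,000 rows across 3 backends (33,333 per
        // backend) is under the default threshold of 50,000, so codegen
        // is disabled automatically.
        System.out.println(codegenDisabled(new long[]{100_000}, 3, 50_000));    // true

        // With the threshold raised to 400,000: 300,000 rows per backend
        // still skips codegen...
        System.out.println(codegenDisabled(new long[]{900_000}, 3, 400_000));   // true
        // ...while 500,000 rows per backend keeps it enabled.
        System.out.println(codegenDisabled(new long[]{1_500_000}, 3, 400_000)); // false
    }
}

In a session you'd then just run SET DISABLE_CODEGEN_ROWS_THRESHOLD=400000; before the query, like any other query option.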

Rising Star

Thanks @Tim Armstrong. Hmm, I can't find that option in the current docs; is it just undocumented? Or do you mean SCAN_NODE_CODEGEN_THRESHOLD? Because there is at least one node (from an often-used dimension that will apply to most queries) where the row estimate is 2.6 million (though after filtering it becomes only a few).

 

And even if all scans are under 400K, or whatever we set it to, will it help here, considering the slow codegen is in a TOP-N step towards the end?

 

 

Operator #Hosts Avg Time Max Time #Rows Est. #Rows Peak Mem Est. Peak Mem Detail
...
03:SCAN HDFS 30 48.332ms 103.898ms 17 2.60M 10.93 MB 192.00 MB irdw_prod.media_dim md 

 


The threshold is actually based on the per-host number of rows, so it's 2.6M / 30 = 86K in the example you provided.

Rising Star
Right! OK, will do that then. Thanks Tim.

Rising Star

FYI @Tim Armstrong: sadly, setting SCAN_NODE_CODEGEN_THRESHOLD to any value had no effect, perhaps because, as I mentioned above, the slow codegen is NOT in a scan node but in a TOP-N towards the end of processing. We are considering setting DISABLE_CODEGEN=true on the URL for this connection alone (specific to user reports), though we'd need to watch carefully to make sure it doesn't make other reports slow.

 

We'll probably also open a case with our EDH support to try to get to the bottom of why it's slow to begin with.