Execution Information
During SQL tuning, you often need to run EXPLAIN ANALYZE or EXPLAIN PERFORMANCE to view the actual execution information of a SQL statement, and compare the actual execution against the optimizer's estimates to provide a basis for optimization. Compared with EXPLAIN ANALYZE, EXPLAIN PERFORMANCE additionally shows the execution information on each DN.
Take the execution result of the following SQL statement in pretty mode as an example:
```sql
-- Create tables and insert data.
gaussdb=# CREATE TABLE t1 (c1 int, c2 int);
gaussdb=# CREATE TABLE t2 (c1 int, c2 int);
gaussdb=# INSERT INTO t1 SELECT generate_series(1,10), generate_series(1,10);
gaussdb=# INSERT INTO t2 SELECT generate_series(1,10), generate_series(1,10);
gaussdb=# ANALYZE t1;
gaussdb=# ANALYZE t2;
```
The output of EXPLAIN PERFORMANCE is as follows:
```
gaussdb=# EXPLAIN PERFORMANCE SELECT sum(t2.c1) FROM t1,t2 WHERE t1.c1 = t2.c2 GROUP BY t1.c2;
 id |                   operation                   |     A-time      | A-rows | E-rows | E-distinct |  Peak Memory   | E-memory | A-width | E-width | E-costs
----+-----------------------------------------------+-----------------+--------+--------+------------+----------------+----------+---------+---------+---------
  1 | ->  Streaming (type: GATHER)                  | 64.022          |     10 |     10 |            | 82KB           |          |         |      16 | 3.31
  2 |    ->  HashAggregate                          | [54.464,58.410] |     10 |     10 |            | [27KB, 29KB]   | 16MB     | [20,20] |      16 | 2.84
  3 |       ->  Streaming(type: REDISTRIBUTE)       | [54.385,58.318] |     10 |     10 |            | [36KB, 36KB]   | 2MB      |         |       8 | 2.76
  4 |          ->  Hash Join (5,7)                  | [0.471,0.906]   |     10 |     10 | 5, 5       | [8KB, 8KB]     | 1MB      |         |       8 | 2.64
  5 |             ->  Streaming(type: REDISTRIBUTE) | [0.053,0.385]   |     10 |     10 |            | [36KB, 36KB]   | 2MB      |         |       8 | 1.46
  6 |                ->  Seq Scan on public.t2      | [0.014,0.021]   |     10 |     10 |            | [13KB, 13KB]   | 1MB      |         |       8 | 1.05
  7 |             ->  Hash                          | [0.055,0.071]   |     10 |     10 |            | [293KB, 293KB] | 16MB     | [24,24] |       8 | 1.05
  8 |                ->  Seq Scan on public.t1      | [0.019,0.024]   |     10 |     10 |            | [13KB, 13KB]   | 1MB      |         |       8 | 1.05
(8 rows)

 Predicate Information (identified by plan id)
-----------------------------------------------
   4 --Hash Join (5,7)
         Hash Cond: (t2.c2 = t1.c1)
(2 rows)

 Memory Information (identified by plan id)
-----------------------------------------------------------------------------------------------
 Coordinator Query Peak Memory:
         Query Peak Memory: 1MB
 DataNode Query Peak Memory
         datanode1 Query Peak Memory: 6MB
         datanode2 Query Peak Memory: 6MB
   1 --Streaming (type: GATHER)
         Peak Memory: 82KB, Estimate Memory: 64MB
   2 --HashAggregate
         datanode1 Peak Memory: 29KB, Estimate Memory: 16MB, Width: 20
         datanode2 Peak Memory: 27KB, Estimate Memory: 16MB, Width: 20
   3 --Streaming(type: REDISTRIBUTE)
         datanode1 Peak Memory: 36KB, Estimate Memory: 2MB
         datanode2 Peak Memory: 36KB, Estimate Memory: 2MB
         datanode1 Stream Network: 1kB, Network Poll Time: 0.000; Data Deserialize Time: 0.000
         datanode2 Stream Network: 1kB, Network Poll Time: 0.000; Data Deserialize Time: 0.000
   4 --Hash Join (5,7)
         datanode1 Peak Memory: 8KB, Estimate Memory: 1024KB
         datanode2 Peak Memory: 8KB, Estimate Memory: 1024KB
   5 --Streaming(type: REDISTRIBUTE)
         datanode1 Peak Memory: 36KB, Estimate Memory: 2MB
         datanode2 Peak Memory: 36KB, Estimate Memory: 2MB
         datanode1 Stream Network: 1kB, Network Poll Time: 0.000; Data Deserialize Time: 0.000
         datanode2 Stream Network: 1kB, Network Poll Time: 0.000; Data Deserialize Time: 0.000
   6 --Seq Scan on public.t2
         datanode1 Peak Memory: 13KB, Estimate Memory: 1024KB
         datanode2 Peak Memory: 13KB, Estimate Memory: 1024KB
   7 --Hash
         datanode1 Peak Memory: 293KB, Estimate Memory: 16MB, Width: 24
         datanode2 Peak Memory: 293KB, Estimate Memory: 16MB, Width: 24
         datanode1 Buckets: 32768  Batches: 1  Memory Usage: 1kB
         datanode2 Buckets: 32768  Batches: 1  Memory Usage: 1kB
   8 --Seq Scan on public.t1
         datanode1 Peak Memory: 13KB, Estimate Memory: 1024KB
         datanode2 Peak Memory: 13KB, Estimate Memory: 1024KB
(34 rows)

 Targetlist Information (identified by plan id)
------------------------------------------------
   1 --Streaming (type: GATHER)
         Output: (sum(t2.c1)), t1.c2
         Node/s: All datanodes
   2 --HashAggregate
         Output: sum(t2.c1), t1.c2
         Group By Key: t1.c2
   3 --Streaming(type: REDISTRIBUTE)
         Output: t1.c2, t2.c1
         Distribute Key: t1.c2
         Spawn on: All datanodes
         Consumer Nodes: All datanodes
   4 --Hash Join (5,7)
         Output: t1.c2, t2.c1
   5 --Streaming(type: REDISTRIBUTE)
         Output: t2.c1, t2.c2
         Distribute Key: t2.c2
         Spawn on: All datanodes
         Consumer Nodes: All datanodes
   6 --Seq Scan on public.t2
         Output: t2.c1, t2.c2
         Distribute Key: t2.c1
   7 --Hash
         Output: t1.c2, t1.c1
   8 --Seq Scan on public.t1
         Output: t1.c2, t1.c1
         Distribute Key: t1.c1
(26 rows)

 Datanode Information (identified by plan id)
-----------------------------------------------------------------------------------------
   1 --Streaming (type: GATHER)
         (actual time=57.765..64.022 rows=10 loops=1)
         (Buffers: 0)
         (CPU: ex c/r=16607284, ex row=10, ex cyc=166072848, inc cyc=166072848)
   2 --HashAggregate
         datanode1 (actual time=54.460..54.464 rows=6 loops=1)
         datanode2 (actual time=58.407..58.410 rows=4 loops=1)
         datanode1 (Buffers: 0)
         datanode2 (Buffers: 0)
         datanode1 (CPU: ex c/r=34274, ex row=6, ex cyc=205644, inc cyc=141279464)
         datanode2 (CPU: ex c/r=59030, ex row=4, ex cyc=236120, inc cyc=151512902)
   3 --Streaming(type: REDISTRIBUTE)
         datanode1 (actual time=19.021..54.385 rows=6 loops=1)
         datanode2 (actual time=14.622..58.318 rows=4 loops=1)
         datanode1 (Buffers: 0)
         datanode2 (Buffers: 0)
         datanode1 (CPU: ex c/r=23512303, ex row=6, ex cyc=141073820, inc cyc=141073820)
         datanode2 (CPU: ex c/r=37819195, ex row=4, ex cyc=151276782, inc cyc=151276782)
   4 --Hash Join (5,7)
         datanode1 (actual time=0.451..0.471 rows=6 loops=1)
         datanode2 (actual time=0.869..0.906 rows=4 loops=1)
         datanode1 (Buffers: shared hit=1)
         datanode2 (Buffers: shared hit=1)
         datanode1 (CPU: ex c/r=78319, ex row=12, ex cyc=939828, inc cyc=1220560)
         datanode2 (CPU: ex c/r=145606, ex row=8, ex cyc=1164852, inc cyc=2347916)
   5 --Streaming(type: REDISTRIBUTE)
         datanode1 (actual time=0.046..0.053 rows=6 loops=1)
         datanode2 (actual time=0.358..0.385 rows=4 loops=1)
         datanode1 (Buffers: 0)
         datanode2 (Buffers: 0)
         datanode1 (CPU: ex c/r=22796, ex row=6, ex cyc=136780, inc cyc=136780)
         datanode2 (CPU: ex c/r=249361, ex row=4, ex cyc=997444, inc cyc=997444)
   6 --Seq Scan on public.t2
         datanode1 (actual time=0.016..0.021 rows=6 loops=1)
         datanode2 (actual time=0.010..0.014 rows=4 loops=1)
         datanode1 (Buffers: shared hit=1)
         datanode2 (Buffers: shared hit=1)
         datanode1 (CPU: ex c/r=8410, ex row=6, ex cyc=50460, inc cyc=50460)
         datanode2 (CPU: ex c/r=8769, ex row=4, ex cyc=35076, inc cyc=35076)
   7 --Hash
         datanode1 (actual time=0.055..0.055 rows=6 loops=1)
         datanode2 (actual time=0.071..0.071 rows=4 loops=1)
         datanode1 (Buffers: shared hit=1)
         datanode2 (Buffers: shared hit=1)
         datanode1 (CPU: ex c/r=16183, ex row=6, ex cyc=97100, inc cyc=143952)
         datanode2 (CPU: ex c/r=32248, ex row=4, ex cyc=128992, inc cyc=185620)
   8 --Seq Scan on public.t1
         datanode1 (actual time=0.014..0.019 rows=6 loops=1)
         datanode2 (actual time=0.018..0.024 rows=4 loops=1)
         datanode1 (Buffers: shared hit=1)
         datanode2 (Buffers: shared hit=1)
         datanode1 (CPU: ex c/r=7808, ex row=6, ex cyc=46852, inc cyc=46852)
         datanode2 (CPU: ex c/r=14157, ex row=4, ex cyc=56628, inc cyc=56628)
(53 rows)

 User Define Profiling
---------------------------------------------------------------------------
 Segment Id: 2  Track name: Datanode build connection
        datanode1 (time=0.117 total_calls=1 loops=1)
        datanode2 (time=0.161 total_calls=1 loops=1)
 Plan Node id: 1  Track name: coordinator get datanode connection
        coordinator1: (time=0.023 total_calls=1 loops=1)
 Plan Node id: 1  Track name: Coordinator check and update node definition
        coordinator1: (time=0.001 total_calls=1 loops=1)
 Plan Node id: 1  Track name: Coordinator serialize plan
        coordinator1: (time=1.289 total_calls=1 loops=1)
 Plan Node id: 1  Track name: Coordinator send query id with sync
        coordinator1: (time=0.091 total_calls=1 loops=1)
 Plan Node id: 1  Track name: Coordinator send begin command
        coordinator1: (time=0.002 total_calls=1 loops=1)
 Plan Node id: 1  Track name: Coordinator start transaction and send query
        coordinator1: (time=0.044 total_calls=1 loops=1)
 Plan Node id: 2  Track name: Datanode start up stream thread
        datanode1 (time=0.172 total_calls=1 loops=1)
        datanode2 (time=0.173 total_calls=1 loops=1)
(18 rows)

 ====== Query Summary =====
--------------------------------------------------------------------------
 Datanode executor start time [datanode1, datanode2]: [0.451 ms,0.475 ms]
 Datanode executor run time [datanode1, datanode2]: [54.512 ms,58.450 ms]
 Datanode executor end time [datanode2, datanode1]: [0.098 ms,0.105 ms]
 Remote query poll time: 0.000 ms, Deserialze time: 0.000 ms
 System available mem: 8190361KB
 Query Max mem: 8280473KB
 Query estimated mem: 6464KB
 Coordinator executor start time: 0.285 ms
 Coordinator executor run time: 64.083 ms
 Coordinator executor end time: 0.117 ms
 Total network : 1kB
 Planner runtime: 1.242 ms
 Plan size: 4251 byte
 Query Id: 72620543991481051
 Total runtime: 64.786 ms
(15 rows)
```

```sql
-- Drop the tables.
gaussdb=# DROP TABLE t1,t2;
```
The execution information shown in the example above is divided into the following seven parts:
- The plan is displayed as a table containing the following columns: id, operation, A-time, A-rows, E-rows, E-distinct, Peak Memory, E-memory, A-width, E-width, and E-costs. The plan-related columns (id, operation, and the columns starting with E) have the same meanings as in EXPLAIN output; see the description in the Execution Plan section. The meanings of A-time, A-rows, E-distinct, E-memory, Peak Memory, and A-width are as follows:
- A-time: the time at which the current operator finished execution. The values in brackets are the minimum and maximum completion times across the DNs.
- A-rows: the actual number of tuples output by the current operator.
- E-distinct: the estimated number of distinct values on the two sides of the hash join operator.
- E-memory: the estimated memory to be used by the operator. (This column is not displayed when query_mem is 0.)
- Peak Memory: the peak memory used by this operator during execution.
- A-width: the actual width of each tuple output by the current operator. It is displayed only for memory-intensive operators, including (Vec)HashJoin, (Vec)HashAgg, (Vec)HashSetOp, (Vec)Sort, and (Vec)Materialize. For (Vec)HashJoin, the width computed is that of its right subtree operator and is displayed on the right subtree.
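The main use of these columns is comparing the A- (actual) values against the E- (estimated) values: a large gap between A-rows and E-rows usually points at stale or missing statistics. A minimal sketch on the example tables above (the row counts are illustrative, not guaranteed output):

```sql
-- Hypothetical sketch: make the statistics stale by loading more data
-- without re-running ANALYZE.
gaussdb=# INSERT INTO t1 SELECT generate_series(11,10000), generate_series(11,10000);
-- The optimizer may still estimate from the old statistics, so E-rows
-- can stay far below the A-rows actually produced by the scan on t1.
gaussdb=# EXPLAIN PERFORMANCE SELECT count(*) FROM t1;
-- Re-collect statistics to bring the estimates back in line.
gaussdb=# ANALYZE t1;
```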
- Predicate Information (identified by plan id):
This part mainly shows static information, i.e., information that does not change during plan execution, mainly join conditions and filter conditions.
In this example:
- 4 --Hash Join (5,7): 4 is the id of the operator, and (5,7) indicates that the operators with id 5 and id 7 are joined.
- Hash Cond: (t2.c2 = t1.c1) is the hash join condition.
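Besides join conditions, single-table filter conditions also appear in this part. A hedged sketch on the example tables (the exact Filter line shown in the comment is illustrative):

```sql
-- A WHERE clause that is not a join condition is reported as a filter
-- under the operator that applies it.
gaussdb=# EXPLAIN PERFORMANCE SELECT sum(t2.c1) FROM t1, t2
          WHERE t1.c1 = t2.c2 AND t2.c1 > 5 GROUP BY t1.c2;
-- Predicate Information then contains, in addition to the Hash Cond,
-- a line such as:  Filter: (t2.c1 > 5)  under the Seq Scan on t2.
```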
- Memory Information (identified by plan id):
This part shows the memory usage of those operators in the plan that report their memory usage, mainly the Hash and Sort operators. It includes the operator's peak memory (peak memory), control memory (control memory), estimated memory usage (estimate memory), actual tuple width at execution time (width), number of automatic memory expansions (auto spread num), whether the operator spilled to disk early (early spilled), and spill details: the number of times data was spilled again (spill time(s)), the number of inner/outer partitions spilled (inner/outer partition spill num), the number of spill files (temp file num), and the amount of data spilled, including the minimum and maximum amount spilled per partition (written disk IO [min, max]).
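The spill fields only appear when an operator actually exceeds its memory budget; in the example above every Hash reports Batches: 1, i.e., no spill. A hedged sketch, assuming the work_mem setting is available to shrink the per-operator budget:

```sql
-- Lowering the per-operator memory budget can force a hash to spill;
-- Memory Information would then report Batches > 1 together with the
-- spill fields (temp file num, written disk IO, and so on).
gaussdb=# SET work_mem = '64kB';
gaussdb=# EXPLAIN PERFORMANCE SELECT sum(t2.c1) FROM t1, t2
          WHERE t1.c1 = t2.c2 GROUP BY t1.c2;
gaussdb=# RESET work_mem;
```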
- Targetlist Information (identified by plan id):
This part shows the target columns output by each operator and, for Stream operators, distribution information such as the distribution key, the nodes the operator is spawned on, and its consumer nodes.
- DataNode Information (identified by plan id):
This part shows the execution information of each operator on each DN, including the actual execution time, buffer usage, and CPU usage.
- User Define Profiling:
This part shows custom profiling tracks, mainly the time spent on setting up connections and stream threads between the CN and the DNs during execution.
- ====== Query Summary =====:
This part mainly shows the total execution time and network traffic, including the minimum and maximum executor start and end times, as well as information such as the system memory available and the estimated memory of the statement at execution time.