Description
As described in Overview, EXPLAIN displays the execution plan but does not actually run the SQL statement. EXPLAIN ANALYZE and EXPLAIN PERFORMANCE both actually run the SQL statement and return its execution information. This section describes the execution plan and execution information in detail.
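For quick reference, the three forms differ only in the keyword after EXPLAIN. The statements below are a minimal sketch, assuming a table t1 with a column c1 exists:
gaussdb=# EXPLAIN SELECT * FROM t1 WHERE c1 = 1;             -- shows the plan only; the statement is not run
gaussdb=# EXPLAIN ANALYZE SELECT * FROM t1 WHERE c1 = 1;     -- runs the statement and shows actual run-time statistics
gaussdb=# EXPLAIN PERFORMANCE SELECT * FROM t1 WHERE c1 = 1; -- runs the statement and shows the full execution information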
Execution Plans
The following SQL statement is used as an example:
SELECT * FROM t1, t2 WHERE t1.c1 = t2.c2;
gaussdb=# EXPLAIN SELECT * FROM t1,t2 WHERE t1.c1 = t2.c2;
QUERY PLAN
-------------------------------------------------------------------
Hash Join (cost=23.73..341.30 rows=16217 width=180)
Hash Cond: (t1.c1 = t2.c2)
-> Seq Scan on t1 (cost=0.00..122.17 rows=5317 width=76)
-> Hash (cost=16.10..16.10 rows=610 width=104)
-> Seq Scan on t2 (cost=0.00..16.10 rows=610 width=104)
(5 rows)
Interpretation of the execution plan layers (read from the bottom up):
- Layer 1: Seq Scan on t2
The table scan operator scans the table t2 using Seq Scan. At this layer, data in the table t2 is read from a buffer or disk, and then transferred to the upper-layer node for calculation.
- Layer 2: Hash
Hash operator. It computes hash values for the rows passed up from the lower-layer operator and builds the hash table used by the subsequent hash join.
- Layer 3: Seq Scan on t1
The table scan operator scans the table t1 using Seq Scan. At this layer, data in the table t1 is read from a buffer or disk, and then transferred to the upper-layer node for hash join calculation.
- Layer 4: Hash Join
Join operator. It joins the data from t1 and t2 using the hash join method and outputs the result rows.
Keywords in the execution plan:
- Table access modes
- Seq Scan
- Index Scan
The optimizer uses a two-step plan: the child plan node visits an index to find the locations of rows matching the index condition, and then the upper plan node actually fetches those rows from the table itself. Fetching rows individually is much more expensive than reading them sequentially, but because not all pages of the table have to be visited, this is still cheaper than a sequential scan. The upper plan node sorts the index-identified rows by physical location before fetching them, which minimizes the overhead of the separate fetches. (See the sketch after this list of access modes.)
If there are separate indexes on multiple columns referenced in WHERE, the optimizer might choose to use an AND or OR combination of the indexes. However, this requires visiting both indexes, so it is not necessarily faster than using just one index and treating the other condition as a filter.
The following index scan types, which differ in how they order row fetches, may appear:
- Bitmap index scan
- Index scan using index_name
Uses a simple index lookup that fetches data from the table in index-key order. This mode is commonly used when only a small amount of data needs to be fetched from a large table, or when an ORDER BY clause matches the index order so that sorting can be avoided.
- Index-Only Scan
Scans the index that contains all required data, instead of referencing a table.
- Bitmap Heap Scan
Reads pages from a bitmap created by other operations and filters out rows that do not meet the conditions. A bitmap heap scan avoids random I/O and therefore accelerates reads.
- TID Scan
- Index Ctid Scan
- CTE Scan
A common table expression (CTE) evaluates its subquery and stores the result as a temporary table, which is then scanned by the CTE Scan operator.
- Foreign Scan
- Function Scan
Obtains the result set returned by a function and returns it as if the rows had been read from a table.
- Sample Scan
- Subquery Scan
- Values Scan
- WorkTable Scan
Scans a work table, that is, intermediate data produced during an operation, usually a recursive operation declared using WITH RECURSIVE.
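The access mode chosen for the same table can change once a suitable index exists. The following is a sketch rather than part of the preceding example; the index name i_t1_c1 is an assumption, and the actual plan depends on table size and statistics:
gaussdb=# CREATE INDEX i_t1_c1 ON t1(c1);
gaussdb=# EXPLAIN SELECT * FROM t1 WHERE c1 = 100;
With a selective condition, the plan may show Index Scan using i_t1_c1 on t1 (or a Bitmap Index Scan plus Bitmap Heap Scan) instead of Seq Scan on t1.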
- Table connection modes
- Nested Loop
A nested loop join is used for queries that join relatively small data sets. The outer (driving) table drives the inner table: each row returned from the outer table is probed against the inner table for matching rows. This method works best when the result sets involved contain fewer than 10,000 rows. The table that returns the smaller set should act as the outer table, and indexes are recommended on the join columns of the inner table.
- (Sonic) Hash Join
A hash join is used for joining large tables: rows of one table are loaded into an in-memory hash table, after which the other table is scanned and the hash table is probed for a match for each row. Sonic and non-Sonic hash joins differ only in their hash table structures; the execution result set is the same.
- Merge Join
In most cases, a merge join performs worse than a hash join. However, if the source data is already sorted and no extra sort is needed during the merge join, it can perform better. (See the sketch after this list of join methods.)
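To see how the chosen join method affects the plan, individual join methods can be disabled for comparison in the current session. This is a sketch for experimentation only; it assumes the enable_hashjoin, enable_mergejoin, and enable_nestloop parameters are available, as in PostgreSQL-derived databases, and such settings should not be left changed in production:
gaussdb=# SET enable_hashjoin = off;
gaussdb=# EXPLAIN SELECT * FROM t1, t2 WHERE t1.c1 = t2.c2;  -- the plan falls back to a Nested Loop or Merge Join
gaussdb=# RESET enable_hashjoin;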
- Operators
- sort
- filter
The EXPLAIN output shows the WHERE clause applied as a filter condition attached to a plan node such as Seq Scan. The plan node checks the condition for each row it scans and returns only the rows that satisfy it. The WHERE clause reduces the estimated number of output rows, but the scan still has to visit every row, so the cost does not decrease; it actually increases slightly (by the number of scanned rows x cpu_operator_cost) to reflect the extra CPU time spent checking the WHERE condition.
- LIMIT
Limits the number of rows returned. When a LIMIT clause is present, not all rows need to be retrieved.
- Append
- Aggregate
Aggregates the rows produced by the query. It is generated for constructs such as GROUP BY, UNION, and SELECT DISTINCT. (A sketch combining several of these operators follows this list.)
- BitmapAnd
Specifies the AND operation of a bitmap, which is used to form a bitmap that matches more complex conditions.
- BitmapOr
Specifies the OR operation of a bitmap, which is used to form a bitmap that matches more complex conditions.
- Gather
- Group
- GroupAggregate
- Hash
Hashes rows for its parent node. It is usually used to build the hash table for a hash join.
- HashAggregate
Aggregates the result rows of GROUP BY by using a hash table.
- Merge Append
Merges subquery results in a way that preserves the sort order. It can be used to merge sorted rows in a table partition.
- ProjectSet
- Recursive Union
Performs a union of all steps of a recursive query.
- SetOp
- Unique
- HashSetOp
Specifies a strategy for set operations such as INTERSECT or EXCEPT. It uses Append to avoid pre-sorted input.
- LockRows
Locks the rows retrieved by the query (for example, with SELECT ... FOR UPDATE) to prevent other transactions from modifying them; reads are still allowed.
- Materialize
Stores the subquery result in memory so that the parent node can access it repeatedly without recomputing it.
- Result
Returns a value (such as a hard-coded value) without scanning.
- WindowAgg
Specifies a window aggregate function, which is generally triggered by the OVER statement.
- Merge
- StartWith Operator
Specifies the hierarchical query operator, which is used to perform recursive query operations.
- Rownum
Filters the query result by row number. It usually appears when ROWNUM is used in the query.
- Index Cond
- Unpivot
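Several of these operators can be observed together in a single plan. The following is a sketch; the exact operators chosen depend on statistics and parameter settings, and a GroupAggregate over sorted input may appear instead of a HashAggregate:
gaussdb=# EXPLAIN SELECT c2, count(*) FROM t1 GROUP BY c2 ORDER BY c2 LIMIT 10;
A typical plan shape is Limit -> Sort -> HashAggregate -> Seq Scan on t1, with Group By Key: c2 shown under the aggregation node and Sort Key: c2 under the Sort node.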
- Partition pruning
- Iterations
Specifies the number of iterations performed by the partition iteration operator on level-1 partitions. If PART is displayed, dynamic pruning is used.
For example, Iterations: 4 indicates that the iteration operator needs to traverse four level-1 partitions. Iterations: PART indicates that the number of level-1 partitions to be traversed is determined by parameter conditions of the partition key.
- Selected Partitions
Specifies the selected level-1 partitions for pruning. m..n indicates that partitions m to n are selected. Multiple inconsecutive partitions are separated by commas (,).
For example, Selected Partitions: 2..4,7 indicates that partitions 2, 3, 4, and 7 are selected.
- Sub Iterations
Specifies the number of iterations performed by the partition iteration operator on level-2 partitions. If PART is displayed, dynamic pruning is used.
For example, Sub Iterations: 4 indicates that the iteration operator needs to traverse four level-2 partitions. Sub Iterations: PART indicates that the number of level-2 partitions to be traversed is determined by parameter conditions of the partition key.
- Selected Subpartitions
Specifies the selected level-2 partitions for pruning, in the format level-1 partition number:level-2 partition number. (A pruning example follows this list.)
For example, Selected Subpartitions: 2:1 3:2 indicates that level-2 partition 1 of the second level-1 partition and level-2 partition 2 of the third level-1 partition are selected. Selected Subpartitions: ALL indicates that all level-2 partitions are selected.
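The pruning fields above can be observed on any partitioned table. The following is a sketch; the table t_part, its range partitioning on column c1, and the partition numbers mentioned below are assumptions:
gaussdb=# EXPLAIN SELECT * FROM t_part WHERE c1 BETWEEN 100 AND 200;
In the plan, a Partition Iterator node may report, for example, Iterations: 2 and Selected Partitions: 2..3, indicating that static pruning restricted the scan to partitions 2 and 3.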
- Other keywords
Execution Information
The execution result of the following SQL statement in pretty mode is used as an example:
gaussdb=# select sum(t2.c1) from t1,t2 where t1.c1=t2.c2 group by t1.c2;
The output of running EXPLAIN PERFORMANCE is as follows:
gaussdb=# explain performance select sum(t2.c1) from t1,t2 where t1.c1=t2.c2 group by t1.c2;
id | operation | A-time | A-rows | E-rows | E-distinct | Peak Memory | A-width | E-width | E-costs
----+------------------------------------+--------+--------+--------+------------+-------------+---------+---------+------------------
1 | -> HashAggregate | 0.574 | 0 | 200 | | 29KB | | 8 | 396.113..398.113
2 | -> Hash Join (3,4) | 0.358 | 0 | 18915 | 200, 200 | 12KB | | 8 | 53.763..301.538
3 | -> Seq Scan on public.t1 | 0.037 | 1 | 1945 | | 22KB | | 8 | 0.000..29.450
4 | -> Hash | 0.038 | 0 | 1945 | | 264KB | | 8 | 29.450..29.450
5 | -> Seq Scan on public.t2 | 0.029 | 30 | 1945 | | 22KB | | 8 | 0.000..29.450
(5 rows)
Predicate Information (identified by plan id)
-----------------------------------------------
2 --Hash Join (3,4)
Hash Cond: (t1.c1 = t2.c2)
(2 rows)
Memory Information (identified by plan id)
--------------------------------------------------
1 --HashAggregate
Peak Memory: 29KB, Estimate Memory: 64MB
2 --Hash Join (3,4)
Peak Memory: 12KB, Estimate Memory: 64MB
3 --Seq Scan on public.t1
Peak Memory: 22KB, Estimate Memory: 64MB
4 --Hash
Peak Memory: 264KB
Buckets: 32768 Batches: 1 Memory Usage: 0kB
5 --Seq Scan on public.t2
Peak Memory: 22KB, Estimate Memory: 64MB
(11 rows)
Targetlist Information (identified by plan id)
------------------------------------------------
1 --HashAggregate
Output: sum(t2.c1), t1.c2
Group By Key: t1.c2
2 --Hash Join (3,4)
Output: t1.c2, t2.c1
3 --Seq Scan on public.t1
Output: t1.c1, t1.c2, t1.c3
4 --Hash
Output: t2.c1, t2.c2
5 --Seq Scan on public.t2
Output: t2.c1, t2.c2
(11 rows)
Datanode Information (identified by plan id)
----------------------------------------------------------------------------------------------------------
1 --HashAggregate
(actual time=0.574..0.574 rows=0 loops=1)
(Buffers: shared hit=2)
(CPU: ex c/r=0, ex row=0, ex cyc=527797, inc cyc=8385141377087373)
2 --Hash Join (3,4)
(actual time=0.358..0.358 rows=0 loops=1)
(Buffers: shared hit=2)
(CPU: ex c/r=-8385141375712241, ex row=1, ex cyc=-8385141375712241, inc cyc=8385141376559576)
3 --Seq Scan on public.t1
(actual time=0.037..0.037 rows=1 loops=1)
(Buffers: shared hit=1)
(CPU: ex c/r=8385141375728512, ex row=1, ex cyc=8385141375728512, inc cyc=8385141375728512)
4 --Hash
(actual time=0.038..0.038 rows=0 loops=1)
(Buffers: shared hit=1)
(CPU: ex c/r=0, ex row=0, ex cyc=-251554241295571040, inc cyc=8385141376543305)
5 --Seq Scan on public.t2
(actual time=0.019..0.029 rows=30 loops=1)
(Buffers: shared hit=1)
(CPU: ex c/r=8664646089070478, ex row=30, ex cyc=259939382672114336, inc cyc=259939382672114336)
(20 rows)
====== Query Summary =====
----------------------------------------
Datanode executor start time: 0.180 ms
Datanode executor run time: 0.590 ms
Datanode executor end time: 0.051 ms
Planner runtime: 0.366 ms
Query Id: 844424930141239
Total runtime: 0.866 ms
(6 rows)
In the preceding example, the execution information consists of the following parts:
- The plan is displayed as a table, which contains 11 columns: id, operation, A-time, A-rows, E-rows, E-distinct, Peak Memory, E-memory, A-width, E-width, and E-costs. The meanings of the plan-related columns (id, operation, and the columns starting with E) are the same as when EXPLAIN is executed. For details, see Execution Plans. The meanings of A-time, A-rows, E-distinct, Peak Memory, and A-width are as follows:
- A-time: time taken by the operator to complete execution.
- A-rows: actual number of tuples output by the operator.
- E-distinct: estimated distinct values for the Hash Join operator.
- Peak Memory: peak memory used by the operator during execution.
- A-width: actual tuple width of each row processed by the operator. This field is valid only for memory-intensive operators, including (Vec)HashJoin, (Vec)HashAgg, (Vec)HashSetOp, (Vec)Sort, and (Vec)Materialize. For (Vec)HashJoin, the width is that of its right subtree operator and is displayed on the right subtree.
- Predicate Information (identified by plan id):
This part displays the static information that does not change in the plan execution process, such as some join conditions and filter information.
- Memory Information (identified by plan id):
This part displays the memory usage information printed by certain operators (mainly Hash and Sort), including peak memory, control memory, operator memory, width, auto spread num, and early spilled; and spill details, including spill Time(s), inner/outer partition spill num, temp file num, spilled data volume, and written disk IO [min, max].
- Targetlist Information (identified by plan id):
This part displays the target columns provided by each operator.
- Datanode Information (identified by plan id):
This part displays the execution time, CPU, and buffer usage of each operator.
- ====== Query Summary =====:
This part displays the total execution time and network traffic, including the maximum and minimum execution time in the initialization and end phases, available system memory when the current statement is executed, and estimated statement memory.
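To reproduce the pretty-format output shown above, the EXPLAIN display format can be switched before running the statement. This is a sketch; it assumes the session parameter explain_perf_mode, which controls the EXPLAIN display format, is available:
gaussdb=# SET explain_perf_mode = pretty;
gaussdb=# EXPLAIN PERFORMANCE SELECT sum(t2.c1) FROM t1, t2 WHERE t1.c1 = t2.c2 GROUP BY t1.c2;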