Description

Updated on 2024-10-14 GMT+08:00

As described in Overview, EXPLAIN displays the execution plan but does not actually run the SQL statement. EXPLAIN ANALYZE and EXPLAIN PERFORMANCE both actually run the SQL statement and return its execution information. This section describes the execution plan and execution information in detail.

Execution Plans

The following SQL statement is used as an example:

select
    cjxh,
    count(1)
from dwcjk
group by cjxh;

Run the EXPLAIN command and the output is as follows:
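
The exact figures depend on the data, statistics, and cluster configuration; the plan below is an illustrative example for this query, with hypothetical row counts, widths, and costs:

 id |                operation                | E-rows | E-memory | E-width | E-costs
----+-----------------------------------------+--------+----------+---------+---------
  1 | ->  Row Adapter                         |     20 |          |      12 |  231.00
  2 |    ->  Vector Streaming (type: GATHER)  |     20 |          |      12 |  231.00
  3 |       ->  Vector Hash Aggregate         |     20 | 16MB     |      12 |  230.20
  4 |          ->  CStore Scan on dwcjk       |  20000 | 1MB      |       4 |  180.00
(4 rows)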

Interpretation of the execution plan columns (horizontal):

  • id: ID of a node corresponding to each execution operator
  • operation: name of an execution operator

    An operator prefixed with Vector is a vectorized executor operator, usually used in a query containing a column-store table.

    Streaming is a special operator that implements the core data shuffle capability of the distributed architecture. It has three types, each corresponding to a different data shuffle function:
    • Streaming (type: GATHER): The CN collects data from DNs.
    • Streaming (type: REDISTRIBUTE): Data is redistributed to all the DNs based on selected columns.
    • Streaming (type: BROADCAST): Data on the current DN is broadcast to other DNs.
  • E-rows: number of output rows estimated by each operator
  • E-memory: estimated memory used by each operator on a DN. Only operators executed on DNs are displayed. In certain scenarios, the memory upper limit enclosed in parentheses will be displayed following the estimated memory usage.
  • E-width: estimated width of an output tuple of each operator
  • E-costs: execution cost estimated by each operator
    • E-costs is measured by the optimizer in cost units. By convention, fetching one disk page is defined as one unit, and the other cost parameters are set relative to it (see the example after this list).
    • The overhead of each node (specified by E-costs) includes the overheads of all its child nodes.
    • The cost reflects only what the optimizer is concerned about. In particular, it does not include the time needed to transfer result rows to the client; although that time can account for a large share of the actual total time, the optimizer ignores it because it cannot be changed by modifying the plan.
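
The cost parameters referred to above are the standard planner settings; the names and default values shown below (for example, seq_page_cost and cpu_operator_cost) are assumed to be available in your environment and may differ by version. They can be checked in a session:

openGauss=# show seq_page_cost;
 seq_page_cost
---------------
 1
(1 row)

openGauss=# show cpu_operator_cost;
 cpu_operator_cost
-------------------
 0.0025
(1 row)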

Interpretation of the execution plan layers (vertical):

  1. Layer 1: CStore Scan on dwcjk

    The table scan operator scans the table dwcjk using CStore Scan. At this layer, data in the table dwcjk is read from a buffer or disk, and then transferred to the upper-layer node for calculation.

  2. Layer 2: Vector Hash Aggregate

    Aggregation operator. It is used to perform aggregation (GROUP BY) on the data transferred from the lower layer.

  3. Layer 3: Vector Streaming (type: GATHER)

    GATHER-type Shuffle operator. It is used to aggregate data from DNs to the CN.

  4. Layer 4: Row Adapter

    Storage format conversion operator. It is used to convert memory data from column storage to row storage for client display.

If the operator in the top layer is Data Node Scan, set enable_fast_query_shipping to off to view the detailed execution plan. The following is an example plan.

openGauss=# explain select cjxh, count(1) from dwcjk group by cjxh;        
                    QUERY PLAN                    
--------------------------------------------------
 Data Node Scan  (cost=0.00..0.00 rows=0 width=0)
   Node/s: All datanodes
(2 rows)

After enable_fast_query_shipping is set to off, the detailed execution plan is displayed as follows:
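
The costs and row estimates in this example are illustrative and will differ depending on the data:

openGauss=# set enable_fast_query_shipping = off;
SET
openGauss=# explain select cjxh, count(1) from dwcjk group by cjxh;
                                  QUERY PLAN
-------------------------------------------------------------------------------
 Row Adapter  (cost=231.00..231.00 rows=20 width=12)
   ->  Vector Streaming (type: GATHER)  (cost=230.20..231.00 rows=20 width=12)
         Node/s: All datanodes
         ->  Vector Hash Aggregate  (cost=229.80..230.20 rows=20 width=12)
               ->  CStore Scan on dwcjk  (cost=0.00..180.00 rows=20000 width=4)
(5 rows)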

Keywords in the execution plan:

  1. Table access modes
    • Seq Scan

      Scans all rows of the table in sequence.

    • Index Scan

      The optimizer uses a two-step plan: the child plan node visits an index to find the locations of rows matching the index condition, and then the upper plan node actually fetches those rows from the table itself. Fetching rows separately is much more expensive than reading them sequentially, but because not all pages of the table have to be visited, this is still cheaper than a sequential scan. The upper plan node sorts the index-identified row locations into physical order before fetching them, which minimizes the overhead of the separate fetches.

      If there are separate indexes on multiple columns referenced in WHERE, the optimizer might choose to use an AND or OR combination of the indexes. However, this requires the visiting of both indexes, so it is not necessarily a win compared to using just one index and treating the other condition as a filter.

      The following index scan variants differ in how rows are located and fetched:

      • Bitmap index scan

        Fetches data pages using a bitmap.

      • Index scan using index_name

        Uses a simple index scan, which fetches data in the order of the index keys. This mode is commonly used when only a small amount of data needs to be fetched from a large table, or when an ORDER BY clause matches the index order and can therefore reduce the sorting time.

  2. Table connection modes
    • Nested Loop

      A nested loop join is used for queries that join small data sets. In a nested loop join, the outer table drives the inner table, and each row returned from the outer table is matched against the rows of the inner table. The result sets involved should contain fewer than 10,000 rows. The table that returns the smaller result set works as the outer table, and indexes are recommended on the join columns of the inner table.

    • (Sonic) Hash Join

      A hash join is used for large tables. The optimizer uses a hash join, in which rows of one table are entered into an in-memory hash table, after which the other table is scanned and the hash table is probed for matches to each row. Sonic and non-Sonic hash joins differ in their hash table structures, which do not affect the execution result set.

    • Merge Join

      In most cases, the execution performance of a merge join is lower than that of a hash join. However, if the source data has been pre-sorted and no more sorting is needed during the merge join, its performance excels.

  3. Operators
    • sort

      Sorts the result set.

    • filter

      The EXPLAIN output shows the WHERE clause being applied as a Filter condition attached to the Seq Scan plan node. This means that the plan node checks the condition for each row it scans and returns only the rows that satisfy it. The estimated number of output rows is reduced because of the WHERE clause, but the scan still has to visit all 10,000 rows, so the cost does not decrease; it actually increases a bit (by 10,000 x cpu_operator_cost) to reflect the extra CPU time spent checking the WHERE condition. See the illustrative example after this list.

    • LIMIT

      Limits the number of output execution results. If a LIMIT condition is added, not all rows are retrieved.
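
For example, for a hypothetical 10,000-row table t1 with an integer column c1 (both the table and all figures here are illustrative), adding a WHERE clause appears as a Filter on the scan node and slightly increases its cost:

openGauss=# explain select * from t1;
                            QUERY PLAN
-------------------------------------------------------------------
 Streaming (type: GATHER)  (cost=0.06..280.00 rows=10000 width=12)
   Node/s: All datanodes
   ->  Seq Scan on t1  (cost=0.00..180.00 rows=10000 width=12)
(3 rows)

openGauss=# explain select * from t1 where c1 = 100;
                          QUERY PLAN
--------------------------------------------------------------
 Streaming (type: GATHER)  (cost=0.06..205.03 rows=1 width=12)
   Node/s: All datanodes
   ->  Seq Scan on t1  (cost=0.00..205.00 rows=1 width=12)
         Filter: (c1 = 100)
(4 rows)

The scan cost rises from 180.00 to 205.00, that is, by 10,000 x cpu_operator_cost with the common default of 0.0025, while the estimated row count drops because of the Filter condition.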

Execution Information

During SQL optimization, you can use EXPLAIN ANALYZE or EXPLAIN PERFORMANCE to check the actual execution information of an SQL statement. Comparing the actual execution with the optimizer's estimates provides a basis for further optimization. EXPLAIN PERFORMANCE provides the execution information on each DN, whereas EXPLAIN ANALYZE does not.

The following SQL statement is used as an example:

select count(1) from tb1;

The output of running EXPLAIN PERFORMANCE is as follows:
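
The full output is too wide to reproduce here; its overall shape is a plan table followed by several labeled sections (which sections appear depends on the statement), roughly as sketched below with the values omitted:

 id | operation | A-time | A-rows | E-rows | E-distinct | Peak Memory | E-memory | A-width | E-width | E-costs
----+-----------+--------+--------+--------+------------+-------------+----------+---------+---------+---------
 ...

 Predicate Information (identified by plan id)
 ...
 Memory Information (identified by plan id)
 ...
 Targetlist Information (identified by plan id)
 ...
 DataNode Information (identified by plan id)
 ...
 User Define Profiling
 ...
 ====== Query Summary =====
 ...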

The execution information can be classified into the following seven aspects.

  1. The plan is displayed as a table with 11 columns: id, operation, A-time, A-rows, E-rows, E-distinct, Peak Memory, E-memory, A-width, E-width, and E-costs. The plan-related columns (id, operation, and the columns starting with E) have the same meaning as in the output of EXPLAIN; for details, see Execution Plans. The columns A-time, A-rows, E-distinct, Peak Memory, and A-width are described as follows:
    • A-time: execution completion time of the operator. Generally, A-time of the operator is two values enclosed with square brackets ([]), indicating the shortest time and longest time for completing the operator on all DNs, respectively.
    • A-rows: number of actual output tuples of the operator
    • E-distinct: estimated distinct value of the Hash Join operator
    • Peak Memory: peak memory of the operator on each DN
    • A-width: actual width of each output tuple of the current operator. This value is displayed only for memory-intensive operators, including (Vec)HashJoin, (Vec)HashAgg, (Vec)HashSetOp, (Vec)Sort, and (Vec)Materialize. For (Vec)HashJoin, the width is calculated from its right subtree operator and is displayed on the right subtree.
  2. Predicate Information (identified by plan id):

    This part displays the static information that does not change in the plan execution process, such as some join conditions and filter information.

  3. Memory Information (identified by plan id):

    This part displays the memory usage information printed by certain operators (mainly Hash and Sort), including peak memory, control memory, operator memory, width, auto spread num, and early spilled; and spill details, including spill Time(s), inner/outer partition spill num, temp file num, spilled data volume, and written disk IO [min, max].

  4. Targetlist Information (identified by plan id):

    This part displays the target columns provided by each operator.

  5. DataNode Information (identified by plan id):

    The execution time, CPU, and buffer usage of each operator are printed in this part.

  6. User Define Profiling:

    This part displays the time spent establishing connections between the CN and DNs, the time spent establishing connections among DNs, and some execution information at the storage layer.

  7. ====== Query Summary =====:

    This part prints the total execution time and network traffic, including the maximum and minimum execution time of the initialization and end phases on each DN, the time spent in the initialization, execution, and end phases on the CN, and the system available memory and estimated statement memory during the execution of the current statement.

NOTICE:
  • The difference between A-rows and E-rows shows the deviation between the optimizer's estimation and the actual execution. Generally, the larger the deviation, the less suitable the plan generated by the optimizer, and the more manual intervention and optimization are required.
  • The larger the difference between the two A-time values, the greater the computing skew of the operator (the difference in execution time across DNs), and the more manual intervention and optimization are required.
  • Max Query Peak Memory is often used to estimate the memory consumed by an SQL statement, and it also serves as an important basis for setting the memory parameter used when the statement runs. Generally, the output of EXPLAIN ANALYZE or EXPLAIN PERFORMANCE is used as the input for further optimization.
