Optimizing Statistics

Updated on 2022-08-16 GMT+08:00

What Is Statistics Optimization

GaussDB(DWS) generates optimal execution plans based on cost estimation. The optimizer estimates the number of data rows and the cost based on statistics collected using ANALYZE, so these statistics are vital for accurate row count and cost estimation. Global statistics are collected using ANALYZE and stored as relpages and reltuples in the pg_class table, and as stadistinct, stanullfrac, stanumbersN, stavaluesN, and histogram_bounds in the pg_statistic table.
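
As a quick orientation, the following snippet is a minimal sketch of how to collect statistics for a table and then inspect what was stored. It reuses the lineitem table from Example 1 below; the catalog columns queried are the ones listed above (querying pg_statistic may require administrator permission, in which case the pg_stats view offers a readable alternative).

-- Collect statistics for one table (VERBOSE prints progress details).
ANALYZE VERBOSE lineitem;

-- Table-level statistics: estimated page and row counts.
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname = 'lineitem';

-- Column-level statistics: distinct-value and NULL-fraction estimates.
SELECT starelid::regclass, staattnum, stadistinct, stanullfrac
FROM pg_statistic
WHERE starelid = 'lineitem'::regclass;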

Example 1: Poor Query Performance Due to the Lack of Statistics

In most cases, the lack of statistics in tables or columns involved in the query greatly affects the query performance.

The table structure is as follows:

CREATE TABLE LINEITEM
(
L_ORDERKEY         BIGINT        NOT NULL
, L_PARTKEY        BIGINT        NOT NULL
, L_SUPPKEY        BIGINT        NOT NULL
, L_LINENUMBER     BIGINT        NOT NULL
, L_QUANTITY       DECIMAL(15,2) NOT NULL
, L_EXTENDEDPRICE  DECIMAL(15,2) NOT NULL
, L_DISCOUNT       DECIMAL(15,2) NOT NULL
, L_TAX            DECIMAL(15,2) NOT NULL
, L_RETURNFLAG     CHAR(1)       NOT NULL
, L_LINESTATUS     CHAR(1)       NOT NULL
, L_SHIPDATE       DATE          NOT NULL
, L_COMMITDATE     DATE          NOT NULL
, L_RECEIPTDATE    DATE          NOT NULL
, L_SHIPINSTRUCT   CHAR(25)      NOT NULL
, L_SHIPMODE       CHAR(10)      NOT NULL
, L_COMMENT        VARCHAR(44)   NOT NULL
) with (orientation = column, COMPRESSION = MIDDLE) distribute by hash(L_ORDERKEY);

CREATE TABLE ORDERS
(
O_ORDERKEY        BIGINT        NOT NULL
, O_CUSTKEY       BIGINT        NOT NULL
, O_ORDERSTATUS   CHAR(1)       NOT NULL
, O_TOTALPRICE    DECIMAL(15,2) NOT NULL
, O_ORDERDATE     DATE NOT NULL
, O_ORDERPRIORITY CHAR(15)      NOT NULL
, O_CLERK         CHAR(15)      NOT NULL
, O_SHIPPRIORITY  BIGINT        NOT NULL
, O_COMMENT       VARCHAR(79)   NOT NULL
) with (orientation = column, COMPRESSION = MIDDLE) distribute by hash(O_ORDERKEY);

The query statements are as follows:

explain verbose select
count(*) as numwait 
from
lineitem l1,
orders 
where
o_orderkey = l1.l_orderkey
and o_orderstatus = 'F'
and l1.l_receiptdate > l1.l_commitdate
and not exists (
select
*
from
lineitem l3
where
l3.l_orderkey = l1.l_orderkey
and l3.l_suppkey <> l1.l_suppkey
and l3.l_receiptdate > l3.l_commitdate
)
order by
numwait desc;

If such an issue occurs, you can use either of the following methods to check whether statistics in the tables or columns involved in the query have been collected using ANALYZE.

  1. Execute EXPLAIN VERBOSE to analyze the execution plan and check the warning information:
    WARNING:Statistics in some tables or columns(public.lineitem.l_receiptdate, public.lineitem.l_commitdate, public.lineitem.l_orderkey, public.lineitem.l_suppkey, public.orders.o_orderstatus, public.orders.o_orderkey) are not collected.
    HINT:Do analyze for them in order to generate optimized plan.
  2. Check whether the following information exists in the log file in the pg_log directory. If it does, the poor query performance was caused by the lack of statistics in some tables or columns.
    2017-06-14 17:28:30.336 CST 140644024579856 20971684 [BACKEND] LOG:Statistics in some tables or columns(public.lineitem.l_receiptdate, public.lineitem.l_commitdate, public.lineitem.l_orderkey, public.lineitem.l_suppkey, public.orders.o_orderstatus, public.orders.o_orderkey) are not collected.
    2017-06-14 17:28:30.336 CST 140644024579856 20971684 [BACKEND] HINT:Do analyze for them in order to generate optimized plan.

Using either of the preceding methods, you can identify the tables or columns whose statistics have not been collected using ANALYZE. Execute ANALYZE on the tables and columns listed in the warnings or logs to resolve the problem, as shown below.
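
For example, based on the warning above, the following statements collect the missing statistics (a minimal sketch; a plain ANALYZE on each table also works and collects statistics for all of its columns):

-- Collect statistics for the columns flagged in the warning.
ANALYZE lineitem (l_receiptdate, l_commitdate, l_orderkey, l_suppkey);
ANALYZE orders (o_orderstatus, o_orderkey);

-- Or simply collect statistics for every column of both tables.
ANALYZE lineitem;
ANALYZE orders;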

Example 2: Setting cost_param to Optimize Query Performance

For details, see Case: Configuring cost_param for Better Query Performance.
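
As a brief illustration only (the exact bit meanings of cost_param and the recommended value are described in the referenced case), cost_param is a session-level parameter that is adjusted with SET and can be restored afterwards:

-- Assumption for illustration: enable the improved selectivity estimation mechanism
-- described in the referenced case.
SET cost_param = 1;

-- Re-run the affected query and compare the plan using EXPLAIN VERBOSE.

-- Restore the default when finished.
RESET cost_param;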

Example 3: Inaccurate Optimization When Intermediate Results Exist in a Multi-Table JOIN Query

Symptom: Query the people who checked in at an Internet cafe within 15 minutes before or after the check-in of a specified person.

SELECT
C.WBM,
C.DZQH,
C.DZ,
B.ZJHM,
B.SWKSSJ,
B.XWSJ
FROM
b_zyk_wbswxx A,
b_zyk_wbswxx B,
b_zyk_wbcs C
WHERE
A.ZJHM = '522522******3824'
AND A.WBDM = B.WBDM
AND A.WBDM = C.WBDM
AND abs(to_date(A.SWKSSJ,'yyyymmddHH24MISS') - to_date(B.SWKSSJ,'yyyymmddHH24MISS')) < INTERVAL '15 MINUTES'
ORDER BY
B.SWKSSJ,
B.ZJHM
limit 10 offset 0
;

Figure 1 shows the execution plan. This query takes about 12s.

Figure 1 Using an unlogged table (1)

Optimization analysis:

  1. In the execution plan, index scans are used for node scanning. The Join Filter evaluation in the outer NEST LOOP IN node consumes most of the query time, because the filter performs string-based date addition and subtraction and non-equality comparisons.
  2. Use an unlogged table to record the Internet access time of the specified person. The start time and end time are computed during data insertion, which eliminates the repeated addition and subtraction in the subsequent query.
    -- Create a temporary unlogged table.
    CREATE UNLOGGED TABLE temp_tsw
    (
    ZJHM         NVARCHAR2(18),
    WBDM         NVARCHAR2(14),
    SWKSSJ_START NVARCHAR2(14),
    SWKSSJ_END   NVARCHAR2(14),
    WBM          NVARCHAR2(70),
    DZQH         NVARCHAR2(6),
    DZ           NVARCHAR2(70),
    IPDZ         NVARCHAR2(39)
    )
    ;
    -- Insert the Internet access records of the specified person, and precompute the start time and end time.
    INSERT INTO
    temp_tsw
    SELECT
    A.ZJHM,
    A.WBDM,
    to_char((to_date(A.SWKSSJ,'yyyymmddHH24MISS') - INTERVAL '15 MINUTES'),'yyyymmddHH24MISS'),
    to_char((to_date(A.SWKSSJ,'yyyymmddHH24MISS') + INTERVAL '15 MINUTES'),'yyyymmddHH24MISS'),
    B.WBM,B.DZQH,B.DZ,B.IPDZ
    FROM
    b_zyk_wbswxx A,
    b_zyk_wbcs B
    WHERE
    A.ZJHM='522522******3824' AND A.WBDM = B.WBDM
    ;
    
    -- Query the people who checked in at an Internet cafe within 15 minutes before or after the check-in of the specified person. The check-in time strings are cast to int8 for comparison.
    SELECT
    A.WBM,
    A.DZQH,
    A.DZ,
    A.IPDZ,
    B.ZJHM,
    B.XM,
    to_date(B.SWKSSJ,'yyyymmddHH24MISS') as SWKSSJ,
    to_date(B.XWSJ,'yyyymmddHH24MISS') as XWSJ,
    B.SWZDH
    FROM temp_tsw A,
    b_zyk_wbswxx B
    WHERE
    A.ZJHM <> B.ZJHM
    AND A.WBDM = B.WBDM
    AND (B.SWKSSJ)::int8 > (A.swkssj_start)::int8
    AND (B.SWKSSJ)::int8 < (A.swkssj_end)::int8
    order by
    B.SWKSSJ,
    B.ZJHM
    limit 10 offset 0
    ;
    

    The query takes about 7s. Figure 2 shows the execution plan.

    Figure 2 Using an unlogged table (2)
  3. In the preceding plan, a hash join is executed, and a hash table is built on the large table b_zyk_wbswxx. Because the table contains a large amount of data, building the hash table takes a long time.

    temp_tsw contains only hundreds of records, and it joins b_zyk_wbswxx on an equality condition over wbdm (the Internet cafe code). Therefore, if the hash join is changed to a nested loop join, index scans can be used for node scanning and performance improves.

  4. Execute the following statement to change the hash join to a nested loop join.
    SET enable_hashjoin = off;
    

    Figure 3 shows the execution plan. The query takes about 3s.

    Figure 3 Using an unlogged table (3)
  5. Save the query result set in an unlogged table for paging display.

    If paging display is required on the upper-layer application page, the offset value is changed to fetch the result set of the target page. As a result, the preceding query statement is executed again after every page turn, which causes long response latency.

    To resolve this problem, you are advised to save the result set in an unlogged table.

    -- Create an unlogged table to save the result set.
    CREATE UNLOGGED TABLE temp_result
    (
    WBM      NVARCHAR2(70),
    DZQH     NVARCHAR2(6),
    DZ       NVARCHAR2(70),
    IPDZ     NVARCHAR2(39),
    ZJHM     NVARCHAR2(18),
    XM       NVARCHAR2(30),
    SWKSSJ   date,
    XWSJ     date,
    SWZDH    NVARCHAR2(32)
    );
    
    -- Insert the result set into the unlogged table. The insertion takes about 3s.
    INSERT INTO
    temp_result
    SELECT
    A.WBM,
    A.DZQH,
    A.DZ,
    A.IPDZ,
    B.ZJHM,
    B.XM,
    to_date(B.SWKSSJ,'yyyymmddHH24MISS') as SWKSSJ,
    to_date(B.XWSJ,'yyyymmddHH24MISS') as XWSJ,
    B.SWZDH
    FROM temp_tsw A,
    b_zyk_wbswxx B
    WHERE
    A.ZJHM <> B.ZJHM
    AND A.WBDM = B.WBDM
    AND (B.SWKSSJ)::int8 > (A.swkssj_start)::int8
    AND (B.SWKSSJ)::int8 < (A.swkssj_end)::int8
    ;
    
    -- Perform a paging query on the result set. The paging query takes about 10 ms.
    SELECT
    *
    FROM
    temp_result
    ORDER BY
    SWKSSJ,
    ZJHM
    LIMIT 10 OFFSET 0;
    
    CAUTION:

    Collecting global statistics using ANALYZE improves query performance.

    If a performance problem occurs, you can use plan hints to adjust the query plan back to the previous one, as sketched below. For details, see Hint-based Tuning.
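
    For example, a minimal sketch of such a hint follows (the hint syntax and the column list are assumptions for illustration; see Hint-based Tuning for the exact form supported by your cluster version). It forces a nested loop join between the two tables used earlier:

    -- Assumed GaussDB(DWS) hint syntax: force a nested loop join between A and B.
    SELECT /*+ nestloop(A B) */
    A.WBM, B.ZJHM, B.SWKSSJ
    FROM temp_tsw A,
    b_zyk_wbswxx B
    WHERE A.WBDM = B.WBDM
    AND A.ZJHM <> B.ZJHM
    LIMIT 10;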
