Optimizing SQL Self-Diagnosis

Updated on 2024-10-14 GMT+08:00

Performance issues may occur when you query data or run an INSERT, DELETE, UPDATE, or CREATE TABLE AS statement. In this case, you can query the warning column in the GS_WLM_SESSION_STATISTICS, GS_WLM_SESSION_HISTORY, and GS_WLM_SESSION_QUERY_INFO_ALL views to obtain suggestions for performance optimization, as shown in the example query below.
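
For example, the following query checks the diagnosis information for statements that are currently running. This is a minimal sketch: the warning column is described in this section, while the query column (the statement text) is assumed to be available in the view.

    -- Diagnosis information for running statements; finished statements are
    -- recorded in gs_wlm_session_history and gs_wlm_session_query_info_all.
    SELECT query, warning
    FROM gs_wlm_session_statistics
    WHERE warning IS NOT NULL
      AND warning <> '';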

The alarms that can trigger SQL self-diagnosis depend on the setting of resource_track_level. If resource_track_level is set to query, only alarms about failures in collecting column statistics and in pushing down SQL statements trigger the diagnosis. If resource_track_level is set to operator, all alarms trigger the diagnosis.

Whether a SQL plan is diagnosed depends on the setting of resource_track_cost: a plan is diagnosed only if its execution cost is greater than the value of resource_track_cost. You can use the EXPLAIN keyword to check the estimated execution cost of a plan.

The SQL self-diagnosis function is controlled by the enable_analyze_check parameter. Ensure that this parameter is enabled before using the function.

If a large number of statements are executed, some diagnosis data may fail to be collected due to memory control. In this case, you can increase the value of instr_unique_sql_count.
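
The following sketch shows how to check these parameters and a plan's estimated execution cost in one session. The parameter names come from this section; schema_test.t1 is reused from the example alarms below as a placeholder table.

    -- Parameters that control SQL self-diagnosis.
    SHOW resource_track_level;    -- query: statistics/pushdown alarms only; operator: all alarms
    SHOW resource_track_cost;     -- plans cheaper than this value are not diagnosed
    SHOW enable_analyze_check;    -- must be enabled for self-diagnosis
    SHOW instr_unique_sql_count;  -- increase if diagnosis data is dropped under memory control

    -- Compare the estimated total cost in the plan with resource_track_cost.
    EXPLAIN SELECT * FROM schema_test.t1;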

Alarms

Currently, the following performance alarms will be reported:

  • Some column statistics are not collected.

An alarm will be reported if some column statistics are not collected. (This is a lab feature. Contact Huawei technical support before using it.) For details about the optimization, see Updating Statistics and Optimizing Statistics, and the ANALYZE sketch after the example alarms below.

An alarm will also be reported if column statistics are not collected for queries on OBS foreign tables. (This is a lab feature. Contact Huawei technical support before using it.) Running ANALYZE on OBS foreign tables is relatively expensive and may not be worthwhile for a simple query, so run it only as needed.

Example alarms:

The statistics about a table are not collected.

Statistic Not Collect:
    schema_test.t1

The statistics about a single column are not collected.

Statistic Not Collect:
    schema_test.t2(c1,c2)

The statistics about multiple columns are not collected.

Statistic Not Collect:
    schema_test.t3((c1,c2))

The statistics about a single column and multiple columns are not collected.

Statistic Not Collect:
    schema_test.t4(c1,c2)
    schema_test.t4((c1,c2))
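
As referenced above, the missing statistics in these example alarms can be collected with ANALYZE. A minimal sketch (the multi-column form in the last statement is an assumption based on the alarm text and the lab-feature note above):

    ANALYZE schema_test.t1;               -- statistics for the whole table
    ANALYZE schema_test.t2 (c1, c2);      -- statistics for individual columns
    ANALYZE schema_test.t3 ((c1, c2));    -- multi-column statistics
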
  • SQL statements are not pushed down.
    The cause is described in the alarm. For details about the optimization, see Optimizing Statement Pushdown and the plan check sketch after the example alarms below.
    • If the pushdown failure is caused by functions, the function names are displayed in the alarm.
    • If the pushdown failure is caused by syntax, the alarm indicates that the syntax does not support pushdown. For example, statements that contain With Recursive, Distinct On, or Row expressions, or whose return values are of the record type, cannot be pushed down.

Example alarms:

SQL is not plan-shipping, reason : "With Recursive" can not be shipped"
SQL is not plan-shipping, reason : "Function now() can not be shipped"
SQL is not plan-shipping, reason : "Function string_agg() can not be shipped"
  • In a hash join, the larger table is used as the inner table.

An alarm will be reported if the number of rows in the inner table is at least 10 times that in the outer table, more than 100,000 inner-table rows are processed on each DN on average, and the join has spilled to disk. You can view the query_plan column in GS_WLM_SESSION_HISTORY to check whether a hash join is used, as shown in the query sketch after the example alarm below. For details about the optimization, see Hint-based Tuning.

Example alarms:

PlanNode[7] Large Table is INNER in HashJoin "Vector Hash Aggregate"
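
A sketch for locating such statements after they finish, assuming GS_WLM_SESSION_HISTORY also exposes the statement text in a query column; the filter reuses the wording of the example alarm:

    SELECT query, warning, query_plan
    FROM gs_wlm_session_history
    WHERE warning LIKE '%Large Table is INNER in HashJoin%';
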
  • nestloop is used in a large-table equivalent join.

An alarm will be reported if nestloop is used in an equivalent join where more than 100,000 rows of the larger table are processed on each DN on average. You can view the query_plan column of GS_WLM_SESSION_HISTORY to check whether nestloop is used, in the same way as for the hash join alarm above. For details about the optimization, see Hint-based Tuning.

Example alarms:

PlanNode[5] Large Table with Equal-Condition use Nestloop"Nested Loop"
  • A large table is broadcasted.

An alarm will be reported if more than 100,000 rows are broadcast on each DN on average. For details about the optimization, see Hint-based Tuning.

Example alarms:

PlanNode[5] Large Table in Broadcast "Streaming(type: BROADCAST dop: 1/2)"
  • Data skew occurs.

An alarm will be reported if the number of rows processed on any DN exceeds 100,000, and the number of rows processed on one DN is at least 10 times that processed on another DN. For details about the optimization, see Case: Selecting an Appropriate Distribution Key and Optimizing Data Skew, and the distribution key sketch after the example alarm below.

Example alarms:

PlanNode[6] DataSkew:"Seq Scan", min_dn_tuples:0, max_dn_tuples:524288
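
A minimal sketch of redistributing a skewed table on a more evenly distributed column. The table, columns, and types are placeholders; see Case: Selecting an Appropriate Distribution Key for the full procedure:

    -- Recreate the table with a better distribution key and reload the data.
    CREATE TABLE schema_test.t1_new (c1 int, c2 int)
        DISTRIBUTE BY HASH (c1);
    INSERT INTO schema_test.t1_new SELECT * FROM schema_test.t1;
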
  • Estimation is inaccurate.

An alarm will be reported if the actual or estimated maximum number of rows processed on a DN exceeds 10,000, and the larger of the two values is at least 10 times the smaller one. For details about the optimization, see Hint-based Tuning and the rows hint sketch after the example alarm below.

Example alarms:

PlanNode[5] Inaccurate Estimation-Rows: "Hash Join" A-Rows:0, E-Rows:52488
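
If the estimate stays inaccurate after statistics are refreshed, the rows hint described in Hint-based Tuning can be used to correct the optimizer's row count. The sketch below assumes the rows hint syntax from that section; the tables and the value 100000 are placeholders:

    -- Assumed rows hint: tell the optimizer to expect about 100000 rows from t2.
    SELECT /*+ rows(t2 #100000) */ t1.c1, t2.c2
    FROM schema_test.t1 t1
    JOIN schema_test.t2 t2 ON t1.c1 = t2.c1;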

Restrictions

  1. An alarm contains a maximum of 2048 characters. If an alarm would exceed this limit (for example, when it lists a large number of long table and column names whose statistics are not collected), the following warning is reported instead of the alarm:
    WARNING, "Planner issue report is truncated, the rest of planner issues will be skipped"
  2. If a query statement contains the Limit operator, alarms for operators below Limit in the plan will not be reported.
  3. For alarms about data skew and inaccurate estimation, only alarms on the lower-layer nodes of a plan tree are reported, because the same alarms on upper-layer nodes may be caused by problems on the lower-layer nodes. For example, if data skew occurs on a Scan node, it may also occur in upper-layer operators (for example, Hashagg).
