ANALYZE | ANALYSE

Updated on 2024-12-18 GMT+08:00

Function

ANALYZE collects statistics about table contents in databases and stores the results in the PG_STATISTIC system catalog. The execution plan generator uses these statistics to determine the most efficient execution plan.
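
For example, after ANALYZE runs, the collected results can be inspected directly. The sketch below assumes the catalogs expose the usual PostgreSQL-style columns (reltuples and relpages in PG_CLASS; starelid, stanullfrac, and stadistinct in PG_STATISTIC) and uses the customer_info table from the Examples section.

    ANALYZE customer_info;

    -- Table-level estimates (row and page counts) are kept in PG_CLASS.
    SELECT relname, reltuples, relpages
      FROM pg_class
     WHERE relname = 'customer_info';

    -- Per-column statistics are kept in PG_STATISTIC.
    SELECT staattnum, stanullfrac, stadistinct
      FROM pg_statistic
     WHERE starelid = 'customer_info'::regclass;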

  • If no parameters are specified, ANALYZE analyzes each table and partitioned table in the database. You can also specify table_name, column, and partition_name to limit the analysis to a specified table, column, or partitioned table.
  • Users who can execute ANALYZE on a specific table include the owner of the table, the owner of the database where the table resides, users with the ANALYZE permission on the table, users granted the gs_role_analyze_any role, and users with the SYSADMIN attribute.
  • To collect statistics using percentage sampling, you must have the ANALYZE and SELECT permissions.
  • ANALYZE VERIFY (or ANALYSE VERIFY) is used to check whether the data files of common tables (row-store and column-store tables) in a database are damaged. Currently, this function does not support HDFS tables.
  • If the enable_analyze_partition parameter is enabled and the table-level incremental_analyze parameter is set for a partitioned table, the ANALYZE statement is executed on partitions lacking statistics or with data changes. The statistics of the partition main table are then generated by combining the partition statistics.
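
The privileges listed above can be granted in the usual way. The sketch below is only illustrative: the user name analyst_u is hypothetical, and the exact GRANT forms available may depend on the cluster version.

    -- Grant the table-level ANALYZE permission on a single table (user name is made up).
    GRANT ANALYZE ON TABLE customer_info TO analyst_u;

    -- Or grant the preset role that allows ANALYZE on any table.
    GRANT gs_role_analyze_any TO analyst_u;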

Precautions

  • Only cluster versions 8.1.1 and later support using anonymous blocks, transaction blocks, functions, or stored procedures to perform the ANALYZE operation on an unsharded table.
  • However, when an entire database is analyzed, the ANALYZE operations on individual tables run in separate transactions. Therefore, the current version does not support executing ANALYZE for the entire database in anonymous blocks, transaction blocks, functions, or stored procedures.
  • Updates to the statistics-related columns in PG_CLASS cannot be rolled back.
  • Most ANALYZE VERIFY operations are used to detect abnormal scenarios and require a release version. Remote read is not triggered in the ANALYZE VERIFY scenario, so the remote read parameter does not take effect. If the system detects that a page is damaged due to an error in a key system table, it reports an error and stops the detection.
WARNING:
  • If more than 10% of a table's records are added or modified at once, explicitly execute the ANALYZE operation.
  • For more information about development and design specifications, see Development and Design Proposal.
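
A simple illustration of the 10% guideline above: a bulk load is typically followed by an explicit ANALYZE. The staging table customer_info_stage is made up for this sketch.

    -- Bulk-load well over 10% of the rows, then refresh statistics explicitly.
    INSERT INTO customer_info SELECT * FROM customer_info_stage;
    ANALYZE customer_info;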

Syntax

  • Collect statistics information about a table.
    { ANALYZE | ANALYSE } [ { VERBOSE | LIGHT | FORCE | PREDICATE } ]
        [ table_name [ ( column_name [, ...] ) ] ];
    
  • Collect statistics about a partition.
    { ANALYZE | ANALYSE } [ { VERBOSE | LIGHT | FORCE } ]
        [ table_name [ ( column_name [, ...] ) ] ]
        PARTITION ( partition_name ) ;
    
    NOTE:
    • Ordinary partitioned tables accept the syntax for collecting statistics on a specific partition, but the statistics are not actually collected at the partition level. Running ANALYZE on a specified partition produces a warning message.
    • Temporary sampling tables cannot be used to collect partition statistics.
    • Multi-column statistics and expression statistics on partitions are not supported.
  • Collect statistics about a foreign table.
    { ANALYZE | ANALYSE } [ VERBOSE ]
        { foreign_table_name | FOREIGN TABLES };
    
  • Collect statistics about multiple columns.
    {ANALYZE | ANALYSE} [ VERBOSE ]
        table_name (( column_1_name, column_2_name [, ...] ));
    
    NOTE:
    • To sample data in percentage, set default_statistics_target to a negative number.
    • The statistics about a maximum of 32 columns can be collected at a time.
    • You are not allowed to collect multi-column statistics on system catalogs or HDFS foreign tables.
  • Check the data files in the current database.
    {ANALYZE | ANALYSE} VERIFY {FAST|COMPLETE};
    
    NOTE:
    • Checking the entire database is supported. Because many tables are involved, you are advised to redirect the output to a file: gsql -d database -p port -f "verify.sql" > verify_warning.txt 2>&1.
    • HDFS tables (internal and foreign tables), temporary tables, and unlogged tables are not supported.
    • Only a NOTICE message is displayed for external tables. The check of internal tables is included in the check of the external tables that depend on them, and the results are not displayed separately.
    • This command detects tolerable errors. In a debug version, assertions may cause a core dump and the command may fail to execute. Therefore, you are advised to run it in a release version.
    • If a key system table is damaged during a full database operation, an error is reported and the operation stops.
  • Check the data files of tables and indexes.
    {ANALYZE | ANALYSE} VERIFY {FAST|COMPLETE} table_name|index_name [CASCADE];
    
    NOTE:
    • You can check common tables and index tables, but cannot perform the CASCADE operation on index tables. CASCADE checks all index tables of a primary table; when an index table is checked separately, CASCADE is not required.
    • HDFS tables (internal and foreign tables), temporary tables, and unlogged tables are not supported.
    • When the primary table is checked, the internal tables of the primary table, such as the toast table and cudesc table, are also checked.
    • When the system displays a message indicating that the index table is damaged, you are advised to run the reindex command to recreate the index.
  • Check the data file of the table partition.
    {ANALYZE | ANALYSE} VERIFY {FAST|COMPLETE} table_name PARTITION {(partition_name)}[CASCADE];

    NOTE:
    • You can check a single partition of a table, but cannot perform the CASCADE operation on index tables.
    • HDFS tables (internal and foreign tables), temporary tables, and unlogged tables are not supported.
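
The following sketch strings the forms above together on the sample table customer_info. The partition name p1, the column names c_customer_sk and c_first_name, and the 2% sampling rate are assumptions made purely for illustration.

    -- Statistics for one partition (the syntax is accepted; see the partition NOTE above).
    ANALYZE customer_info PARTITION (p1);

    -- Multi-column statistics (double parentheses, at most 32 columns).
    ANALYZE customer_info ((c_customer_sk, c_first_name));

    -- Percentage sampling: a negative default_statistics_target is interpreted as a percentage
    -- (assumption: -2 samples about 2% of the rows).
    SET default_statistics_target = -2;
    ANALYZE customer_info;

    -- Verify the data files of one table and all of its indexes.
    ANALYZE VERIFY FAST customer_info CASCADE;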

Parameter Description

  • VERBOSE

    Enables the display of progress messages.

    NOTE:

    If this parameter is specified, progress information is displayed by ANALYZE to indicate the table that is being processed, and statistics about the table are printed.

  • LIGHT

    In lightweight mode, the statistics collected for a table are saved to the memory instead of being written to the system catalog. A level-1 lock is added to the table during ANALYZE.

  • FORCE

    In FORCE mode, table statistics can be forcibly refreshed when they are locked.

  • PREDICATE

    In PREDICATE mode, statistics are calculated only for the currently identified predicate columns. Predicate information is collected during query parsing, and dynamic sampling supports predicate column sampling. For details, see the GUC parameter analyze_predicate_column_threshold. This is supported only by clusters of version 9.1.0.100 or later.

  • table_name

    Specifies the name (possibly schema-qualified) of a specific table to analyze. If omitted, all regular tables (but not foreign tables) in the current database are analyzed.

    Currently, you can use ANALYZE to collect statistics about row-store tables, column-store tables, HDFS tables, ORC- or CARBONDATA-formatted OBS foreign tables, and foreign tables for collaborative analysis.

    Value range: an existing table name

  • column_name, column_1_name, column_2_name

    Specifies the name of a specific column to analyze. All columns are analyzed by default.

    Value range: an existing column name

  • partition_name

    Specifies the name of the partition to analyze; the table must be a partitioned table. You can specify partition_name following the keyword PARTITION to collect statistics for that partition. Currently, partitioned tables accept this syntax but do not actually execute it.

    Value range: a partition name in a table

  • foreign_table_name

    Specifies the name (possibly schema-qualified) of a specific table to analyze. The data of the table is stored in HDFS.

    Value range: an existing table name

  • FOREIGN TABLES

    Analyzes all HDFS foreign tables accessible to the current user.

  • index_name

    Name of the index table to be analyzed. The name may contain the schema name.

    Value range: an existing table name

  • FAST|COMPLETE

    For row-store tables, FAST mode verifies the CRC and page headers; if the verification fails, an alarm is reported. COMPLETE mode parses and verifies the pointers and tuples of row-store tables. For column-store tables, FAST mode verifies the CRC and magic number; if the verification fails, an alarm is reported. COMPLETE mode parses and verifies the CUs of column-store tables.

  • CASCADE

    In CASCADE mode, all indexes of the current table are checked.
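
For reference, the option keywords above plug into the first syntax form. A brief sketch on the sample table customer_info (VERBOSE is shown in the Examples section below; PREDICATE requires 9.1.0.100 or later):

    ANALYZE LIGHT customer_info;      -- keep the collected statistics in memory only
    ANALYZE FORCE customer_info;      -- refresh statistics even if the table is locked
    ANALYZE PREDICATE customer_info;  -- collect statistics only for identified predicate columns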

Examples

  • Run ANALYZE to update statistics of the customer_info table:
    ANALYZE customer_info;
    
  • Run ANALYZE VERBOSE to update statistics of the customer_info table and display table information:
    ANALYZE VERBOSE customer_info;
    INFO:  analyzing "cstore.pg_delta_3394584009"(cn_5002 pid=53078)
    INFO:  analyzing "public.customer_info"(cn_5002 pid=53078)
    INFO:  analyzing "public.customer_info" inheritance tree(cn_5002 pid=53078)
    ANALYZE
    
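  • An illustrative sketch of the ANALYZE VERIFY forms described above (the partition name p1 is made up):

    -- Check the data files of every supported table in the current database.
    ANALYZE VERIFY FAST;

    -- Check one table and all of its indexes in COMPLETE mode.
    ANALYZE VERIFY COMPLETE customer_info CASCADE;

    -- Check a single partition of the table.
    ANALYZE VERIFY FAST customer_info PARTITION (p1);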
