Other Optimizer Options
cost_model_version
Parameter description: Specifies the version of the optimizer cost model. It can be regarded as a protection parameter that lets you disable the latest optimizer cost model and keep plans consistent with those of an earlier version. This parameter can be set at the PDB level.
Parameter type: integer.
Unit: none
Value range: 0, 1, 2, 3, 4, and 5
- 0: The latest cost estimation model is used. In the current version, this is equivalent to 5.
- 1: The original cost estimation model is used.
- 2: In addition to 1, enhanced estimation is used for COALESCE expressions, hash join costs, and semi-join/anti-join costs.
- 3: In addition to 2, the boundary correction estimator is used to estimate the NDV, and the indexscan hint can also be applied to indexonlyscan.
- 4: In addition to 3, partition-level statistics are used for cost estimation.
- 5: In addition to 4, the cost estimation of filter conditions on outer joins is enhanced, making cost-based query rewriting more accurate. The calculation of filter conditions on the foreign tables of outer joins during selectivity estimation is optimized; this behavior can be controlled by the enable_poisson_outer_optimization parameter.
Default value: 0. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: When upgrading the database, you are advised to set this parameter the same as that of the source version. When installing a new environment, you are advised to set this parameter to the default value.
Risks and impacts of improper settings: If this parameter is modified, many SQL plans may be changed. Therefore, exercise caution when changing this parameter.
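A minimal session-level sketch of checking and pinning this parameter (the value 2 is only an illustration of keeping an earlier model; follow the setting suggestion above in production):
SHOW cost_model_version;      -- Check the value currently in effect.
SET cost_model_version = 2;   -- Example only: pin the current session to an earlier cost model version.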
enable_csqual_pushdown
Parameter description: Specifies whether to push down filter criteria for a rough check during query. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on: A rough check is performed with filter criteria pushed down during query.
- off: A rough check is performed without filter criteria pushed down during query.
Default value: on. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a SUSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If a large amount of data needs to be filtered during query, disabling filter pushdown may deteriorate performance.
explain_dna_file
Parameter description: Specifies the target file to which information is exported when explain_perf_mode is set to run. This parameter can be set at the PDB level.
Parameter type: string.
Unit: none
Value range: an absolute path plus a file name with the .csv extension.
Default value: "". In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. Set this parameter based on the service scenario if necessary.
Risks and impacts of improper settings: Improper settings may cause file write overhead.
explain_perf_mode
Parameter description: Specifies the display format of the explain command. This parameter can be set at the PDB level.
Parameter type: enumerated type
Unit: none
Value range: normal, pretty, summary, and run
- normal indicates that the default printing format is used.
- pretty indicates the improved format provided by GaussDB. The new format contains a plan node ID, which makes performance analysis more direct and effective.
- summary indicates that, in addition to the information printed by pretty, an analysis of that information is printed.
- run indicates that the system exports the information printed by summary to a CSV file for further analysis.

- The displayed content varies greatly depending on the display format of explain. Examples of the normal and pretty formats are as follows:
Example of the normal format:
                                                                          QUERY PLAN
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
 Sort  (cost=21.23..21.23 rows=1 width=306)
   Sort Key: supplier.s_suppkey
   CTE revenue
     ->  HashAggregate  (cost=12.88..12.88 rows=1 width=76)
           Group By Key: lineitem.l_suppkey
           ->  Partition Iterator  (cost=0.00..12.87 rows=1 width=44)
                 Iterations: 7
                 ->  Partitioned Seq Scan on lineitem  (cost=0.00..12.87 rows=1 width=44)
                       Filter: ((l_shipdate >= '1996-01-01 00:00:00'::timestamp(0) without time zone) AND (l_shipdate < '1996-04-01 00:00:00'::timestamp without time zone))
                       Selected Partitions: 1..7
   InitPlan 2 (returns $3)
     ->  Aggregate  (cost=0.02..0.03 rows=1 width=64)
           ->  CTE Scan on revenue  (cost=0.00..0.02 rows=1 width=32)
   ->  Nested Loop  (cost=0.00..8.30 rows=1 width=306)
         ->  CTE Scan on revenue  (cost=0.00..0.02 rows=1 width=40)
               Filter: (total_revenue = $3)
         ->  Partition Iterator  (cost=0.00..8.27 rows=1 width=274)
               Iterations: 7
               ->  Partitioned Index Scan using supplier_s_suppkey_idx on supplier  (cost=0.00..8.27 rows=1 width=274)
                     Index Cond: (s_suppkey = revenue.supplier_no)
                     Selected Partitions: 1..7
(21 rows)
Example of the pretty format:
 id |                                  operation                                   | E-rows | E-width |    E-costs
----+------------------------------------------------------------------------------+--------+---------+----------------
  1 | ->  Sort                                                                     |      1 |     306 | 21.230..21.235
  2 |   ->  Nested Loop (3,9)                                                      |      1 |     306 | 0.000..8.303
  3 |     ->  CTE Scan on revenue                                                  |      1 |      40 | 0.000..0.022
  4 |       ->  HashAggregate [3, CTE revenue]                                     |      1 |      76 | 12.875..12.885
  5 |         ->  Partition Iterator                                               |      1 |      44 | 0.000..12.865
  6 |           ->  Partitioned Seq Scan on lineitem                               |      1 |      44 | 0.000..12.865
  7 |       ->  Aggregate [4, InitPlan 2 (returns $3)]                             |      1 |      64 | 0.022..0.033
  8 |         ->  CTE Scan on revenue                                              |      1 |      32 | 0.000..0.020
  9 |     ->  Partition Iterator                                                   |      1 |     274 | 0.000..8.270
 10 |       ->  Partitioned Index Scan using supplier_s_suppkey_idx on supplier    |      1 |     274 | 0.000..8.270
(10 rows)

                                                                 Predicate Information (identified by plan id)
---------------------------------------------------------------------------------------------------------------------------------------------------------------
   5 --Partition Iterator
         Iterations: 7
   6 --Partitioned Seq Scan on lineitem
         Filter: ((l_shipdate >= '1996-01-01 00:00:00'::timestamp(0) without time zone) AND (l_shipdate < '1996-04-01 00:00:00'::timestamp without time zone))
         Selected Partitions: 1..7
   3 --CTE Scan on revenue
         Filter: (total_revenue = $3)
   9 --Partition Iterator
         Iterations: 7
  10 --Partitioned Index Scan using supplier_s_suppkey_idx on supplier
         Index Cond: (s_suppkey = revenue.supplier_no)
         Selected Partitions: 1..7
(12 rows)
Note: The plan blocks in the preceding two formats are different displays of the same plan. In the pretty format, the parts in bold are the CTE and InitPlan plan blocks, which may be inserted in the middle of a join block. When reading a join block, skip the CTE and InitPlan blocks to find the inner table of the corresponding join.
- The pretty format displays only one plan. When CREATE RULE is used to create a rule, multiple plans may be generated for the executed SQL statement. Therefore, the normal format is recommended in this case. The following is an example:
gaussdb=# CREATE TABLE another_table (id int, name text);
CREATE TABLE
gaussdb=# CREATE TABLE my_table (id int, name text);
CREATE TABLE
gaussdb=# CREATE RULE my_rule AS ON INSERT TO my_table
gaussdb-# WHERE NEW.id > 5
gaussdb-# DO INSTEAD (INSERT INTO another_table VALUES (NEW.id, 'Some Data'););
CREATE RULE
gaussdb=# EXPLAIN INSERT INTO my_table VALUES (5, 'Test Name');
                         QUERY PLAN
-----------------------------------------------------------
 Insert on my_table  (cost=0.00..0.01 rows=1 width=0)
   ->  Result  (cost=0.00..0.01 rows=1 width=0)

 Insert on another_table  (cost=0.00..0.01 rows=1 width=0)
   ->  Result  (cost=0.00..0.01 rows=1 width=0)
         One-Time Filter: false
(6 rows)

gaussdb=# SET explain_perf_mode=pretty;
SET
gaussdb=# EXPLAIN INSERT INTO my_table VALUES (5, 'Test Name');
 id |          operation          | E-rows | E-width |   E-costs
----+-----------------------------+--------+---------+--------------
  1 | ->  Insert on another_table |      1 |       0 | 0.000..0.010
  2 |   ->  Result                |      1 |       0 | 0.000..0.010
(2 rows)

 Predicate Information (identified by plan id)
-----------------------------------------------
   2 --Result
         One-Time Filter: false
(2 rows)

gaussdb=# DROP TABLE my_table;
DROP TABLE
gaussdb=# DROP TABLE another_table;
DROP TABLE
Default value: pretty. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. You can set another value if necessary.
Risks and impacts of improper settings: Different formats affect the amount of information provided by EXPLAIN.
analysis_options
Parameter description: Specifies which fault locating functions are enabled, such as data verification and performance statistics. For details, see the description of the value range. This parameter can be set at the PDB level.
Parameter type: string.
Unit: none
Value range:
Each SET operation changes the current value incrementally, as shown in the following example:
gaussdb=# show analysis_options;
                      analysis_options
------------------------------------------------------------
 ALL,on(),off(LLVM_COMPILE,HASH_CONFLICT,STREAM_DATA_CHECK)
(1 row)
gaussdb=# SET analysis_options = 'on(LLVM_COMPILE)';
SET
gaussdb=# show analysis_options;
                     analysis_options
-----------------------------------------------------------
 ALL,on(LLVM_COMPILE),off(HASH_CONFLICT,STREAM_DATA_CHECK)
(1 row)
gaussdb=# SET analysis_options = 'on(HASH_CONFLICT)';
SET
gaussdb=# show analysis_options;
                     analysis_options
-----------------------------------------------------------
 ALL,on(LLVM_COMPILE,HASH_CONFLICT),off(STREAM_DATA_CHECK)
(1 row)
gaussdb=# SET analysis_options = 'off(ALL)';
SET
gaussdb=# show analysis_options;
                      analysis_options
------------------------------------------------------------
 ALL,on(),off(LLVM_COMPILE,HASH_CONFLICT,STREAM_DATA_CHECK)
(1 row)
- LLVM_COMPILE: The codegen compilation time of each thread is displayed on the explain performance page.
- HASH_CONFLICT: Statistics about hash tables, including the hash table size, hash chain length, and hash conflicts, are recorded in the logs in the gs_log directory of the database node process.
- STREAM_DATA_CHECK: A CRC check is performed on data before and after network data transmission.
Default value: off(ALL), which indicates that no location function is enabled. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If manual setting is required, check whether the expected result meets the requirements.
cost_param
Parameter description: Specifies the estimation methods used in specific customer scenarios so that the cost model estimation is more accurate. You can change the parameter value to enable different methods. This parameter can control multiple methods at the same time: each method is controlled by a bit, and a method is enabled if the bitwise AND of the parameter value and the number corresponding to the method is not 0. This parameter can be set at the PDB level.
- When cost_param & 1 is not 0, an improved mechanism is used for estimating the selectivity of non-equi-joins. This method is more accurate for estimating the selectivity of joins between two identical tables. At present, when cost_param & 1 is not 0, the original path is not used; that is, the better formula is selected for calculation.
- When cost_param & 2 is not 0, the selectivity is estimated based on multiple filter criteria: the lowest selectivity among all filter criteria, rather than the product of the selectivities of the individual filter criteria, is used as the total selectivity. This method is more accurate when the filtered columns are closely correlated.
Parameter type: integer.
Unit: none
Value range: 0 to 2147483647
Default value: 0. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Set this parameter after determining the scenario where the selectivity needs to be adjusted.
Risks and impacts of improper settings: The estimated cost may not meet the expectation. You are advised to confirm the parameter settings after thorough tests.
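Because each method corresponds to a bit, values can be combined. A minimal sketch of session-level settings (the values are only illustrative):
SET cost_param = 1;   -- Enable only the non-equi-join selectivity method (bit 1).
SET cost_param = 2;   -- Enable only the minimum-selectivity method for multiple filter criteria (bit 2).
SET cost_param = 3;   -- Enable both methods, because 3 & 1 and 3 & 2 are both non-zero.
SET cost_param = 0;   -- Restore the default behavior.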
var_eq_const_selectivity
Parameter description: Specifies whether to use the new selectivity model to estimate the selectivity of integer constants. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on: The new selectivity model is used to calculate the selectivity of integer constants.
- If an integer does not fall into the MCV, is not NULL, and falls into the histogram, the left and right boundaries of the histogram are used for estimation. If the integer does not fall into the histogram, the number of rows in the table is used for estimation.
- If the integer is NULL or falls into the MCV, the original logic is used to calculate the selectivity.
- off: The original selectivity calculation model is used.
Default value: off. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. If you need to enable the parameter, fully test and evaluate whether the performance can be improved in the corresponding scenario.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
enable_partitionwise
Parameter description: Specifies whether to select an intelligent algorithm for joining partitioned tables. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on indicates that an intelligent algorithm is selected.
- off indicates that an intelligent algorithm is not selected.
Default value: off. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Before enabling this function, ensure that SMP is enabled. Otherwise, the performance of the join operation on partitioned tables in non-SMP scenarios may be affected.
enable_partition_pseudo_predicate
Parameter description: Specifies whether to rewrite pseudo-predicates to calculate the selectivity of queries on a specified partition. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on indicates that pseudo-predicate rewriting is used.
- off indicates that pseudo-predicate rewriting is not used.
Default value: off. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. If you need to enable the parameter, fully test and evaluate whether the performance can be improved in the corresponding scenario.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
partition_page_estimation
Parameter description: Specifies whether to optimize the estimation of partitioned table pages based on the pruning result. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on: The pruning result is used to optimize the page estimation.
- off: The pruning result is not used to optimize the page estimation.
Default value: off. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. If you need to enable the parameter, fully test and evaluate whether the performance can be improved in the corresponding scenario.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
partition_iterator_elimination
Parameter description: Specifies whether to eliminate the partition iteration operator to improve execution efficiency when partition pruning of a partitioned table results in a single partition. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on: The partition iteration operator is eliminated.
- off: The partition iteration operator is not eliminated.
Default value: on. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If this parameter is disabled, the query performance may deteriorate.
enable_functional_dependency
Parameter description: Specifies whether the multi-column statistics generated by ANALYZE contain functional dependency statistics and whether the functional dependency statistics are used to calculate the selectivity. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on: The multi-column statistics generated after ANALYZE is executed contain functional dependency statistics, and the functional dependency statistics are used to calculate selectivity.
- off: The multi-column statistics generated after ANALYZE is executed do not contain functional dependency statistics, and functional dependency statistics are not used to calculate selectivity.
Default value: off. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. If you need to enable the parameter, fully test and evaluate whether the performance can be improved in the corresponding scenario.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
rewrite_rule
Parameter description: Specifies the optional query rewriting rules that are enabled. Some query rewriting rules are optional, and enabling them does not always improve query efficiency. In a specific customer scenario, you can set the query rewriting rules through this GUC parameter to achieve optimal query efficiency. This parameter can be set at the PDB level.
This parameter specifies the combination of query rewriting rules. For example, if there are multiple rewriting rules rule1, rule2, rule3, and rule4, you can perform settings as follows:
set rewrite_rule=rule1;          -- Enable query rewriting rule rule1.
set rewrite_rule=rule2, rule3;   -- Enable query rewriting rules rule2 and rule3.
set rewrite_rule=none;           -- Disable all optional query rewriting rules.
Parameter type: enumerated type
Unit: none
Value range:
- none: No optional query rewriting rules are used.
- lazyagg: The Lazy Agg query rewriting rules are used to eliminate aggregation operations in subqueries.
- magicset: The Magic Set query rewriting rules are used to associate subqueries which have aggregation operators with the main query in advance to reduce repeated scanning of sublinks.
- uniquecheck: The Unique Check query rewriting rules are used to optimize subquery statements without aggregation in target columns by checking whether the number of returned rows is 1.
- intargetlist: The In Target List query rewriting rules are used to improve subqueries in the target column.
- predpushnormal: The Predicate Push query rewriting rules are used to push the predicate condition to the subquery.
- predpushforce: The Predicate Push query rewriting rules are used to push down predicate conditions to subqueries and use indexes as much as possible for acceleration.
- predpush: The optimal plan is selected based on the cost in predpushnormal and predpushforce.
- disable_pullup_expr_sublink: The optimizer is not allowed to pull up sublinks of the expr_sublink type. For details about sublink classification and pull-up principles, see "SQL Optimization > Typical SQL Optimization Methods > Optimizing Subqueries" in Developer Guide.
- enable_sublink_pullup_enhanced: Enhanced sublink query rewriting rules are used, including pull-up of non-correlated sublinks in WHERE and HAVING clauses and WinMagic rewriting optimization.
- disable_pullup_not_in_sublink: The optimizer is not allowed to pull up sublinks related to NOT IN. For details about sublink classification and pull-up principles, see "SQL Optimization > Typical SQL Optimization Methods > Optimizing Subqueries" in Developer Guide.
- disable_rownum_pushdown: The filter criterion ROWNUM in the parent query cannot be pushed down to the subquery.
- disable_windowagg_pushdown: The filter criterion of the window function in the parent query cannot be pushed down to the subquery.
- cse_rewrite_opt: The common expression CSE is used to replace the common expression in the Having subquery with windowAgg.
- groupby_pushdown_subquery: The Group By clause and aggregate function are pushed down to the subquery.
- enable_sublink_pullup_rownum: The optimizer is allowed to pull up sublinks when the SQL statement contains the ROWNUM pseudocolumn.
Default value: magicset and groupby_pushdown_subquery. In the PDB scenario, if this parameter is not set, the global setting is inherited.

In the current version, the partialpush and disablerep options can be set but do not take effect.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. If other rewriting is necessary, you are advised to specify the query rewriting scenario and enable only the corresponding rewriting rule.
Risks and impacts of improper settings: The cost of rewriting statements in some scenarios may not be optimal. You are advised to use the corresponding rules after thorough tests.
costbased_rewrite_rule
Parameter description: Specifies whether to enable the cost-based evaluation policy for a specified query rewriting rule. Some query rewriting rules support cost-based evaluation of whether to rewrite SQL statements. In this way, the kernel can generate better query plans and improve SQL execution efficiency. In the multi-tenancy scenario, this parameter can be set at the PDB level.
This parameter is mutually exclusive with the GUC parameter rewrite_rule. If both costbased_rewrite_rule and rewrite_rule are set, the rule configuration related to rewrite_rule does not take effect.
This parameter can be used to set multiple cost-based query rewriting rules at the same time; the rules are not mutually exclusive with each other. For example, if there are multiple rewriting rules rule1, rule2, rule3, and rule4, you can perform settings as follows:
set costbased_rewrite_rule=rule1;         -- Enable the cost-based evaluation policy for query rewriting rule rule1.
set costbased_rewrite_rule=rule2,rule3;   -- Enable the cost-based evaluation policy for query rewriting rules rule2 and rule3.
set costbased_rewrite_rule=none;          -- Disable the cost-based evaluation policy for all optional query rewriting rules.
Parameter type: string.
Unit: none
Value range:
- none: No cost-based query rewriting policy is used.
- pullup_subquery: The cost-based query rewriting policy is enabled for the rewriting rules of simple subquery expansion.
- pullup_sublink_any_exists: The cost-based query rewriting policy is enabled for the rewriting rules of non-correlated sublinks of the ANY type and correlated sublinks of the [NOT] EXISTS type in a single or AND condition.
- pullup_not_in_sublink: The cost-based query rewriting policy is enabled for the rewriting rules of non-correlated sublinks of the NOT IN type in a single or AND condition. This option is mutually exclusive with the disable_pullup_not_in_sublink option of the GUC parameter rewrite_rule. When this option is enabled, the disable_pullup_not_in_sublink option of rewrite_rule does not take effect.
- pullup_expr_sublink: The cost-based query rewriting policy is enabled for the rewriting scenarios of expression sublinks, non-correlated sublinks of the ANY type, and correlated sublinks of the [NOT] EXISTS type in the OR condition. This option is mutually exclusive with the disable_pullup_expr_sublink, enable_sublink_pullup_enhanced, and magicset options of the GUC parameter rewrite_rule. When this option is enabled, the corresponding options of rewrite_rule do not take effect.
- intargetlist: The cost-based query rewriting policy is enabled for the rewriting rules of correlated expression sublinks in the target list. This option is mutually exclusive with the intargetlist and magicset options of the GUC parameter rewrite_rule. When this option is enabled, the intargetlist and magicset options of rewrite_rule do not take effect.
- enable_sublink_pullup_enhanced: The cost-based query rewriting policy is enabled for the rewriting rules of expression sublinks in the enhanced scenario. This option is affected by the pullup_expr_sublink option. This option takes effect only when the pullup_expr_sublink option is enabled for the expression sublink rewriting scenario in the AND condition. It is mutually exclusive with the enable_sublink_pullup_enhanced option of the GUC parameter rewrite_rule. When this option is enabled, the enable_sublink_pullup_enhanced option of rewrite_rule does not take effect.
Default value: intargetlist, pullup_expr_sublink, pullup_not_in_sublink, and enable_sublink_pullup_enhanced. In the PDB scenario, if this parameter is not set, the global setting is inherited.

For details about parameter application scenarios, see "SQL Optimization > Optimization Cases > Case: Adjusting the GUC Parameter costbased_rewrite_rule for Cost-based Query Rewriting" in Developer Guide.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
By default, simple subquery expansion (the pullup_subquery option) and non-correlated ANY sublinks and correlated [NOT] EXISTS sublinks (the pullup_sublink_any_exists option) are rewritten based on rules to generate an execution plan. Because enabling the cost-based rewriting policy brings some performance overhead, you are advised not to enable it for these rules.
Risks and impacts of improper settings: If the rewriting rules related to this parameter are disabled, rewriting is performed based on rules. If the cost evaluation is missing, the generated plans in some scenarios may be poor, affecting the query performance.
costbased_rewrite_rule_max_iterations
Parameter description: In the plan generation phase of an SQL statement, if the number of conditions that meet the cost evaluation in the same rule exceeds the value of this parameter, the cost-based evaluation policy is disabled for the conditions that exceed the threshold in the current request and converted to the rule-based rewriting policy. This parameter takes effect when the cost-based evaluation policy is enabled in the query rewriting phase. In the multi-tenancy scenario, this parameter can be set at the PDB level.
Parameter type: integer.
Unit: none
Value range: 0 to 1000
Default value: 10. In the PDB scenario, if this parameter is not set, the global setting is inherited.

- This parameter is affected by the GUC parameter costbased_rewrite_rule. This parameter takes effect when the value of costbased_rewrite_rule is a value other than none.
- If this parameter is set to 0, the cost-based rewriting policy is disabled for the current SQL statement.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Adjust the value based on service requirements. If the value is too large or too small, extra performance overhead may occur, affecting the final query performance.
costbased_rewrite_rule_timeout
Parameter description: In the plan generation phase of an executed SQL statement, if the overall cost evaluation time using each rule exceeds the timeout interval specified by this parameter, the cost-based evaluation policy is disabled for subsequent processes of the current request and the rule-based rewriting policy is used. This parameter takes effect when the cost-based evaluation policy is enabled in the query rewriting phase. In the multi-tenancy scenario, this parameter can be set at the PDB level.
Parameter type: integer.
Unit: millisecond
Value range: –1 to 300000
Default value: –1. In the PDB scenario, if this parameter is not set, the global setting is inherited.

- This parameter is affected by the GUC parameter costbased_rewrite_rule. This parameter takes effect when the value of costbased_rewrite_rule is a value other than none.
- If this parameter is set to 0, the cost-based rewriting policy is disabled for the current SQL statement.
- If this parameter is set to –1, the timeout interval control is disabled for the current SQL statement.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Adjust the value based on service requirements. If the value is too large or too small, extra performance overhead may occur, affecting the final query performance.
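A combined sketch of tuning the two thresholds above together with a cost-based rewriting rule (the specific values are only illustrative):
SET costbased_rewrite_rule = pullup_expr_sublink;   -- Enable a cost-based rewriting rule.
SET costbased_rewrite_rule_max_iterations = 20;     -- Evaluate at most 20 qualifying conditions per rule based on cost.
SET costbased_rewrite_rule_timeout = 5000;          -- Fall back to rule-based rewriting after 5000 ms of cost evaluation.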
enable_pbe_optimization
Parameter description: Specifies whether the optimizer optimizes the query plan for statements executed in parse bind execute (PBE) mode. The optimization principle is to use gplan for FQS. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on: The optimizer optimizes the query plan for statements executed in PBE mode and the gplan is used for FQS.
- off indicates that the optimization is not performed.
Default value: off. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a SUSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. If you do not use gplan for FQS, set this parameter to off.
Risks and impacts of improper settings: If this parameter is disabled, the cplan may be used in some scenarios.
enable_global_plancache
Parameter description: Specifies whether to share the cache for the execution plans of statements in PBE queries and stored procedures. Enabling this function can reduce the memory usage of database nodes in high concurrency scenarios. This parameter must be disabled for the multi-tenant database feature (enable_mtd).
When enable_global_plancache is enabled, local_syscache_threshold is treated as at least 16 MB to ensure that GPC takes effect: if its value is less than 16 MB, 16 MB is used; if its value is greater than 16 MB, the actual value is used.
Parameter type: Boolean.
Unit: none
Value range:
- on indicates that cache sharing is enabled for the execution plans of statements in PBE queries and stored procedures.
- off indicates no sharing.
Default value: off
Setting method: This is a POSTMASTER parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If this parameter is disabled, the corresponding plan cache is not shared and the resource usage increases.
gpc_clean_timeout
Parameter description: Specifies the retention period of a shared plan that is not used. When enable_global_plancache is enabled, if a plan in the shared plan list is not used within the period specified by gpc_clean_timeout, the plan will be deleted.
Parameter type: integer.
Unit: second
Value range: 300 to 86400
Default value: 1800, that is, 30 minutes
Setting method: This is a SIGHUP parameter. Set it based on instructions provided in Table 1. For example, if this parameter is set to 300 without a unit, it indicates 300s. If this parameter is set to 30min, it indicates 30 minutes. If a unit is specified, it must be s, min, h, or d.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If the value is too large, the GPC may occupy too much memory. You are advised to set this parameter to a proper value after thorough tests.
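A hedged sketch of adjusting the retention period, assuming ALTER SYSTEM SET is available for SIGHUP parameters in your deployment (otherwise use the setting method in Table 1):
ALTER SYSTEM SET gpc_clean_timeout = 3600;      -- Keep unused shared plans for 3600s (1 hour).
ALTER SYSTEM SET gpc_clean_timeout = '30min';   -- Equivalent to the default value of 1800s.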
enable_opfusion
Parameter description: Specifies whether to optimize simple addition, deletion, modification, and query operations. This parameter can be set at the PDB level.
The restrictions on simple queries are as follows:
- Only index scan and index-only scan are supported, and the filter criteria of all WHERE statements are on indexes.
- Only single tables can be added, deleted, modified, and queried. JOIN and USING operations are not supported.
- Only row-store tables are supported. Partitioned tables and tables with triggers are not supported.
- The information statistics features for active SQL statements and queries per second (QPS) are not supported.
- Tables that are being scaled out or in are not supported.
- System columns cannot be queried or modified.
- Only simple SELECT statements are supported. For example:
SELECT c3 FROM t1 WHERE c1 = ? and c2 =10;
Only columns of the target table can be queried. Columns c1 and c2 are index columns, which can be followed by constants or parameters. FOR UPDATE is supported.
- Only simple INSERT statements are supported. For example:
INSERT INTO t1 VALUES (?,10,?);
Only one VALUES clause is supported. The values in VALUES can be constants or parameters. RETURNING is not supported.
- Only simple DELETE statements are supported. For example:
DELETE FROM t1 WHERE c1 = ? and c2 = 10;
Columns c1 and c2 are index columns, which can be followed by constants or parameters.
- Only simple UPDATE statements are supported. For example:
UPDATE t1 SET c3 = c3+? WHERE c1 = ? and c2 = 10;
The values modified in column c3 can be constants, parameters, or a simple expression. Columns c1 and c2 are index columns, which can be followed by constants or parameters.
- Simple insertion involving a sequence is supported. For example:
CREATE SEQUENCE SEQ;
INSERT INTO t1 VALUES (10, nextval('SEQ'::regclass));
The second column of t1 is auto-incremented by the sequence. This optimization takes effect only when enable_bypass_insert_sequence is enabled.
Parameter type: Boolean.
Unit: none
Value range:
- on: The optimization is enabled.
- off: The optimization is disabled.
Default value: on. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If this parameter is disabled, the query performance may deteriorate.
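A minimal sketch of a statement that satisfies the restrictions above (the table and index names are hypothetical; a bypassed statement is typically marked as such in the GaussDB EXPLAIN output, but the exact annotation may vary by version):
CREATE TABLE t1 (c1 int, c2 int, c3 int);
CREATE INDEX idx_t1_c1_c2 ON t1 (c1, c2);
SET enable_opfusion = on;
EXPLAIN SELECT c3 FROM t1 WHERE c1 = 1 AND c2 = 10;   -- Simple single-table query on index columns, eligible for the fast path.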
enable_plsql_opfusion
Parameter description: Specifies whether to optimize simple addition, deletion, modification, and query statements in stored procedures to improve SQL execution performance. In the multi-tenancy scenario, this parameter can be set at the PDB level.
For details about the restrictions on simple addition, deletion, modification, and query statements, see enable_opfusion.

This parameter takes effect only when enable_opfusion is enabled.
Parameter type: Boolean.
Unit: none
Value range:
- on: The optimization is enabled.
- off: The optimization is disabled.
Default value: on. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If this parameter is disabled, the query performance may deteriorate.
sql_beta_feature
Parameter description: Specifies the SQL engine's optional beta features to be enabled, including optimization of row count estimation and query equivalence estimation. These optional features provide optimization for specific scenarios, but performance deterioration may occur in untested scenarios. In a specific customer scenario, you can enable the corresponding beta features through this GUC parameter to achieve optimal query efficiency. This parameter can be set at the PDB level.
This parameter specifies a combination of the SQL engine's beta features. For example, if there are multiple features feature1, feature2, feature3, and feature4, you can perform settings as follows:
set sql_beta_feature=feature1;            -- Enable beta feature 1 of the SQL engine.
set sql_beta_feature=feature2,feature3;   -- Enable beta features 2 and 3 of the SQL engine.
set sql_beta_feature=none;                -- Disable all optional SQL engine beta features.
Parameter type: enumerated type
Unit: none
Value range:
- none: uses none of the beta optimizer features.
- sel_semi_poisson: uses Poisson distribution to calibrate the equivalent semi-join and anti-join selectivity.
- sel_expr_instr: uses the matching row count estimation method to provide more accurate estimation for instr(col, 'const') > 0, = 0, = 1.
- param_path_gen: generates more possible parameterized paths.
- rand_cost_opt: optimizes the random read cost of tables that have a small amount of data.
- param_path_opt: uses the bloating ratio of the table to optimize the analysis information of indexes.
- page_est_opt: optimizes the relpages estimation for the analysis information of table indexes.
- no_unique_index_first: disables the optimization that prefers the primary key index scan path.
- join_sel_with_cast_func: supports type conversion functions when the number of join rows is estimated.
- canonical_pathkey: generates the canonical pathkey in advance (a pathkey is a set of ordered key values of data).
After this option is enabled, the output order of statements such as ORDER BY may differ from the standard semantics in outer join scenarios. Contact Huawei technical support to determine whether to enable this option.
- index_cost_with_leaf_pages_only: includes only index leaf pages when the index cost is estimated.
- a_style_coerce: enables the Decode type conversion rule to be compatible with A. For details, see the part related to case processing in A-compatible mode in "SQL Reference > Type Conversion > UNION, CASE, and Related Constructs" in Developer Guide.
- partition_fdw_on: supports the creation of SQL statements related to GaussDB foreign tables based on partitioned tables.
- predpush_same_level: enables the predpush hint to control parameterized paths at the same layer.
- enable_plsql_smp: enables parallel execution of queries in stored procedures. Currently, only one query can be executed in parallel at a time, and no parallel execution plan is generated for autonomous transactions and queries in exceptions.
- disable_bitmap_cost_with_lossy_pages: disables the computation of the cost of lossy pages in the bitmap path cost.
- enable_upsert_execute_gplan: allows execution through gplan in the PBE scenario, if the UPDATE clause in the ON DUPLICATE KEY UPDATE statement contains parameters.
- disable_merge_append_partition: disables the generation of the Merge Append path for partitioned tables.
- disable_fastpath_insert: disables the executor's fast-path optimization for insert operations on partitioned tables.
- disable_text_expr_flatten: disables the function of automatically inlining expressions during comparison between text and numeric types (numeric, bigint).
Default value: none. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. This parameter specifies multiple query optimization and compatibility behaviors. The settings of some options are risky. Therefore, you are advised to use this parameter with caution. If you want to adjust the value, understand the parameter meaning and adjust the value with caution to avoid risks caused by misoperations.
Risks and impacts of improper settings: Sufficient tests are required. Otherwise, unexpected risks may occur.
default_statistics_target
Parameter description: Specifies the default statistics target for table columns without a column-specific target set by running ALTER TABLE SET STATISTICS. This parameter affects only the target number of sampled rows in the statistics. The actual number of sampled rows is also affected by the memory parameter maintenance_work_mem. This parameter can be set at the PDB level.
Parameter type: integer.
Unit: none
Value range: –100 to 10000
- If this parameter is set to a positive value, it indicates the expected number of buckets in the statistics histogram. The number of sampled rows is default_statistics_target multiplied by 300.
- If this parameter is set to a negative value, it indicates the statistical target in percentage. The negative value is converted to the corresponding percentage. For example, -5 indicates 5%. The number of sampled rows is the total number of rows multiplied by 5%.
Default value: 100. In the PDB scenario, if this parameter is not set, the global setting is inherited.

- A larger positive number than the default value increases the time required to do ANALYZE, but might improve the quality of the optimizer's estimates.
- Changing settings of this parameter may result in performance deterioration. If query performance deteriorates, you can:
- Restore to the default statistics.
- Use hints to force the optimizer to use the optimal query plan. For details, see "SQL Optimization > Hint-based Optimization" in Developer Guide.
- If this GUC parameter is set to a negative value, the number of samples is greater than or equal to 2% of the total data volume, and the number of records in user tables is less than 1.6 million, the time taken by running ANALYZE will be longer than that when this parameter uses its default value.
- If this GUC parameter is set to a negative value, AUTOANALYZE does not take effect.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. Before changing settings of this parameter, perform a comprehensive evaluation based on the specific workload and query mode. For tables whose data features are not evenly distributed or whose data volume is large, you may need to manually adjust the value of this parameter.
Risks and impacts of improper settings:
- If the value of default_statistics_target is too large, the time and resource consumption of the ANALYZE operation may increase because more sample data needs to be collected to generate statistics. This can result in increased overhead for database maintenance, especially on large tables.
- If the value of default_statistics_target is too small, the accuracy of statistics may be reduced. As a result, the query optimizer's capability of generating efficient query plans is affected, and the query performance deteriorates.
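A short sketch of the sampling arithmetic and of overriding the target for a single column (the table t1 and column c1 are hypothetical; the per-column syntax is the ALTER TABLE SET STATISTICS form referenced in the description):
SET default_statistics_target = 100;                 -- Positive value: about 100 x 300 = 30000 rows are sampled.
SET default_statistics_target = -5;                  -- Negative value: about 5% of the table rows are sampled.
ALTER TABLE t1 ALTER COLUMN c1 SET STATISTICS 500;   -- Column-specific target that overrides the parameter for c1.
ANALYZE t1;                                          -- Regenerate statistics using the targets above.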
auto_statistic_ext_columns
Parameter description: Collects statistics about multiple columns based on the first auto_statistic_ext_columns columns of the composite index in the data table. For example, if a composite index is (a,b,c,d,e) and the GUC parameter is set to 3, statistics about multiple columns are generated on columns (a,b) and (a,b,c). Multi-column statistics can enable the optimizer to estimate the cardinality more accurately when combined conditions are used for query. This parameter can be set at the PDB level.

- This parameter does not take effect on system catalogs.
- The statistics take effect only when the types of all columns support the comparison functions '=' and '<'.
- System pseudocolumns in indexes, such as tableoid and ctid, are not collected.
- By default, distinct values, MCVs without NULL, and MCVs with NULL are collected. If the AI-based cardinality estimation parameter enable_ai_stats is enabled, MCVs are not collected. Instead, models for AI-based cardinality estimation are collected.
- If the index for creating multi-column statistics is deleted and no other index contains the multi-column combination, the multi-column statistics will be deleted in the next ANALYZE operation.
- If the value of this parameter decreases, the new index generates multi-column statistics based on the value of this parameter. The generated multi-column statistics that exceed the value of this parameter will not be deleted.
- If you want to disable the multi-column statistics on a specific combination only, you can retain the value of this parameter and run the DDL command ALTER TABLE tablename disable statistics ((column list)) to disable the statistics on multiple columns in a specific combination.
Parameter type: integer.
Unit: none
Value range: 1 to 4. 1 indicates that statistics about multiple columns are not automatically collected.
Default value: 1. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. When adjusting the value of this parameter, consider the following factors:
- Table size and data characteristics, such as multi-column correlation.
- Query mode, especially the columns and indexes involved in the query.
- Available system resources, such as CPU, memory, and storage space.
Risks and impacts of improper settings:
- If the value is too large, ANALYZE operations may be performed frequently, which increases the database maintenance overhead, especially for large tables.
- If the value is too small, the correlation between columns in the table may not be fully captured. As a result, the query optimizer cannot generate the optimal query plan.
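The example from the description, spelled out as a hedged sketch (the table and index names are hypothetical; the DISABLE STATISTICS syntax is the DDL quoted in the note above):
SET auto_statistic_ext_columns = 3;
CREATE TABLE t2 (a int, b int, c int, d int, e int);
CREATE INDEX idx_t2 ON t2 (a, b, c, d, e);
ANALYZE t2;                                   -- Multi-column statistics are generated on (a,b) and (a,b,c).
ALTER TABLE t2 DISABLE STATISTICS ((a, b));   -- Disable the statistics on one specific column combination.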
constraint_exclusion
Parameter description: Specifies whether the query optimizer uses table constraints to optimize queries. This parameter can be set at the PDB level.
Parameter type: enumerated type
Unit: none
Value range:
- on indicates that constraints for all tables are examined.
- off indicates that constraints are not examined for any table.
- partition indicates that only constraints for inheritance child tables and UNION ALL subqueries are examined.
Default value: partition. In the PDB scenario, if this parameter is not set, the global setting is inherited.

- When constraint_exclusion is set to on, the optimizer compares query conditions with the table's CHECK constraints, and omits scanning tables for which the conditions contradict the constraints.
- Currently, constraint_exclusion is enabled by default only for cases that are often used to implement table partitioning. If this parameter is enabled for all tables, extra planning is imposed on simple queries, which has no benefits. If you have no partitioned tables, disable the parameter.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. If table constraints are frequently used in service queries and these constraints help optimize the query, it may be beneficial to keep enabling this parameter. Otherwise, if these constraints are not helpful for query optimization, disabling this parameter may improve performance.
Risks and impacts of improper settings: The query performance may be affected.
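A minimal sketch of the CHECK-constraint comparison described above (the table is hypothetical):
SET constraint_exclusion = on;
CREATE TABLE sales_2023 (id int, sale_date date,
    CHECK (sale_date >= '2023-01-01' AND sale_date < '2024-01-01'));
EXPLAIN SELECT * FROM sales_2023 WHERE sale_date >= '2025-01-01';
-- The filter contradicts the CHECK constraint, so the optimizer can omit scanning the table.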
cursor_tuple_fraction
Parameter description: Specifies the optimizer's estimated fraction of a cursor's rows that are retrieved. This parameter can be set at the PDB level.
Parameter type: floating point.
Unit: none
Value range: 0 to 1
Default value: 0.1. In the PDB scenario, if this parameter is not set, the global setting is inherited.

Smaller values of this setting bias the optimizer towards using fast start plans for cursors, which will retrieve the first few rows quickly while perhaps taking a long time to fetch all rows. Larger values put more emphasis on the total estimated time. At the maximum setting of 1.0, cursors are planned exactly like regular queries, considering only the total estimated time and not how soon the first rows might be delivered.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. If most rows in the table are frequently accessed, you can increase the value of this parameter.
Risks and impacts of improper settings: The performance may deteriorate.
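A sketch of the cursor scenario this parameter targets (the table and cursor names are hypothetical):
SET cursor_tuple_fraction = 0.05;   -- Bias the optimizer toward fast-start plans for cursors.
BEGIN;
DECLARE cur1 CURSOR FOR SELECT * FROM t1 ORDER BY c1;
FETCH 10 FROM cur1;                 -- Only the first few rows are needed, so a fast-start plan pays off.
CLOSE cur1;
COMMIT;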
from_collapse_limit
Parameter description: Specifies the maximum number of items in the resulting FROM list for which the optimizer merges subqueries into the upper-level query. The optimizer merges a subquery into the upper-level query only if the resulting FROM list would have no more than this many items. This parameter can be set at the PDB level.
Parameter type: integer.
Unit: none
Value range: 1 to 2147483647
Default value: 8. In the PDB scenario, if this parameter is not set, the global setting is inherited.

Smaller values reduce planning time but may lead to inferior execution plans.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. If multiple subqueries need to be processed frequently in a query and these subqueries can be effectively combined to improve query performance, you can increase the value of this parameter. Otherwise, if merging subqueries deteriorates performance or increases the time for the optimizer to generate an execution plan, you can reduce the value.
Risks and impacts of improper settings: The performance may deteriorate.
join_collapse_limit
Parameter description: Specifies the maximum number of items in the resulting FROM list for which the optimizer rewrites explicit JOIN constructs (except FULL JOINs) into lists of FROM items. The rewrite is performed only if the resulting list would have no more than this many items. This parameter can be set at the PDB level.
Parameter type: integer.
Unit: none
Value range: 1 to 2147483647
Default value: 8. In the PDB scenario, if this parameter is not set, the global setting is inherited.

- Setting this parameter to 1 prevents join reordering. As a result, the join order specified in the query will be the actual order in which the relations are joined. The query optimizer does not always choose the optimal join order. Therefore, advanced users can temporarily set this parameter to 1, and then specify the join order they desire explicitly.
- Smaller values reduce planning time but lead to inferior execution plans.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. Decreasing the value reduces the planning time but may reduce the plan generation quality. Increasing the value increases the planning time but may generate a better plan.
Risks and impacts of improper settings: There is a trade-off between planning time and plan quality; improper settings may make one of the two unacceptable.
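A sketch of forcing the written join order, as described in the note above (the tables are hypothetical):
SET join_collapse_limit = 1;    -- Prevent join reordering for the current session.
SELECT *
  FROM t1
  JOIN t2 ON t1.c1 = t2.c1
  JOIN t3 ON t2.c2 = t3.c2;     -- Joined exactly as written: (t1 JOIN t2) JOIN t3.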
plan_mode_seed
Parameter description: This is a debugging parameter. Currently, it supports only OPTIMIZE_PLAN and RANDOM_PLAN. This parameter can be set at the PDB level.
Parameter type: integer.
Unit: none
Value range: –1 to 2147483647
- 0: OPTIMIZE_PLAN mode. In this mode, the dynamic planning algorithm is used to estimate the cost and generate the optimal plan.
- -1: RANDOM_PLAN mode. In this mode, a plan is randomly generated, and the seed value for generating a random number is not specified. The optimizer randomly generates an integer value within the range of 1 to 2147483647 and generates a random execution plan based on the random number.
- Integer within the range of 1 to 2147483647: RANDOM_PLAN mode. In this mode, a plan is randomly generated, and the seed value for generating the random number is specified by the user. The optimizer generates a random execution plan based on the seed value.
Default value: 0. In the PDB scenario, if this parameter is not set, the global setting is inherited.

- If this parameter is not set to 0, the specified plan hint will not be used.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: When this parameter is set to the RANDOM_PLAN mode, the optimizer randomly generates an execution plan. The execution plan may not be the optimal one, which affects the query performance. Therefore, you are advised to set this parameter to 0 during normal service operations or O&M, such as upgrade, scale-out, and scale-in.
Risks and impacts of improper settings: The performance may deteriorate.
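A short sketch of switching between the two modes (for debugging only; the seed value is illustrative):
SET plan_mode_seed = 0;        -- OPTIMIZE_PLAN: generate the cost-optimal plan (normal operation).
SET plan_mode_seed = 12345;    -- RANDOM_PLAN with a fixed seed: the same random plan can be reproduced.
SET plan_mode_seed = -1;       -- RANDOM_PLAN with a seed chosen randomly by the optimizer.
SET plan_mode_seed = 0;        -- Switch back before running normal services.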
hashagg_table_size
Parameter description: Specifies the hash table size during the execution of the HASH AGG operation. This parameter can be set at the PDB level.
Parameter type: integer.
Unit: none
Value range: 0 to 1073741823. 0 indicates that the database automatically adjusts the size of the hash table as required.
Default value: 0. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. In actual applications, if you encounter specific query scenarios, for example, aggregation operations that process a large amount of data, you may need to manually adjust this parameter to optimize performance.
Risks and impacts of improper settings: Increasing the size of the hash table can reduce the disk I/O in the HASH AGG operation because more data can be retained in the memory. However, if the size of the hash table is too large, too much memory may be occupied, resulting in insufficient memory. If this parameter is set to a small value, the memory may not be effectively used, resulting in more disk I/O operations and decreasing the query speed.
enable_codegen
Parameter description: Specifies whether code optimization is enabled. Currently, LLVM is used for code optimization. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on indicates that code optimization is enabled.
- off indicates that code optimization is disabled.
Default value: off. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If this parameter is enabled, resource usage may increase in expression query scenarios.
codegen_compile_thread_num
Parameter description: Specifies the number of Codegen compilation threads.
Parameter type: integer.
Unit: none
Value range: 1 to 8
Default value: 1
Setting method: This is a SIGHUP parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. Change the value when the number of concurrent services is large and the service performance bottleneck occurs during expression execution.
Risks and impacts of improper settings: If the number of threads is too large, the system performance may deteriorate. However, when there are a large number of concurrent services, you can increase the number of threads to improve the throughput performance.
llvm_max_memory
Parameter description: Specifies the maximum memory occupied by IRs (including cached and in-use IRs) generated during Codegen compilation. The memory used by Codegen is not pre-allocated; it is part of max_dynamic_memory and is limited by the llvm_max_memory parameter.
Parameter type: integer.
Unit: KB
Value range: 0 to 2147483647. If the memory occupied by IRs exceeds the specified value, the original recursive execution logic is used instead of the Codegen execution logic. When the upper limit is reached and a downgrade is triggered, decreasing the value of llvm_max_memory does not immediately release the memory occupied by the extra IRs; that memory is released after the corresponding SQL statements finish execution.
Default value: 131072, that is, 128 MB.
Setting method: This is a SIGHUP parameter. Set it based on instructions provided in Table 1. For example, if this parameter is set to 100 without a unit, it indicates 100 KB. If this parameter is set to 16MB, it indicates 16 MB. If a unit is specified, it must be KB, MB, or GB.
Setting suggestion: Retain the default value. Change the value when the value of llvm_used_memory in the gs_total_memory_detail view reaches the upper limit of the default value and the service performance bottleneck lies in the expression execution process.
Risks and impacts of improper settings:
- If the parameter is set to an excessively small value, the system does not use the Codegen execution logic, which limits the use of this feature.
- If the parameter is set to an excessively large value, LLVM compilation may occupy too many resources needed by other threads. As a result, the overall system performance deteriorates.
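As a hedged illustration of the check suggested above, the following query reads the current Codegen memory usage from the gs_total_memory_detail view; it assumes that the view exposes a memorytype column and that llvm_used_memory appears as one of its rows.
-- Check how much memory cached and in-use IRs currently occupy before tuning llvm_max_memory.
SELECT * FROM gs_total_memory_detail WHERE memorytype = 'llvm_used_memory';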
enable_codegen_print
Parameter description: Specifies whether the LLVM IR function can be printed in logs. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on indicates that the IR function can be printed in logs.
- off indicates that the IR function cannot be printed in logs.
Default value: off. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value and enable the parameter during fault locating.
Risks and impacts of improper settings: If this parameter is enabled, a large number of logs are generated, occupying disk I/O resources. As a result, the read and write performance of the database system frontend query deteriorates.
codegen_cost_threshold
Parameter description: The LLVM compilation takes some time to generate executable machine code. Therefore, LLVM compilation is beneficial only when the actual execution cost is more than the sum of the cost required for generating machine code and the optimized execution cost. Parameter codegen_cost_threshold specifies a threshold. If the estimated execution cost exceeds the threshold, LLVM optimization is performed. codegen uses plan_rows of the execution operator as the cost to compare with the value of codegen_cost_threshold. You can run the explain command to view the value of plan_rows. This parameter can be set at the PDB level.
Parameter type: integer.
Unit: none
Value range: 0 to 2147483647
Default value: 100000. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. You can change the value only when you need to change the threshold for triggering the codegen mechanism.
Risks and impacts of improper settings: Change the parameter value after you fully understand the parameter meaning and test the parameter.
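A minimal sketch of the comparison described above: run EXPLAIN and read the optimizer's row estimate (plan_rows) of the relevant operator. The table and column names are hypothetical.
-- If the rows estimate of an expression-heavy operator exceeds codegen_cost_threshold
-- and enable_codegen is on, LLVM optimization is applied to that operator.
EXPLAIN SELECT count(*) FROM t1 WHERE col1 > 100;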
enable_bloom_filter
Parameter description: Specifies whether the Bloom filter optimization can be used. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on indicates that the Bloom filter optimization can be used.
- off indicates that the Bloom filter optimization cannot be used.
Default value: on. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Risks and impacts of improper settings: When the data volume is small, using Bloom filter for optimization may cause performance deterioration.
bloom_filter_build_max_rows
Parameter description: Specifies the maximum data threshold on the build side of the hash join for creating a Bloom filter when Bloom filter optimization is enabled. If the data volume on the build side is greater than the value of this parameter, no Bloom filter is created. In the multi-tenancy scenario, this parameter can be set at the PDB level.
Parameter type: integer.
Unit: none
Value range: 1 to 2147483647
Default value: 2000000. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1. This parameter takes effect only when the enable_bloom_filter parameter is enabled.
Setting suggestion: If the storage and computing resources are sufficient, you can increase the value of this parameter to achieve better performance.
Risks and impacts of improper settings: When the storage and computing resources are insufficient, query_dop is high (greater than 16), and the value is too large (greater than 2000000), the system performance may deteriorate.
bloom_filter_apply_threshold
Parameter description: Specifies the minimum data threshold on the apply side of the hash join for creating a Bloom filter when Bloom filter optimization is enabled. If the data volume on the apply side is less than the value of this parameter, no Bloom filter is created. In the multi-tenancy scenario, this parameter can be set at the PDB level.
Parameter type: integer.
Unit: none
Value range: 1 to 2147483647
Default value: 10000. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1. This parameter takes effect only when the enable_bloom_filter parameter is enabled.
Setting suggestion: Set this parameter to an integer ranging from 1000 to 20000.
Risks and impacts of improper settings: If this parameter is set to a large value (greater than 20000), scenarios in which adding a Bloom filter could improve performance are missed. If this parameter is set to a small value (less than 1000), a Bloom filter is added for queries with a small amount of data, which deteriorates performance.
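A hedged sketch of adjusting the Bloom filter thresholds for one session; the values shown are the defaults and are illustrative only, and they take effect only while enable_bloom_filter is on.
SET enable_bloom_filter = on;
SET bloom_filter_build_max_rows = 2000000;   -- upper limit on the build side
SET bloom_filter_apply_threshold = 10000;    -- lower limit on the apply side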
scan_wait_for_bloom_filter
Parameter description: Specifies whether the scanning operator waits for Bloom filter creation to complete. In the multi-tenancy scenario, this parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on: The operator can start scanning only after Bloom filter is created.
- off: The operator can start scanning without waiting for the completion of Bloom filter creation. After Bloom filter is created, if the scanning is not complete, Bloom filter will be used in the next scanning batch.
Default value: on. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1. This parameter takes effect only when the enable_bloom_filter parameter is enabled.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Change the parameter value after you fully understand the parameter meaning and test the parameter.
enable_extrapolation_stats
Parameter description: Specifies whether the extrapolation logic is used for data of the date type based on historical statistics. Using this logic can improve estimation accuracy for tables whose statistics are not collected in a timely manner, but the estimates may be too large due to inference errors. You are advised to enable this parameter when date-type data is inserted periodically. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on indicates that the extrapolation logic is used.
- off indicates that the extrapolation logic is not used.
Default value: off. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a SUSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. Before enabling enable_extrapolation_stats, evaluate how frequently the data changes and how it is queried. Enabling this parameter can help when data changes quickly and the optimizer frequently has to plan with statistics that have not yet been refreshed. Before going live, test adequately in a test environment to determine the impact of extrapolation statistics on performance. After enabling them, closely monitor query performance and the accuracy of statistics to ensure that no performance problems are introduced and that statistics accuracy is not greatly reduced.
Risks and impacts of improper settings: If this parameter is enabled, the query performance may deteriorate due to inference errors.
query_dop
Parameter description: Specifies the user-defined degree of parallelism (DOP). After the SMP function is enabled, the system uses the specified degree of parallelism. This parameter can be set at the PDB level.
Parameter type: integer.
Unit: none
Value range: 1 to 64. 1 indicates that parallel query is disabled.
Default value: 1. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. When resources such as the CPU, memory, I/O, and network bandwidth are sufficient, a higher degree of parallelism generally brings greater performance improvement.
Risks and impacts of improper settings: Change the parameter value after you fully understand the parameter meaning and test the parameter.

After enabling parallel query, ensure that sufficient CPU, memory, and network bandwidth resources are available to achieve optimal performance.
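A minimal sketch, assuming a hypothetical table named sales, of raising the degree of parallelism for one session and inspecting the resulting plan:
SET query_dop = 4;                  -- allow up to 4 parallel threads for this session
EXPLAIN SELECT count(*) FROM sales;
SET query_dop = 1;                  -- restore serial execution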
enable_analyze_check
Parameter description: Specifies whether to check, during plan generation, whether statistics have been collected for tables whose reltuples and relpages are displayed as 0 in pg_class. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on indicates that the tables will be checked.
- off indicates that the tables will not be checked.
Default value: off. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a SUSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. Enabling the check adds some plan generation overhead but ensures that statistics have been collected. If most tables whose reltuples and relpages are 0 in pg_class do not require additional statistics collection, you can disable the check.
Risks and impacts of improper settings: Change the parameter value after you fully understand the parameter meaning and test the parameter.
enable_sonic_hashagg
Parameter description: Specifies whether to use the hash aggregation operator designed for column-oriented hash tables when certain constraints are met. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on indicates that the hash aggregation operator designed for column-oriented hash tables is used when certain constraints are met.
- off indicates that the hash aggregation operator designed for column-oriented hash tables is not used.

- If enable_sonic_hashagg is enabled and the hash aggregation operator designed for column-oriented hash tables is used when the query meets the constraints, the memory usage of the hash aggregation operator is reduced. However, in scenarios where enable_codegen is enabled and code generation significantly improves performance, the performance of this operator may deteriorate.
- If enable_sonic_hashagg is enabled and the query meets the constraints, the operator is displayed as Sonic Hash Aggregation in the execution plan and in the execution information of Explain Analyze/Performance; if the query does not meet the constraints, the operator is displayed as Hash Aggregation (see the sketch after this parameter's description). For details, see "SQL Optimization > Introduction to the SQL Execution Plan > Description" in Developer Guide.
Default value: on. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If this parameter is disabled, the query performance may deteriorate in this scenario.
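As a hedged illustration of the operator names mentioned above, the following checks which aggregation operator a query actually uses; the table and column names are hypothetical.
SET enable_sonic_hashagg = on;
-- The plan shows "Sonic Hash Aggregation" when the constraints are met,
-- and "Hash Aggregation" otherwise.
EXPLAIN ANALYZE SELECT col1, count(*) FROM t1 GROUP BY col1;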
enable_sonic_hashjoin
Parameter description: Specifies whether to use the hash join operator designed for column-oriented hash tables when certain constraints are met. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on indicates that the hash join operator designed for column-oriented hash tables is used when certain constraints are met.
- off indicates that the hash join operator designed for column-oriented hash tables is not used.

- Currently, this parameter applies only to Inner Join.
- When enable_sonic_hashjoin is enabled, the memory usage of queries that use the Hash Inner Join operator is reduced. However, in scenarios where code generation can significantly improve performance, the performance of this operator may deteriorate.
- When enable_sonic_hashjoin is enabled and the query meets the constraints, the operator is displayed as Sonic Hash Join in the execution plan and in the execution information of Explain Analyze/Performance; if the query does not meet the constraints, the operator is displayed as Hash Join. For details, see "SQL Optimization > Introduction to the SQL Execution Plan > Description" in Developer Guide.
Default value: on. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If this parameter is disabled, the query performance may deteriorate in this scenario.
enable_sonic_optspill
Parameter description: Specifies whether to optimize the number of files to be written to disks for the hash join operator designed for column-oriented hash tables. If this parameter is enabled, the number of files written to disks does not increase significantly when the hash join operator writes a large number of files to disks. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on indicates that the number of files to be written to disks for the hash join operator designed for column-oriented hash tables is optimized.
- off indicates that the number of files to be written to disks for the hash join operator designed for column-oriented hash tables is not optimized.
Default value: on. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If this parameter is disabled, the query performance may deteriorate in this scenario.
plan_cache_mode
Parameter description: Specifies the policy for generating an execution plan in the prepared statement. This parameter can be set at the PDB level.
Parameter type: enumerated type
Unit: none
Value range: auto, force_generic_plan, and force_custom_plan
- auto: The optimizer automatically selects a custom plan or generic plan.
- force_generic_plan indicates that the generic plan is forcibly used (soft parse). A generic plan is generated when the prepared statement is created, and parameters are bound to the plan when it is executed with the EXECUTE statement. The advantage is that the optimizer overhead is not repeated for each execution. The disadvantage is that the plan may be suboptimal when the bound parameter values are skewed, which can lead to poor execution performance. The parameter types bound during the first execution are retained; if a value of a different type is later supplied for the same placeholder, an error is reported.
- force_custom_plan indicates that the custom plan is forcibly used (hard parse). A custom plan is generated for each execution of the prepared statement, with the parameters of the EXECUTE statement embedded. Because the plan is generated for the specific parameter values each time, execution performance is good. The disadvantage is that the plan must be regenerated before each execution, which incurs a large amount of repeated optimizer overhead.

This parameter is valid only for prepared statements. It is used when the parameterized field in a prepared statement has severe data skew. A usage sketch is provided after this parameter's description.
Default value: auto. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Set this parameter based on the actual service scenario.
Risks and impacts of improper settings: The plan generation overhead may increase or the plan generation quality may deteriorate.
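A minimal sketch of the prepared-statement workflow that this parameter controls; the table and column names are hypothetical.
SET plan_cache_mode = force_custom_plan;  -- regenerate the plan for each parameter set
PREPARE q(text) AS SELECT count(*) FROM orders WHERE status = $1;
EXECUTE q('shipped');
EXECUTE q('pending');
DEALLOCATE q;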
enable_hypo_index
Parameter description: Specifies whether the optimizer creates virtual indexes when executing the EXPLAIN command. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on: A virtual index is created when the EXPLAIN command is executed.
- off: No virtual index is created when the EXPLAIN command is executed.
Default value: off. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. You can enable this parameter when evaluating whether index creation can improve performance.
Risks and impacts of improper settings: Change the parameter value after you fully understand the parameter meaning and test the parameter.
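A hedged sketch, assuming the hypopg_create_index function of the virtual index feature is available in this environment; the table, column, and index definition are hypothetical.
SET enable_hypo_index = on;
-- Register a virtual (hypothetical) index and let EXPLAIN evaluate it without building it.
SELECT * FROM hypopg_create_index('CREATE INDEX ON t1(col1)');
EXPLAIN SELECT * FROM t1 WHERE col1 = 42;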
enable_force_vector_engine
Parameter description: Specifies whether to forcibly generate a vectorized execution plan when a vectorized execution operator has a non-vectorized child node. When this parameter is set to on, vectorized execution plans are forcibly generated. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on indicates that vectorized operators are forcibly generated.
- off indicates that the optimizer determines whether to generate vectorized operators.
Default value: off. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. You are advised to enable this function in vectorized service scenarios.
Risks and impacts of improper settings: If this parameter is enabled, the query performance may deteriorate.
enable_auto_explain
Parameter description: Specifies whether to enable automatic printing of execution plans. This parameter can be used to locate slow stored procedures or queries. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on: enabled.
- off: disabled.
Default value: off. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. Enable this parameter only when you need to view execution plans, because doing so causes the current system performance to deteriorate.
Risks and impacts of improper settings: After this function is enabled, the current system performance may deteriorate.
auto_explain_level
Parameter description: Specifies the log level for automatically printing execution plans. This parameter can be set at the PDB level.
Parameter type: enumerated type
Unit: none
Value range:
- log: Execution plans are printed as logs.
- notice: Execution plans are printed as notices.
Default value: log. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Change the parameter value after you fully understand the parameter meaning and test the parameter.
auto_explain_log_min_duration
Parameter description: Specifies the minimum execution duration for which execution plans are automatically printed. Only the plans of statements whose execution time exceeds the value of auto_explain_log_min_duration are printed. For example, if this parameter is set to 0, the plans of all executed statements are printed; if it is set to 3000, only the plans of statements that take more than 3000 ms to execute are printed. This parameter can be set at the PDB level.
Parameter type: integer.
Unit: millisecond
Value range: 0 to 2147483647
Default value: 0. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1. For example, if this parameter is set to 100 without a unit, it indicates 100 ms. If this parameter is set to 2min, it indicates 2 minutes. The unit must be ms, s, min, h, or d if required.
Setting suggestion: Retain the default value. You can adjust the value based on service requirements to output slow query statements.
Risks and impacts of improper settings: If the value is too small, too much output content may be generated.
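A minimal sketch combining the three auto-explain parameters for one troubleshooting session:
SET enable_auto_explain = on;
SET auto_explain_level = notice;              -- print plans as NOTICE messages
SET auto_explain_log_min_duration = 1000;     -- only statements slower than 1000 ms
-- ... run the suspect stored procedure or query here ...
SET enable_auto_explain = off;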
enable_smp_dml
Parameter description: Specifies whether DML statements can be executed in parallel.
Parameter type: Boolean.
Unit: none
Value range:
- on: DML statements can be executed in parallel.
- off: DML statements cannot be executed in parallel.
Default value: on
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If this parameter is disabled, the performance may deteriorate.
enable_startwith_debug
Parameter description: Specifies whether to display information about START WITH and CONNECT BY for debugging. If this parameter is enabled, information about all tail columns related to the START WITH and CONNECT BY features is displayed. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on: enabled.
- off: disabled.
Default value: off. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value and enable the parameter during fault locating.
Risks and impacts of improper settings: Change the parameter value after you fully understand the parameter meaning and test the parameter.
enable_inner_unique_opt
Parameter description: Specifies whether to apply the Inner Unique optimization to nested-loop join, hash join, and sort-merge join, that is, whether to reduce the number of match attempts when the inner-table attribute in the join condition satisfies a uniqueness constraint. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on: used.
- off: not used.
Default value: on. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If this parameter is disabled, the query performance of the corresponding scenarios may deteriorate.
enable_indexscan_optimization
Parameter description: Specifies whether to optimize B-tree index scan (IndexScan and IndexOnlyScan) in the Astore. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on: used.
- off: not used.
Default value: on. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If this parameter is disabled, the query performance of the corresponding scenarios may deteriorate.
enable_uniq_idx_a_compat
Parameter description: Specifies whether composite unique indexes are compatible with the A database for NULL values. In the multi-tenancy scenario, this parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on: compatible.
- off: incompatible.
Default value: off. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: This parameter is applicable only to A-compatible databases. Retain the default value.
Risks and impacts of improper settings: Improper settings may affect compatibility or increase query planning overheads.
immediate_analyze_threshold
Parameter description: Specifies the threshold for automatically analyzing inserted data. When the amount of data inserted at a time reaches the original data amount multiplied by the value of immediate_analyze_threshold and the total number of rows of the original and new data exceeds 100, ANALYZE is automatically triggered.
Parameter type: integer.
Unit: none
Value range: 0 to 1000. If this parameter is set to 0, this function is disabled.
Default value: 0
Setting method: This is a SIGHUP parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Set this parameter to a small value for tables whose data changes rapidly and whose statistics need to be updated continuously. Set this parameter to a large value for tables whose statistics fluctuate greatly only after a certain amount of data is reached.
Risks and impacts of improper settings: If the value is too large, statistics may not be updated in a timely manner. If the value is too small, the overhead of statistics analysis may be high.

- This function supports only permanent and unlogged tables. Temporary tables are not supported.
- ANALYZE is not automatically triggered twice within 10 seconds for the same table.
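As an illustrative worked example of the trigger condition described above: with immediate_analyze_threshold set to 5, a table that already holds 1,000 rows triggers an automatic ANALYZE when a single insertion adds at least 5,000 rows (5 x 1,000) and the combined row count of the original and new data exceeds 100.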
enable_invisible_indexes
Parameter description: Specifies whether the optimizer can use invisible indexes. This parameter can be set at the PDB level.

After an index is set to invisible, the performance of query statements may be affected. If you do not want to change the index visibility status and want to use invisible indexes, set enable_invisible_indexes to on.
Parameter type: Boolean.
Unit: none
Value range:
- on: The optimizer can use invisible indexes.
- off: The optimizer cannot use invisible indexes.
Default value: off. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If invisible indexes are used but this parameter is not enabled, invisible indexes may be ignored. Therefore, a better plan may not be considered.
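A minimal sketch, with hypothetical object names, of letting the optimizer consider invisible indexes for one session without changing any index's visibility status (the statement that marks an index invisible is not shown here):
SET enable_invisible_indexes = on;
EXPLAIN SELECT * FROM t1 WHERE col1 = 42;   -- the plan may now use an invisible index on col1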
enable_dynamic_samplesize
Parameter description: Specifies whether to dynamically adjust the number of sampled rows. For a large table with more than one million rows, the number of sampled rows is dynamically adjusted during statistics collection to improve statistics accuracy. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on: indicates that the function is enabled.
- off: indicates that the function is disabled.
Default value: on. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If this parameter is disabled, the accuracy of statistics may decrease.

The function of dynamically adjusting the number of sampled rows supports only absolute sampling.
stats_history_record_limit
Parameter description: Specifies the maximum number of historical statistics that can be retained for each object (including tables, columns, partitions, and indexes). When collecting statistics about an object, the system saves the statistics to the historical statistics table. When the number of statistics about the object in the historical statistics table reaches the threshold and new statistics are collected, the earlier statistics are deleted.
Parameter type: integer.
Unit: none
Value range: 0 to 100
Default value: 10
Setting method: This is a SIGHUP parameter. Set it based on instructions provided in Table 1.
Setting suggestion: The default value is recommended. If you need to record statistics of more historical versions, increase the value of this parameter. However, the performance of ANALYZE may be affected.
Risks and impacts of improper settings: If the value is too large, the ANALYZE performance may be affected.
stats_history_retention_time
Parameter description: Specifies the retention period of historical statistics about each object (including tables, columns, partitions, and indexes). When collecting statistics about an object, the system saves the statistics to the historical statistics table. If the retention period of the statistics about the object in the historical statistics table exceeds the threshold, the system deletes the statistics that exceed the retention period when collecting new statistics.
Parameter type: floating point.
Unit: day
Value range: –1 or a value ranging from 0 to 365000. –1 indicates that the history statistics are not cleared over time.
Default value: 31
Setting method: This is a SIGHUP parameter. Set it based on instructions provided in Table 1.
Setting suggestion: The default value is recommended. If you need to record statistics of earlier versions, increase the value of this parameter. However, the performance of ANALYZE may be affected.
Risks and impacts of improper settings: If the value is too large, the ANALYZE performance may be affected.
default_statistic_granularity
Parameter description: Specifies which partition-level statistics of a partitioned table are collected by default when PARTITION MODE is not specified. This parameter does not take effect for non-partitioned tables. This parameter can be set at the PDB level.
Parameter type: enumerated type
Unit: none
Value range:
- all: Statistics about the entire table, level-1 partitions, and level-2 partitions are collected.
- global: Statistics about the entire table are collected.
- partition: Statistics about level-1 partitions are collected.
- global_and_partition: Statistics about the entire table and level-1 partitions are collected.
- subpartition: Statistics about level-2 partitions are collected.
- all_complete: Statistics about the entire table, level-1 partitions, and level-2 partitions are collected.
Default value: all. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value. If partition-level statistics need to be collected, you can set this parameter as required. However, the ANALYZE performance may be affected.
Risks and impacts of improper settings: You need to balance the maintenance overhead and the accuracy of statistics. Improper settings may cause high costs for one party.
enable_fast_numeric_agg
Parameter description: Specifies whether to enable aggregation optimization for the numeric data type. In the multi-tenancy scenario, this parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on: The aggregation optimization is enabled for the numeric data type.
- off: The aggregation optimization is disabled for the numeric data type.
Default value: on. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If this parameter is disabled, the query performance of the corresponding scenarios may deteriorate.
planmgr_gplan_cost_max_ratio
Parameter description: Specifies the upper cost limit allowed for a probed generic plan. If the cost of the generic plan is more than planmgr_gplan_cost_max_ratio times the average cost of the custom plans, only the custom plan is used. If this parameter is set to a value smaller than 1e-6, attempts to use the generic plan are not restricted.
Parameter type: floating point.
Unit: none
Value range: 0 to DBL_MAX
Default value: 5
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If the value is too small, the performance overhead of executing the generic plan may increase during plan selection.
enable_planmgr_cplan_opt
Parameter description: Specifies whether to use an adaptive plan for interception when the optimizer uses a custom plan. If the interception is successful, the adaptive plan is used. Otherwise, the custom plan is still used.
Parameter type: Boolean.
Unit: none
Value range:
- on: The adaptive plan is used for interception.
- off: The adaptive plan is not used for interception.
Default value: on
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: none
enable_poisson_outer_optimization
Parameter description: Specifies whether to use Poisson to correct the selectivity of the filter condition in the foreign table in outer join when the selectivity is calculated in the plan generation phase.
Parameter type: Boolean.
Unit: none
Value range:
- on: Poisson is not used to modify the selectivity of the filter condition of the foreign table in outer join. That is, the filter condition of the foreign table is not calculated as pushdown.
- off: The selectivity is calculated based on the filter condition of the outer join foreign table.
Default value: off for an upgraded database instance and on for a newly installed database instance.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: A good execution plan may not be selected, and slow SQL statements may occur.

The related functions take effect only when cost_model_version is set to 0 or to a value greater than or equal to 5, and enable_poisson_outer_optimization is set to on.