Memory
This section describes memory parameters.

Most of these parameters are POSTMASTER parameters and take effect only after the database restarts; see the setting method of each parameter for exceptions such as local_syscache_threshold.
memorypool_enable
Parameter description: Specifies whether to enable a memory pool.
Parameter type: Boolean.
Unit: none
Value range:
- on: The memory pool is enabled.
- off: The memory pool is disabled.
Default value: off
Setting method: This is a POSTMASTER parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
memorypool_size
Parameter description: Specifies the memory pool size.
Parameter type: integer
Unit: KB
Value range: 131072–1073741823
Default value: 524288 (512 MB)
Setting method: This is a POSTMASTER parameter. Set it based on instructions provided in Table 1. For example, if the value is 131072 without a unit, memorypool_size indicates 131072 KB. If the value is 128MB, memorypool_size indicates 128 MB. The unit must be KB, MB, or GB if required.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
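As a sketch of the unit handling described above, the following hypothetical command sets memorypool_size to 1 GB, mirroring the gs_guc invocation pattern used in the examples later on this page; because this is a POSTMASTER parameter, the database must be restarted afterwards.
# Hypothetical example: 1048576 without a unit means 1048576 KB (1 GB); "1GB" with a unit is equivalent.
gs_guc set -Z datanode -N all -I all -c "memorypool_size=1GB"
# Restart the database for the POSTMASTER parameter to take effect.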
enable_memory_limit
Parameter description: Specifies whether to enable the logical memory management module. This parameter must be enabled for the multi-tenant database feature (enable_mtd).

A fixed memory cost exists, consisting of shared_buffers and metadata (about 200 MB). If max_process_memory minus this fixed cost is less than 2 GB, GaussDB forcibly sets enable_memory_limit to off. Metadata is the memory used internally by GaussDB and is related to concurrency parameters such as max_connections, thread_pool_attr, and max_prepared_transactions.
Parameter type: Boolean.
Unit: none
Value range:
- on: The logical memory management module is enabled.
- off: The logical memory management module is disabled.
Default value: on
Setting method: This is a POSTMASTER parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: If this parameter is set to off, the memory used by the database is not limited. When a large number of concurrent or complex queries are performed, too much memory is used, which may cause OS OOM problems.
max_process_memory
Parameter description: Specifies the maximum physical memory of a database node.
Parameter type: integer
Unit: KB
Value range: 2097152 to 2147483647
Default value:
12GB
Setting method: This is a POSTMASTER parameter. Set it based on instructions provided in Table 1.
Setting suggestion: This parameter is used to prevent node OOM caused by memory bloat, ensuring OS reliability. The value of this parameter on a database node depends on the physical memory of the OS and the number of primary database nodes deployed on the node. The recommended formula is: (Physical memory size – vm.min_free_kbytes) x 0.7/Number of primary nodes, where vm.min_free_kbytes is the OS memory reserved for the kernel to receive and send data and is at least 5% of the total memory. That is, max_process_memory = Physical memory size x 0.665/Number of nodes. A worked example follows the table below.
Risks and impacts of improper settings: If this parameter is set to an improper value, for example, greater than the physical memory of the server, OS OOM problems may occur.
Settings in the centralized scenario (single node, 1 DN):
The following typical scenario settings are the optimal output values provided after the recommended specifications are tested and optimized. For details, see Table 1.
| Resource Setting | Recommended Value |
|---|---|
| 256 vCPUs/2048 GB | 1800 GB |
| 192 vCPUs/1536 GB | 1400 GB |
| 128 vCPUs/1024 GB | 900 GB |
| 104 vCPUs/1024 GB | 900 GB |
| 96 vCPUs/1024 GB | 900 GB |
| 96 vCPUs/768 GB | 680 GB |
| 80 vCPUs/640 GB | 560 GB |
| 64 vCPUs/512 GB | 450 GB |
| 60 vCPUs/480 GB | 420 GB |
| 32 vCPUs/256 GB | 200 GB |
| 16 vCPUs/128 GB | 90 GB |
| 8 vCPUs/64 GB | 40 GB |
| 4 vCPUs/32 GB | 20 GB |
| 4 vCPUs/16 GB | 10 GB |
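The following shell sketch applies the sizing formula above to a hypothetical single-primary host; the gs_guc invocation mirrors the pattern used elsewhere on this page, and the computed figure should be checked against the recommended-value table before adopting it.
# Hypothetical single-primary host: read total memory and the kernel reserve, then apply
# (Physical memory - vm.min_free_kbytes) x 0.7 / Number of primary nodes.
mem_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
min_free_kb=$(cat /proc/sys/vm/min_free_kbytes)
primaries=1
max_process_memory_kb=$(( (mem_kb - min_free_kb) * 7 / 10 / primaries ))
# The value is interpreted in KB when no unit is given; restart the database afterwards (POSTMASTER parameter).
gs_guc set -Z datanode -N all -I all -c "max_process_memory=${max_process_memory_kb}"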
local_syscache_threshold
Parameter description: Specifies the maximum memory occupied by the system cache of a single session. When the actual system cache size exceeds the threshold, memory eviction is triggered. If the enable_global_plancache parameter is enabled, the value of local_syscache_threshold does not take effect when it is less than 16MB. In this case, the minimum value of this parameter is 16MB to ensure that GPC takes effect. If enable_global_syscache and enable_thread_pool are enabled, this parameter indicates the total cache size of the current thread and sessions bound to the current thread. This parameter can be set at the PDB level.
Parameter type: integer
Unit: KB
Value range: 1024 to 524288
Default value:
32MB (196-core CPU/1536 GB memory); 16MB (128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 80-core CPU/640 GB memory, 64-core CPU/512 GB memory, 60-core CPU/480 GB memory, 32-core CPU/256 GB memory, 16-core CPU/128 GB memory, 8-core CPU/64 GB memory, 4-core CPU/32 GB memory, 4-core CPU/16 GB memory). In the PDB scenario, if this parameter is not set, the default value 16MB is used.
Setting method: This is a SIGHUP parameter. Set it based on instructions provided in Table 1. For example, if the value is 1024 without a unit, local_syscache_threshold indicates 1024 KB. If the value is 1MB, local_syscache_threshold indicates 1 MB. The unit must be KB, MB, or GB if required.
Setting suggestion: You are advised to set this parameter to an integer multiple of 1024. The baseline (background) memory usage is about 2 MB, and each additional table or index increases the memory by 11 KB.
Risks and impacts of improper settings: If the value is too small, memory is frequently evicted, affecting performance.
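As a sketch of the sizing rule above, assume a hypothetical workload with about 2,000 tables and indexes: 2 MB of baseline memory plus 2,000 x 11 KB = 24,048 KB, which rounds up to 24,576 KB (24 MB), a multiple of 1024. Because this is a SIGHUP parameter, gs_guc reload (the same invocation pattern shown later on this page) applies it without a restart.
# Hypothetical sizing: 2 MB baseline + 2000 x 11 KB = 24,048 KB -> round up to 24,576 KB (24 MB).
gs_guc reload -Z datanode -N all -I all -c "local_syscache_threshold=24MB"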
enable_memory_context_control
Parameter description: Enables the function of checking whether the number of memory contexts exceeds the specified limit. This parameter applies only to the DEBUG version.
Parameter type: Boolean.
Unit: none
Value range:
- on: The function of checking the number of memory contexts is enabled.
- off: The function of checking the number of memory contexts is disabled.
Default value: off
Setting method: This is a SIGHUP parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
uncontrolled_memory_context
Parameter description: Specifies which memory context will not be checked when the function of checking whether the number of memory contexts exceeds the specified limit is enabled. This parameter applies only to the DEBUG version. This parameter can be set at the PDB level.

When the parameter is queried, the title string "MmgrMemoryController white list:" is prepended to the parameter value.
Parameter type: string.
Unit: none
Value range: a string.
Default value: "". In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
huge_pages
Parameter description: Specifies whether to enable huge pages of static memory for the shared memory of the database.

To enable the static huge page memory function, you need to set sufficient static huge page memory resources in the OS kernel in advance. The related parameters are vm.nr_hugepages and vm.nr_overcommit_hugepages, which are set in the /etc/sysctl.conf file.
Parameter type: string.
Unit: none
Value range:
- on: enables the function of applying for huge pages of static memory. If the application fails, the database fails to be started.
- off: disables the function of applying for huge pages of static memory; common shared memory is applied for instead.
- try: attempts to apply for huge page resources of static memory. If the application fails, the system applies for common shared memory.
Default value: try
Setting method: This is a POSTMASTER parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
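The following sketch reserves static huge pages in the OS before relying on this parameter. The page count assumes a 2 MB huge-page size and roughly 32 GB of shared memory to cover; both figures are assumptions, so check Hugepagesize in /proc/meminfo and your actual shared-memory footprint before applying them.
# Hypothetical reservation: 16384 x 2 MB = 32 GB of static huge pages.
cat >> /etc/sysctl.conf <<'EOF'
vm.nr_hugepages = 16384
vm.nr_overcommit_hugepages = 2048
EOF
sysctl -p
# POSTMASTER parameter: restart the database after changing it.
gs_guc set -Z datanode -N all -I all -c "huge_pages=try"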
shared_buffers
Parameter description: Specifies the size of shared memory used by GaussDB. Increasing the value of this parameter causes GaussDB to request more System V shared memory than the default configuration allows.
Parameter type: integer
Unit: page (8 KB)
Value range: 16–1073741823
Default value: 8MB
Setting method: This is a POSTMASTER parameter. Set it based on instructions provided in Table 1. For example, if the value is 1024 without a unit, shared_buffers indicates 8192 KB (1024 x 8 KB). If the value is 8MB, shared_buffers indicates 8 MB. If the value contains a unit, the value must be KB, MB, or GB and must be an integer multiple of 8 KB.
Setting suggestion:
- Set shared_buffers to a value less than 40% of max_process_memory.
- If shared_buffers is set to a larger value, increase the value of checkpoint_segments because a longer interval is required to write a large amount of new or changed data.
- If the process fails to be restarted after the value of shared_buffers is changed, perform either of the following operations based on the error information:
- Adjust the kernel.shmall, kernel.shmmax, and kernel.shmmin OS parameters. For details, see "Configuring Other OS Parameters" in Installation Guide.
- Run the free -g command to check whether the available memory and swap space of the OS are sufficient. If the memory is insufficient, manually stop other user programs that occupy much memory.
- Set this parameter to the recommended default value for different specifications. You are advised not to change the value of shared_buffers. Otherwise, it may be too large or too small. The following condition must be met: data_replicate_buffer_size + segment_buffers + shared_buffers + wal_buffers + temp_buffers + maintenance_work_mem + work_mem + query_mem + wal_receiver_buffer_size (of the standby node) < max_process_memory < Memory size of the physical machine.
Risks and impacts of improper settings: If the value of the memory parameter is too large and exceeds the upper limit of the physical memory, the database cannot be started because it cannot apply for sufficient memory.
Settings in the centralized scenario (single node, 1 DN):
The following typical scenario settings are the optimal output values provided after the recommended specifications are tested and optimized. For details, see Table 2.
| Resource Setting | Recommended Value |
|---|---|
| 256 vCPUs/2048 GB | 720 GB |
| 192 vCPUs/1536 GB | 560 GB |
| 128 vCPUs/1024 GB | 360 GB |
| 104 vCPUs/1024 GB | 360 GB |
| 96 vCPUs/1024 GB | 360 GB |
| 96 vCPUs/768 GB | 270 GB |
| 80 vCPUs/640 GB | 220 GB |
| 64 vCPUs/512 GB | 180 GB |
| 60 vCPUs/480 GB | 160 GB |
| 32 vCPUs/256 GB | 80 GB |
| 16 vCPUs/128 GB | 36 GB |
| 8 vCPUs/64 GB | 16 GB |
| 4 vCPUs/32 GB | 8 GB |
| 4 vCPUs/16 GB | 2 GB |
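As a sketch, the following hypothetical command applies the 64 vCPUs/512 GB recommendation from the table above; the value must be an integer multiple of 8 KB, and 180 GB corresponds to 23,592,960 buffer pages.
# Hypothetical 64 vCPU/512 GB node, taking the 180 GB figure from the table above.
gs_guc set -Z datanode -N all -I all -c "shared_buffers=180GB"
# POSTMASTER parameter: restart the database, and consider raising checkpoint_segments as noted above.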
page_version_check
Parameter description: Specifies whether to perform verification for underlying storage faults and pages not marked as dirty based on page version information. page_version_check is a three-level switch. The verification for underlying storage faults checks whether a page read from the underlying storage is of the correct version, which prevents loss of page version information caused by a fault such as a disk power failure. The verification for pages not marked as dirty checks whether modified pages are not marked as dirty, and is controlled by the independent switch page_missing_dirty_check.
Parameter type: enumerated type
Unit: none
Value range:
- off: The verification for underlying storage faults and pages not marked as dirty is disabled.
- memory: The page version verification function (that is, verification for underlying storage faults and pages not marked as dirty) in pure memory mode is enabled. The page version information is cached only in the memory and will be lost after a restart.
- persistence: The persistent page version verification function (that is, verification for underlying storage faults and pages not marked as dirty) is enabled. The page version information is persisted to files and will not be lost after a restart.
Default value: persistence
Setting method: This is a POSTMASTER parameter. Set it based on instructions provided in Table 1.
Setting suggestions: Set this parameter based on the specifications: off (4-core CPU/16 GB memory, 4-core CPU/32 GB memory, and 8-core CPU/64 GB memory) or persistence (16-core CPU/128 GB memory, 32-core CPU/256 GB memory, 60-core CPU/480 GB memory, 64-core CPU/512 GB memory, 72-core CPU/576 GB memory, 80-core CPU/640 GB memory, 96-core CPU/768 GB memory, 96-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 128-core CPU/1024 GB memory, and 196-core CPU/1536 GB memory).
Risks and impacts of improper settings: Setting this parameter to memory or persistence affects device performance, and devices with smaller specifications suffer more. (For example, the performance impact on a 16-core CPU/128 GB device under the TPC-C model is about 7%.) If the system needs to restart frequently, you are advised to set this parameter to persistence so that page version information is not lost, at the cost of some performance.
page_missing_dirty_check
Parameter description: Checks whether the modified pages are not marked as dirty. page_missing_dirty_check is controlled by page_version_check. If page_version_check is set to off, setting page_missing_dirty_check to on does not take effect.
Parameter type: Boolean.
Unit: none
Value range:
- on: The verification for pages not marked as dirty is performed.
- off: The verification for pages not marked as dirty is not performed.
Default value: off
Setting method: This is a POSTMASTER parameter. Set it based on instructions provided in Table 1.
Setting suggestion: You are advised to enable this function in test scenarios to detect as many pages not marked as dirty that lead to code bugs in non-production environments as possible. On the live network, this function is disabled by default to avoid extra cost and performance deterioration.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
page_version_max_num
Parameter description: Specifies the maximum number of page versions that can be cached in the memory. This parameter is valid only when page_version_check is not set to off. Each page_version occupies 36 bytes of memory. Pay attention to the memory usage.
Parameter type: integer
Unit: none
Value range: 0 to 2147483647.
- 0: The value is automatically calculated based on the value of shared_buffers using the following formula: page_version_max_num = shared_buffers x 2. For example, 32 MB of shared_buffers corresponds to 4096 buffers. Therefore, the value of this parameter is set to 8192.
- Non-zero values: The manually configured value is forcibly used. The parameter cannot be set to a value less than 16 times page_version_partitions; otherwise, it is forcibly set to 16 times the value of page_version_partitions.
Default value: 0
Setting method: This is a POSTMASTER parameter. Set it based on instructions provided in Table 1.
Setting suggestion: If memory is limited, set this parameter to two to four times the value of shared_buffers. If high performance is required and memory is sufficient, you are advised to manually set this parameter to more than four times the value of shared_buffers, keeping the ratio of this parameter to page_version_partitions between 256 and 1024.
Risks and impacts of improper settings: If the value of the memory parameter is too large and exceeds the upper limit of the physical memory, the database cannot be started because it cannot apply for sufficient memory.
page_version_partitions
Parameter description: Specifies the number of hash table partitions used for caching page version information in memory. This parameter directly affects the hash query efficiency and the hash conflict probability. This parameter is valid only when page_version_check is not set to off.
Parameter type: integer
Unit: none
Value range: 0–2097152
- 0: The value is automatically calculated based on the value of page_version_max_num using the following formula: page_version_partitions = page_version_max_num/512. If the automatically calculated value is smaller than 4, the parameter is forcibly set to 4.
- Non-zero values: The manually configured value is forcibly used. The minimum value is 4. If the value is less than 4, it is forcibly set to 4.
Default value: 0
Setting method: This is a POSTMASTER parameter. Set it based on instructions provided in Table 1.
Setting suggestion: If you have high performance requirements, you are advised to manually set this parameter to a value 1/256 to 1/1024 of the value of page_version_max_num.
Risks and impacts of improper settings: If the number of partitions is too large, the hash query efficiency decreases. If the number of partitions is too small, the probability of hash conflicts increases.
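The following sketch ties page_version_max_num and page_version_partitions together for a hypothetical shared_buffers of 32 GB (4,194,304 pages): taking four times shared_buffers gives page_version_max_num = 16,777,216, and a 512:1 ratio gives page_version_partitions = 32,768. Both figures are illustrative, not tuned recommendations.
# Hypothetical sizing for shared_buffers = 32GB (4,194,304 pages):
#   page_version_max_num    = 4 x 4,194,304      = 16,777,216
#   page_version_partitions = 16,777,216 / 512   = 32,768
gs_guc set -Z datanode -N all -I all -c "page_version_max_num=16777216"
gs_guc set -Z datanode -N all -I all -c "page_version_partitions=32768"
# Both are POSTMASTER parameters: restart the database afterwards.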
page_version_recycler_thread_num
Parameter description: Specifies the number of threads for recycling and verifying page version information. This parameter is valid only when page_version_check is not set to off.
Parameter type: integer
Unit: none
Value range: 0–16
- If page_version_check is set to memory:
- 0: The value is automatically calculated based on the value of page_version_partitions using the following formula: page_version_recycler_thread_num = page_version_partitions/16384. If the automatically calculated value is greater than 4, the parameter is forcibly set to 4.
- Non-zero values: The manually configured value is forcibly used.
- The parameter cannot be set to a value greater than the value of page_version_partitions. Otherwise, it is forcibly set to a value equal to that of page_version_partitions.
- If page_version_check is set to persistence:
If the configured value is less than 2, it is forcibly set to 2. If the value is greater than or equal to 2, the manually configured value is forcibly used.
Default value: 0
Setting method: This is a POSTMASTER parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
enable_cached_context
Parameter description: Specifies the number of cached memory contexts. In the multi-tenancy scenario, this parameter can be set at the PDB level.
Parameter type: integer
Unit: none
Value range: 0 to 1024
- If this parameter is set to 0, the memory context optimization framework does not take effect.
- If this parameter is set to a non-zero value, the memory context optimization framework takes effect, and the value of this parameter is the number of cached memory contexts.
Default value: 128. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
verify_log_buffers
Parameter description: Specifies the size of the verifyLog buffer. This parameter is valid only when page_version_check is set to persistence. The verifyLog buffer memory is managed by page, and each page is 8 KB.
Parameter type: integer
Unit: page (8 KB)
Value range: 4-262144
Default value: 4 (32 KB)
Setting method: This is a POSTMASTER parameter. Set it based on instructions provided in Table 1. For example, the value 131072 of verify_log_buffers indicates that the size is 131072 x 8 KB = 1GB, and the value 131072kB indicates that verify_log_buffers is 131072 KB. If the value contains a unit, the value must be KB, MB, or GB and must be an integer multiple of 8 KB.
Setting suggestion: Set this parameter based on the system hardware specifications.
1GB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 80-core CPU/640 GB memory, 64-core CPU/512 GB memory, 60-core CPU/480 GB memory, 32-core CPU/256 GB memory); 512MB (16-core CPU/128 GB memory); 256MB (8-core CPU/64 GB memory); 128MB (4-core CPU/32 GB memory); 16MB (4-core CPU/16 GB memory)
Risks and impacts of improper settings: If the value of the memory parameter is too large and exceeds the upper limit of the physical memory, the database cannot be started because it cannot apply for sufficient memory.
segment_buffers
Parameter description: Specifies the memory size of a GaussDB segment-page metadata page.
Parameter type: integer
Unit: page (8 KB)
Value range: 16–1073741823
Default value:
8MB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 80-core CPU/640 GB memory, 64-core CPU/512 GB memory, 60-core CPU/480 GB memory, 32-core CPU/256 GB memory, 16-core CPU/128 GB memory, 8-core CPU/64 GB memory, 4-core CPU/32 GB memory); 128KB (4-core CPU/16 GB memory)
Setting method: This is a POSTMASTER parameter. Set it based on instructions provided in Table 1. For example, if the value is 1024 without a unit, segment_buffers indicates 8192 KB (1024 x 8 KB). If the value is 8MB, segment_buffers indicates 8 MB. If the value contains a unit, the value must be KB, MB, or GB and must be an integer multiple of 8 KB.
Setting suggestions: segment_buffers is used to cache the content of segment-page headers, which is key metadata information. To improve performance, it is recommended that the segment headers of ordinary tables be cached in the buffer and not be replaced. You are advised to set this parameter based on the following formula: Number of tables (including indexes and TOAST tables) x Number of partitions x 3 + 128. Multiplying by 3 is because each table (partition) has some extra metadata segments. Generally, a table has three segments. Adding 128 at last is because segment-page tablespace management requires a certain number of buffers. If this parameter is set to a small value, it takes a long time to create a segment-page table for the first time. Therefore, you are advised to retain the default value to avoid setting segment_buffers to an excessively large or small value. The following condition must be met: data_replicate_buffer_size + segment_buffers + shared_buffers + wal_buffers + temp_buffers + maintenance_work_mem + work_mem + query_mem + wal_receiver_buffer_size (of the standby node) < max_process_memory < Memory size of the physical machine.
Risks and impacts of improper settings: If the value of the memory parameter is too large and exceeds the upper limit of the physical memory, the database cannot be started because it cannot apply for sufficient memory.
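As a sketch of the formula above, assume a hypothetical workload of 500 tables (including indexes and TOAST tables) with 4 partitions each: 500 x 4 x 3 + 128 = 6,128 buffers, which at 8 KB per buffer is roughly 48 MB.
# Hypothetical workload: 500 tables x 4 partitions x 3 segments + 128 = 6,128 buffers (~48 MB).
gs_guc set -Z datanode -N all -I all -c "segment_buffers=48MB"
# POSTMASTER parameter: restart the database afterwards.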
bulk_read_ring_size
Parameter description: Specifies the size of the ring buffer used by the operation when a large amount of data is queried (for example, during large table scanning). This parameter can be set at the PDB level.
Parameter type: integer
Unit: KB
Value range: 256 to 2147483647
Default value: 16384 (that is, 16 MB). In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1. For example, if the value is 1024 without a unit, bulk_read_ring_size indicates 1024 KB. If the value is 1MB, bulk_read_ring_size indicates 1 MB. The unit must be KB, MB, or GB if required.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
bulk_write_ring_size
Parameter description: Specifies the size of the ring buffer used by the operation when a large amount of data is written (for example, the copy operation). This parameter can be set at the PDB level.
Parameter type: integer
Unit: KB
Value range: 16384 to 2147483647
Default value: 2097152 (2 GB). In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1. For example, if the value is 16384 without a unit, bulk_write_ring_size indicates 16384 KB. If the value is 16MB, bulk_write_ring_size indicates 16 MB. The unit must be KB, MB, or GB if required.
Setting suggestion: Increase the value of this parameter on database nodes if a huge amount of data will be imported.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
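As a sketch of the suggestion above, a session that is about to run a large import could raise the ring buffer first; the 4 GB figure, table name, and file path below are hypothetical, and the gsql invocation (connection options such as the port are omitted) is assumed to be available on the database node.
# Hypothetical large import: enlarge the ring buffer for this session, then run COPY.
gsql -d postgres -c "SET bulk_write_ring_size = '4GB'; COPY sales FROM '/data/sales.csv' CSV;"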
standby_shared_buffers_fraction
Parameter description: Specifies the shared_buffers proportion used on the server where a standby instance is deployed.
Parameter type: floating point
Unit: none
Value range: 0.1 to 1.0
Default value: 1
Setting method: This is a SIGHUP parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
temp_buffers
Parameter description: Specifies the maximum size of local temporary buffers used by a database session. This parameter can be set at the PDB level.

- temp_buffers can be modified only before the first use of temporary tables within each session. Subsequent attempts to change the value of this parameter will not take effect on that session.
- A session allocates temporary buffers based on the value of temp_buffers. If a large value is set in a session that does not require many temporary buffers, only the cost of one buffer descriptor is added. If a buffer is actually used, an additional 8192 bytes will be consumed for it.
Parameter type: integer
Unit: page (8 KB)
Value range: 100 to 1073741823
Default value: 128 (1 MB). In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1. For example, if the value is 100 without a unit, temp_buffers indicates 800 KB (100 x 8 KB). If the value is 16MB, temp_buffers indicates 16 MB. If the value contains a unit, the value must be KB, MB, or GB and must be an integer multiple of 8 KB.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
max_prepared_transactions
Parameter description: Sets the maximum number of transactions that can be in the "prepared" state simultaneously. Increasing the value of this parameter causes GaussDB to request more System V shared memory than the default configuration allows.
Parameter type: integer
Unit: none
Value range: 0 to 262143
Default value:
200 (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 80-core CPU/640 GB memory, 64-core CPU/512 GB memory, 60-core CPU/480 GB memory, 32-core CPU/256 GB memory, 16-core CPU/128 GB memory, 8-core CPU/64 GB memory, 4-core CPU/32 GB memory); 0 (4-core CPU/16 GB memory)
Setting method: This is a POSTMASTER parameter. Set it based on instructions provided in Table 1.
Setting suggestions: Generally, explicit PREPARE operations are not required for transactions. If explicit PREPARE operations are performed for transactions, set this parameter to a value greater than the number of concurrent services that require PREPARE to prevent preparation failures.
Risks and impacts of improper settings: When GaussDB is deployed as an HA system, set this parameter on standby nodes to a value greater than or equal to that on primary nodes. Otherwise, queries will fail on the standby nodes.
work_mem
Parameter description: Specifies the amount of memory to be used by internal sort operations and hash tables before they write data into temporary disk files. Sort operations are required for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and hash-based processing of IN subqueries.
In a complex query, several sort or hash operations may run in parallel; each operation will be allowed to use as much memory as this parameter specifies. If the memory is insufficient, data will be written into temporary files. In addition, several running sessions could be performing such operations concurrently. Therefore, the total memory used may be many times the value of work_mem. This parameter can be set at the PDB level.
Parameter type: integer
Unit: KB
Value range: 64 to 2147483647
Default value:
280MB (196-core CPU/1536 GB memory); 256MB (128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, 96-core CPU/768 GB memory); 128MB (80-core CPU/640 GB memory, 64-core CPU/512 GB memory, 60-core CPU/480 GB memory, 32-core CPU/256 GB memory, 16-core CPU/128 GB memory); 64MB (8-core CPU/64 GB memory); 32MB (4-core CPU/32 GB memory); 16MB (4-core CPU/16 GB memory)
In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1. For example, if the value is 1024 without a unit, work_mem indicates 1024 KB. If the value is 1MB, work_mem indicates 1 MB. The unit must be KB, MB, or GB if required.
Setting suggestion:
If the physical memory specified by work_mem is insufficient, additional operator calculation data will be written into temporary tables based on query characteristics and the degree of parallelism. This reduces performance by five to ten times, and prolongs the query response time from seconds to minutes.
- For complex serial queries, each query requires five to ten associated operations. Set work_mem using the following formula: work_mem = 50% of the memory/10.
- For simple serial queries, each query requires two to five associated operations. Set work_mem using the following formula: work_mem = 50% of the memory/5.
- For concurrent queries, set work_mem using the following formula: work_mem = work_mem for serial queries/Number of concurrent SQL statements.
- BitmapScan hash tables are also restricted by work_mem, but will not be forcibly flushed to disks. When the bitmap becomes completely lossy, every 1 MB of memory occupied by the hash table covers about 16 GB of BitmapHeapScan pages (32 GB for Ustore). After the upper limit of work_mem is reached, the memory increases linearly with the data access traffic based on this ratio.
Risks and impacts of improper settings: The default value is the recommended value. You are advised not to change it. Before changing the values, check the specifications of the parameters and ensure that the hardware resources are sufficient. Otherwise, the database may be abnormal.
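The following sketch applies the serial/concurrent formulas above to a hypothetical node whose memory budget is 128 GB, running complex queries with 10 concurrent SQL statements: 50% of 128 GB is 64 GB, dividing by 10 operations gives 6.4 GB per serial query, and dividing again by 10 concurrent statements gives roughly 640 MB. All figures are illustrative, and the gsql invocation (connection options omitted) is an assumption.
# Hypothetical: 128 GB budget, complex queries, 10 concurrent statements.
#   serial work_mem     = (128 GB x 0.5) / 10 = 6.4 GB
#   concurrent work_mem = 6.4 GB / 10         = ~640 MB (rounded)
gsql -d postgres -c "SET work_mem = '640MB';"   # USERSET: per-session override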
query_mem
Parameter description: Specifies the memory used by a query. This parameter can be set at the PDB level.
Parameter type: integer
Unit: KB
Value range: 0 to 2147483647
- If the value of query_mem is greater than 32768 (that is, 32 MB), the optimizer adjusts the memory cost estimate to this value when generating an execution plan.
- If the value is set to a negative value or a positive integer less than 32768 (that is, 32 MB), the default value 0 is forcibly used. In this case, the optimizer does not adjust the estimated query memory.
Default value: 0. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1. For example, if the value is 32768 without a unit, query_mem indicates 32768 KB. If the value is 32MB, query_mem indicates 32 MB. The unit must be KB, MB, or GB if required.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
query_max_mem
Parameter description: Specifies the maximum memory that can be used by a query. This parameter can be set at the PDB level.
Parameter type: integer
Unit: KB
Value range: 0 to 2147483647
- If the value of query_max_mem is greater than 32768 (that is, 32 MB), an error is reported when the query memory usage exceeds the value.
- If the value is set to a negative value or a positive integer less than 32768 (that is, 32 MB), the default value 0 is used. In this case, the optimizer does not limit the query memory.
Default value: 0. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1. For example, if the value is 32768 without a unit, query_max_mem indicates 32768 KB. If the value is 32MB, query_max_mem indicates 32 MB. The unit must be KB, MB, or GB if required.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
maintenance_work_mem
Parameter description: Specifies the maximum amount of memory to be used by maintenance operations, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD FOREIGN KEY. This parameter may affect the execution efficiency of VACUUM, VACUUM FULL, CLUSTER, and CREATE INDEX.
Parameter type: integer
Unit: KB
Value range: 1024 to 2147483647
Default value:
2GB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 80-core CPU/640 GB memory, 64-core CPU/512 GB memory, 60-core CPU/480 GB memory); 1GB (32-core CPU/256 GB memory); 512MB (16-core CPU/128 GB memory); 256MB (8-core CPU/64 GB memory); 128MB (4-core CPU/32 GB memory); 64MB (4-core CPU/16 GB memory)
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1. For example, if the value is 1024 without a unit, maintenance_work_mem indicates 1024 KB. If the value is 1MB, maintenance_work_mem indicates 1 MB. The unit must be KB, MB, or GB if required.
Setting suggestion:
- Set this parameter to a value greater than that of work_mem so that cleanup operations and database dump restoration run faster. In a database session, only one maintenance operation can be performed at a time. Maintenance is usually performed when there are not many running sessions.
- When the Autovacuum process is running, up to autovacuum_max_workers times this memory may be allocated. In this case, set maintenance_work_mem to a value greater than or equal to that of work_mem.
- If a large amount of data is to be clustered, increase the value of this parameter in the session.
- Use the default value. You are advised not to change it; otherwise, maintenance_work_mem may be set to an excessively large or small value. The following condition must be met: data_replicate_buffer_size + segment_buffers + shared_buffers + wal_buffers + temp_buffers + maintenance_work_mem + work_mem + query_mem + wal_receiver_buffer_size (of the standby node) < max_process_memory < Memory size of the physical machine.
Risks and impacts of improper settings: The default value is the recommended value. You are advised not to change it. Before changing the values, check the specifications of the parameters and ensure that the hardware resources are sufficient. Otherwise, the database may be abnormal.
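As a sketch of the session-level suggestion above, the following raises maintenance_work_mem for a single session before a large index build; the 4 GB figure and the table and index names are hypothetical, and the gsql invocation (connection options omitted) is an assumption. The setting applies only to that session and reverts when it ends.
# Hypothetical large index build: raise maintenance_work_mem for this session only.
gsql -d postgres -c "SET maintenance_work_mem = '4GB'; CREATE INDEX idx_orders_cid ON orders(customer_id);"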
max_stack_depth
Parameter description: Specifies the maximum safe depth of the GaussDB execution stack. The safety margin is required because the stack depth is not checked in every routine in the server, but only in key potentially-recursive routines, such as expression evaluation. This parameter can be set at the PDB level.
Parameter type: integer
Unit: KB
Value range: 100–2147483647
Default value:
- If the value of ulimit -s minus 640 KB is greater than or equal to 2 MB, the default value of this parameter is 2 MB.
- If the value of ulimit -s minus 640 KB is less than 2 MB, the default value of this parameter is the value of ulimit -s minus 640 KB.
- In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a SUSET parameter. Set it based on instructions provided in Table 1. For example, if the value is 1024 without a unit, max_stack_depth indicates 1024 KB. If the value is 1MB, max_stack_depth indicates 1 MB. The unit must be KB, MB, or GB if required.
Setting suggestion:
- The database needs to reserve 640 KB stack depth. Therefore, the ideal value of this parameter is the actual stack size limit enforced by the OS kernel (as set by ulimit -s) minus 640 KB.
- If the value of this parameter is greater than the value of ulimit -s minus 640 KB before the database is started, the database fails to be started. During database running, if the value of this parameter is greater than the value of ulimit -s minus 640 KB, this parameter does not take effect.
- Because not all operations can be detected by the stack depth check, you are advised to set this parameter to an explicit, conservative value.
- The default value is 2 MB, which is relatively small and does not easily cause system breakdown.
Risks and impacts of improper settings:
- If the value of ulimit -s minus 640 KB is less than the minimum value of this parameter, the database fails to be started.
- Setting this parameter to a value greater than the actual kernel limit means that a running recursive function may crash an individual backend process.
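The following sketch applies the "ulimit -s minus 640 KB" rule above, assuming a hypothetical OS stack limit of 8192 KB: 8192 KB - 640 KB = 7552 KB. The gs_guc invocation mirrors the pattern used elsewhere on this page.
ulimit -s        # hypothetically prints 8192 (KB)
# Ideal value: 8192 - 640 = 7552; values without a unit are interpreted as KB.
gs_guc set -Z datanode -N all -I all -c "max_stack_depth=7552"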
enable_early_free
Parameter description: Specifies whether the operator memory can be released in advance. This parameter can be set at the PDB level.
Parameter type: Boolean.
Unit: none
Value range:
- on: The operator memory can be released in advance.
- off: The operator memory cannot be released in advance.
Default value: on. In the PDB scenario, if this parameter is not set, the global setting is inherited.
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.
memory_trace_level
Parameter description: Specifies the control level for recording memory allocation information after the dynamic memory usage exceeds 90% of the maximum dynamic memory. This parameter takes effect only when the parameters use_workload_manager and enable_memory_limit are enabled.
Parameter type: enumerated type
Unit: none
Value range:
- none: indicates that memory allocation information is not recorded.
- level1: After the dynamic memory usage exceeds 90% of the maximum dynamic memory, the following memory information is recorded and saved in the $GAUSSLOG/mem_log directory:
- Global memory overview.
- Memory usage of the top 20 memory contexts of the instance, session, and thread types.
- The totalsize and freesize columns for each memory context.
- level2: After the dynamic memory usage exceeds 90% of the maximum dynamic memory, the following memory information is recorded and saved in the $GAUSSLOG/mem_log directory:
- Global memory overview.
- Memory usage of the top 20 memory contexts of the instance, session, and thread types.
- The totalsize and freesize columns for each memory context.
- Detailed information about all memory applications in each memory context, including the file where the allocated memory is located, line number, and size.
Default value: level1
Setting method: This is a SIGHUP parameter. Set it based on instructions provided in Table 1.
Setting suggestion: If this parameter is set to level2, the memory allocation details (file, line, and size) of each memory context are recorded, which greatly affects the performance. Therefore, exercise caution when setting this parameter.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.

- You can use the system function gs_get_history_memory_detail(cstring) to query the recorded memory snapshot information. For details about the function, see "SQL Reference" > "Functions and Operators" > "Statistics Functions" in Developer Guide.
- The recorded memory context is obtained after all memory contexts of the same type with the same name are summarized.
resilience_memory_reject_percent
Parameter description: Specifies the dynamic memory usage percentage for escape from memory overload. This parameter takes effect only when the parameters use_workload_manager and enable_memory_limit are enabled.
Parameter type: string.
Unit: none
Value range: a string of more than 0 characters.
This parameter consists of recover_memory_percent and overload_memory_percent. The meanings of the two parts are as follows:
- recover_memory_percent: Percentage of the maximum dynamic memory below which the system is considered to have recovered from memory overload. When the dynamic memory usage falls below the maximum dynamic memory multiplied by this value, the overload escape function is disabled and new connections are allowed again. The value ranges from 0 to 100 and indicates a percentage.
- overload_memory_percent: Percentage of the maximum dynamic memory above which the memory is considered overloaded. When the dynamic memory usage exceeds the maximum dynamic memory multiplied by this value, the overload escape function is triggered: sessions are killed and new connections are rejected. The value ranges from 0 to 100 and indicates a percentage.
Default value: "0,0", indicating that the escape from memory overload function is disabled.
Setting method: This is a SIGHUP parameter. Set it based on instructions provided in Table 1.
The following command is an example. When the memory usage exceeds 90% of the upper limit, new connections are forbidden and stacked sessions are killed. When the memory usage is less than 70% of the upper limit, session killing is stopped and new connections are allowed.
gs_guc reload -Z datanode -N all -I all -c "resilience_memory_reject_percent='70,90'"
Setting suggestion: If the value is too small, the escape from memory overload process is frequently triggered. As a result, ongoing sessions are forcibly logged out, and new connections fail to be connected for a short period of time. Therefore, exercise caution when setting this parameter based on the actual memory usage.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.

- You can query the maximum dynamic memory and used dynamic memory in the gs_total_memory_detail view. max_dynamic_memory indicates the maximum dynamic memory, and dynamic_used_memory indicates the used dynamic memory.
- The values of recover_memory_percent and overload_memory_percent can be 0 at the same time. In addition, the value of recover_memory_percent must be smaller than that of overload_memory_percent. Otherwise, the setting does not take effect.
resilience_escape_user_permissions
Parameter description: Specifies which users' jobs can be canceled by the escape function. Multiple values can be specified, separated by commas (,). The value sysadmin indicates that jobs of SYSADMIN users can be canceled by the escape function, and monadmin indicates that jobs of MONADMIN users can be canceled. By default, this parameter is empty, indicating that the escape function is disabled for SYSADMIN and MONADMIN users. The value can only be sysadmin, monadmin, or an empty string.
Parameter type: string.
Unit: none
Value range: a string of more than 0 characters.
- sysadmin: Jobs of the SYSADMIN user can be canceled by the escape function.
- monadmin: Jobs of the MONADMIN user can be canceled by the escape function.
- "": The escape function of the SYSADMIN and MONADMIN users is disabled.
Default value: "", indicating that the escape function of the SYSADMIN and MONADMIN users is disabled.
Setting method: This is a SIGHUP parameter. Set it based on instructions provided in Table 1. You can set this parameter to multiple values separated by commas (,), for example, resilience_escape_user_permissions = 'sysadmin,monadmin'. You can also set this parameter to only one value, for example, resilience_escape_user_permissions = 'monadmin'.
For example, the following command is used to enable the escape function for both the SYSADMIN and MONADMIN users:
gs_guc reload -Z datanode -N all -I all -c "resilience_escape_user_permissions='sysadmin,monadmin'"
Setting suggestion: Retain the default value.
Risks and impacts of improper settings: Change the parameter value after fully understanding the parameter meaning and verifying it through testing.

- If this parameter is set for multiple times, the latest setting takes effect.
- Regardless of which value in the range this parameter is set to, the escape function applies to common users.
- If a user has both the SYSADMIN and MONADMIN role permissions, the escape function of the user can be triggered only when resilience_escape_user_permissions is set to "sysadmin,monadmin".
local_plsqlcache_threshold
Parameter description: Specifies the maximum memory occupied by a stored procedure and plancache in the stored procedure in a session. When the actual system cache size exceeds this threshold, memory release is triggered. If this parameter is set to 0, the function does not take effect. When the max_execute_functions parameter is active, the system will still release memory used by stored procedures due to the setting of max_execute_functions, even if the cache usage has not reached the threshold set by local_plsqlcache_threshold. When local_plsqlcache_threshold is enabled, you are advised to set max_execute_functions to a large value to prevent conflicts between the two parameters. If both local_plsqlcache_threshold and max_execute_functions are set to small values, it may result in both limits not being reached, as the system will be unable to effectively control memory size or the number of executed functions. This parameter takes effect only when enable_global_plsqlcache is enabled.
Parameter type: integer.
Unit: byte
Value range: 0 to 9223372036854775807
Default value: 0
Setting method: This is a USERSET parameter. Set it based on instructions provided in Table 1.
Setting suggestion: Set this parameter according to the following formula: max_process_memory x sess_plsql_mem_percent/count_peak_conn. max_process_memory indicates the maximum memory occupied by the GaussDB instance, sess_plsql_mem_percent indicates the percentage of memory reserved for the session stored procedure, and count_peak_conn indicates the peak number of connections.
Risks and impacts of improper settings: If the value is too small, memory is frequently released, affecting performance. You are not advised to use this function when enable_global_plancache is enabled.
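As a sketch of the formula above, assume a hypothetical instance with max_process_memory = 512 GB, 5% of memory reserved for session stored procedures, and a peak of 500 connections: 512 GB x 0.05 / 500 is roughly 55,000,000 bytes (about 52 MiB). The figures and the gsql invocation (connection options omitted) are illustrative.
# Hypothetical: 512 GB x 5% / 500 peak connections ~= 55,000,000 bytes.
gsql -d postgres -c "SET local_plsqlcache_threshold = 55000000;"   # USERSET parameter, unit: byte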