Memory
This section describes memory parameters.
These parameters, except local_syscache_threshold, take effect only after the database restarts.
memorypool_enable
Parameter description: Specifies whether to enable a memory pool.
This is a POSTMASTER parameter. Set it based on instructions in Table 1.
Value range: Boolean
- on indicates that the memory pool is enabled.
- off indicates that the memory pool is disabled.
Default value: off
memorypool_size
Parameter description: Specifies the memory pool size.
This is a POSTMASTER parameter. Set it based on instructions in Table 1.
Value range: an integer ranging from 128 x 1024 to 1073741823. The unit is KB.
Default value: 512MB
enable_memory_limit
Parameter description: Specifies whether to enable the logical memory management module.
This is a POSTMASTER parameter. Set it based on instructions in Table 1.
Value range: Boolean
- on indicates that the logical memory management module is enabled.
- off indicates that the logical memory management module is disabled.
Default value: on
- A fixed overhead exists, consisting of shared_buffers and metadata (about 200 MB). If max_process_memory minus this fixed overhead is less than 2 GB, GaussDB forcibly sets enable_memory_limit to off (a sketch of this check follows this note). Metadata is the memory used within GaussDB and depends on concurrency-related parameters such as max_connections, thread_pool_attr, and max_prepared_transactions.
- If this parameter is set to off, the memory used by the database is not limited. Heavy concurrency or complex queries may then consume excessive memory, which can cause an OS OOM problem.
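As a rough illustration of the forced-off rule in the first note, the following Python sketch (an illustration only; the 200 MB metadata figure is the approximate value quoted above, and the exact internal accounting may differ) checks whether logical memory management would stay enabled for a given configuration:

    # Sketch: decide whether enable_memory_limit can remain on (values in MB).
    # Assumption: metadata overhead is approximated as 200 MB, per the note above.
    def memory_limit_stays_on(max_process_memory_mb, shared_buffers_mb, metadata_mb=200):
        """True if max_process_memory minus the fixed overhead is at least 2 GB."""
        return max_process_memory_mb - shared_buffers_mb - metadata_mb >= 2048

    print(memory_limit_stays_on(10 * 1024, 8 * 1024))  # False: only ~1.8 GB remains
    print(memory_limit_stays_on(20 * 1024, 8 * 1024))  # True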
max_process_memory
Parameter description: Specifies the maximum physical memory of a database node.
Parameter type: integer.
Unit: KB
Value range: 2097152 to 2147483647
Default value:
Independent deployment: 360GB (60-core CPU/480 GB memory); 192GB (32-core CPU/256 GB memory); 96GB (16-core CPU/128 GB memory); 40GB (8-core CPU/64 GB memory); 20GB (4-core CPU/32 GB memory); 10GB (4-core CPU/16 GB memory)
Finance edition (standard):
CN: 300GB (196-core CPU/1536 GB memory); 200GB (128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, and 96-core CPU/1024 GB memory); 160GB (96-core CPU/768 GB memory); 130GB (80-core CPU/640 GB memory); 120GB (72-core CPU/576 GB memory); 100GB (64-core CPU/512 GB memory and 60-core CPU/480 GB memory); 50GB (32-core CPU/256 GB memory); 20GB (16-core CPU/128 GB memory); 10GB (8-core CPU/64 GB memory)
DN: 550GB (196-core CPU/1536 GB memory); 350GB (128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, and 96-core CPU/1024 GB memory); 260GB (96-core CPU/768 GB memory); 220GB (80-core CPU/640 GB memory); 200GB (72-core CPU/576 GB memory); 180GB (64-core CPU/512 GB memory); 160GB (60-core CPU/480 GB memory); 80GB (32-core CPU/256 GB memory); 40GB (16-core CPU/128 GB memory); 20GB (8-core CPU/64 GB memory)
Enterprise edition:
CN: 200GB (196-core CPU/1536 GB memory); 150GB (128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, and 96-core CPU/1024 GB memory); 110GB (96-core CPU/768 GB memory); 90GB (80-core CPU/640 GB memory); 80GB (72-core CPU/576 GB memory); 75GB (80-core CPU/512 GB memory and 64-core CPU/512 GB memory); 70GB (60-core CPU/480 GB memory); 35GB (32-core CPU/256 GB memory); 15GB (16-core CPU/128 GB memory); 9GB (8-core CPU/64 GB memory)
DN: 400GB (196-core CPU/1536 GB memory); 250GB (128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, and 96-core CPU/1024 GB memory); 190GB (96-core CPU/768 GB memory); 160GB (80-core CPU/640 GB memory); 140GB (72-core CPU/576 GB memory); 125GB (80-core CPU/512 GB memory and 64-core CPU/512 GB memory); 120GB (60-core CPU/480 GB memory); 60GB (32-core CPU/256 GB memory); 25GB (16-core CPU/128 GB memory); 15GB (8-core CPU/64 GB memory)
Finance edition (data computing):
CN: 160GB (196-core CPU/1536 GB memory); 120GB (128-core CPU/1024 GB memory); 100GB (96-core CPU/768 GB memory); 60GB (72-core CPU/576 GB memory, 64-core CPU/512 GB memory); 20GB (32-core CPU/256 GB memory)
DN: 300GB (196-core CPU/1536 GB memory); 200GB (128-core CPU/1024 GB memory); 150GB (96-core CPU/768 GB memory); 110GB (72-core CPU/576 GB memory); 100GB (64-core CPU/512 GB memory); 40GB (32-core CPU/256 GB memory)
If this parameter is set to a value greater than the physical memory of the server, an OS OOM problem may occur.
Setting method: This is a POSTMASTER parameter. Set it based on instructions in Table 1.
Setting suggestion: This parameter is used to prevent node OOM caused by memory bloat, ensuring system reliability. On DNs, the value of this parameter depends on the physical memory of the server and the number of primary DNs deployed on it. The recommended formula is: max_process_memory = (Physical memory – vm.min_free_kbytes) x 0.7/(n + Number of primary DNs). In this formula, vm.min_free_kbytes is the OS memory reserved for the kernel to receive and send data; its value is at least 5% of the total memory, so the formula simplifies to max_process_memory = Physical memory x 0.665/(n + Number of primary DNs). When the number of nodes in the cluster is less than or equal to 256, n is 1; when it is greater than 256 and less than 512, n is 2; when it is greater than 512, n is 3; when DNs are deployed independently, n is 0. You can set this parameter on CNs to the same value as on DNs. The sketch below illustrates the calculation.
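The following Python sketch applies the formula above (an illustration only; the 0.665 factor and the node-count tiers are taken from the description above, and the handling of exactly 512 nodes is an assumption because that boundary is not stated):

    # Sketch: recommended max_process_memory per primary DN, in GB.
    def recommended_max_process_memory_gb(physical_memory_gb, primary_dn_count,
                                          cluster_node_count, independent_deployment=False):
        """max_process_memory = physical memory x 0.665 / (n + number of primary DNs)."""
        if independent_deployment:
            n = 0
        elif cluster_node_count <= 256:
            n = 1
        elif cluster_node_count < 512:
            n = 2
        else:
            n = 3  # assumption: exactly 512 nodes is grouped with the larger tier
        return physical_memory_gb * 0.665 / (n + primary_dn_count)

    # Example: a 512 GB server hosting 4 primary DNs in a 32-node cluster.
    print(round(recommended_max_process_memory_gb(512, 4, 32)))  # about 68 (GB)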
local_syscache_threshold
Parameter description: Specifies the size of system catalog cache in a session. If enable_global_plancache is enabled, local_syscache_threshold does not take effect when it is set to a value less than 16 MB to ensure that GPC takes effect. The minimum value is 16 MB. If enable_global_syscache and enable_thread_pool are enabled, this parameter indicates the total cache size of the current thread and sessions bound to the current thread.
Parameter type: integer.
Unit: kB
Value range:
- Method 1: Set this parameter to an integer without a unit. The integer ranges from 1 x 1024 to 512 x 1024. You are advised to set this parameter to an integer multiple of 1024. For example, the value 2048 indicates 2048 KB.
- Method 2: Set this parameter to a value with a unit. The value ranges from 1 x 1024 KB to 512 x 1024 KB. For example, the value 32MB indicates 32 MB. The unit can only be kB, MB, or GB.
Default value:
- Independent deployment: 16MB
- Finance edition (standard):
32MB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, and 80-core CPU/640 GB memory); 16MB (72-core CPU/576 GB memory, 64-core CPU/512 GB memory, 60-core CPU/480 GB memory, 32-core CPU/256 GB memory, 16-core CPU/128 GB memory, and 8-core CPU/64 GB memory)
- Enterprise edition:
32MB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 80-core CPU/640 GB memory, 80-core CPU/512 GB memory, 72-core CPU/576 GB memory, and 64-core CPU/512 GB memory); 16MB (60-core CPU/480 GB memory, 32-core CPU/256 GB memory, 16-core CPU/128 GB memory, and 8-core CPU/64 GB memory)
- Finance edition (data computing): 16 MB
Setting method: This is a SIGHUP parameter. Set it based on instructions in Table 1.
Setting suggestion: Retain the default value.
enable_memory_context_control
Parameter description: Enables the function of checking whether the number of memory contexts exceeds the specified limit. This parameter applies only to the DEBUG version.
This is a SIGHUP parameter. Set it based on instructions in Table 1.
Value range: Boolean
- on indicates that the function of checking the number of memory contexts is enabled.
- off indicates that the function of checking the number of memory contexts is disabled.
Default value: off
uncontrolled_memory_context
Parameter description: Specifies which memory context will not be checked when the function of checking whether the number of memory contexts exceeds the specified limit is enabled. This parameter applies only to the DEBUG version.
This is a USERSET parameter. Set it based on instructions in Table 1.
When this parameter is queried, the string "MmgrMemoryController white list:" is prefixed to the parameter value.
Value range: a string
Default value: empty
shared_buffers
Parameter description: Specifies the size of shared memory used by GaussDB. Increasing the value of this parameter causes GaussDB to request more System V shared memory than the default configuration allows.
Parameter type: integer.
Unit: page (8 kB)
Value range: 16 to 1073741823. The value of this parameter must be an integer multiple of BLCKSZ. Currently, BLCKSZ is set to 8kB. That is, the value of this parameter must be an integer multiple of 8 KB.
Default value:
Independent deployment:
CN: 4GB (60-core CPU/480 GB memory); 2GB (32-core CPU/256 GB memory, 16-core CPU/128 GB memory); 1GB (8-core CPU/64 GB memory); 512MB (4-core CPU/32 GB memory) 256MB (4-core CPU/16 GB memory)
DN: 140GB (60-core CPU/480 GB memory); 76GB (32-core CPU/256 GB memory); 40GB (16-core CPU/128 GB memory); 16GB (8-core CPU/64 GB memory); 8GB (4-core CPU/32 GB memory); 4GB (4-core CPU/16 GB memory)
Finance edition (standard):
CN: 2GB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 80-core CPU/640 GB memory, 72-core CPU/576 GB memory, 64-core CPU/512 GB memory, and 60-core CPU/480 GB memory); 1GB (32-core CPU/256 GB memory and 16-core CPU/128 GB memory); 512MB (8-core CPU/64 GB memory)
DN: 220GB (196-core CPU/1536 GB memory); 140GB (128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, and 96-core CPU/1024 GB memory); 100GB (96-core CPU/768 GB memory); 80GB (80-core CPU/640 GB memory and 72-core CPU/576 GB memory); 70GB (64-core CPU/512 GB memory); 60GB (60-core CPU/480 GB memory); 30GB (32-core CPU/256 GB memory); 16GB (16-core CPU/128 GB memory); 8GB (8-core CPU/64 GB memory)
Enterprise edition:
CN: 2GB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 80-core CPU/640 GB memory, 80-core CPU/512 GB memory, 72-core CPU/576 GB memory, 64-core CPU/512 GB memory, and 60-core CPU/480 GB memory); 1GB (32-core CPU/256 GB memory and 16-core CPU/128 GB memory); 512MB (8-core CPU/64 GB memory)
DN: 160GB (196-core CPU/1536 GB memory); 100GB (128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, and 96-core CPU/1024 GB memory); 76GB (96-core CPU/768 GB memory); 64GB (80-core CPU/640 GB memory); 56GB (72-core CPU/576 GB memory); 50GB (80-core CPU/512 GB memory and 64-core CPU/512 GB memory); 48GB (60-core CPU/480 GB memory); 24GB (32-core CPU/256 GB memory); 10GB (16-core CPU/128 GB memory); 6GB (8-core CPU/64 GB memory)
Finance edition (data computing):
CN: 2GB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 96-core CPU/768 GB memory); 1GB (72-core CPU/576 GB memory, 64-core CPU/512 GB memory); 512MB (32-core CPU/256 GB memory)
DN: 120GB (196-core CPU/1536 GB memory); 80GB (128-core CPU/1024 GB memory); 50GB (96-core CPU/768 GB memory); 40GB (72-core CPU/576 GB memory); 30GB (64-core CPU/512 GB memory); 10GB (32-core CPU/256 GB memory)
Setting method: This is a POSTMASTER parameter. Set it based on instructions in Table 1.
Setting suggestion:
- Set this parameter on DNs to a value greater than that on CNs because most queries in GaussDB are pushed down.
- Set shared_buffers to a value less than 40% of the memory.
- If shared_buffers is set to a larger value, increase the value of checkpoint_segments because a longer period of time is required to write a large amount of new or changed data.
- If the process fails to be restarted after the value of shared_buffers is changed, perform either of the following operations based on the error information:
- Adjust the kernel.shmall, kernel.shmmax, and kernel.shmmin OS parameters. For details, see "Preparing for Installation > Modifying OS Configuration > Configuring Other OS Parameters" in Installation Guide.
- Run the free -g command to check whether the available memory and swap space of the OS are sufficient. If the memory is insufficient, manually stop other user programs that occupy much memory.
- Set this parameter to the recommended default value for the corresponding specifications; otherwise, the value of shared_buffers may be too large or too small. The following condition must be met: data_replicate_buffer_size + segment_buffers + shared_buffers + wal_buffers + temp_buffers + maintenance_work_mem + work_mem + query_mem + (standby node) wal_receiver_buffer_size < max_process_memory < memory size of the physical machine. If the memory parameters are set too large and exceed the upper limit of the physical memory, the database cannot be started because the memory allocated to it is insufficient. A sketch of this check follows this list.
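A minimal Python sketch of the memory-budget condition in the last item (an illustration only; all component values are example settings in MB, not recommendations, and should be replaced with the values actually configured on the node):

    # Sketch: sum of the listed memory parameters < max_process_memory < physical memory.
    components_mb = {
        "data_replicate_buffer_size": 128,
        "segment_buffers": 8,
        "shared_buffers": 76 * 1024,
        "wal_buffers": 1024,
        "temp_buffers": 1,
        "maintenance_work_mem": 1024,
        "work_mem": 128,
        "query_mem": 0,
        "wal_receiver_buffer_size": 128,  # counted on standby nodes only
    }
    max_process_memory_mb = 192 * 1024
    physical_memory_mb = 256 * 1024
    print(sum(components_mb.values()) < max_process_memory_mb < physical_memory_mb)  # True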
page_version_check
Parameter description: Specifies whether to perform verification for underlying storage faults and pages not marked as dirty based on page version information. page_version_check is a three-level switch. The verification for underlying storage faults checks whether a page read from the underlying storage is of the correct version, which prevents loss of page version information caused by faults such as a disk power failure. The verification for pages not marked as dirty checks whether modified pages were left unmarked as dirty, and is controlled by the independent switch page_missing_dirty_check.
Value type: enumerated type.
Unit: none
Value range:
- off: The verification for underlying storage faults and pages not marked as dirty is disabled.
- memory: The page version verification function (that is, verification for underlying storage faults and pages not marked as dirty) in pure memory mode is enabled. The page version information is cached only in the memory and will be lost after a restart.
- persistence: The persistent page version verification function (that is, verification for underlying storage faults and pages not marked as dirty) is enabled. The page version information is persisted to files and will not be lost after a restart.
Default value: memory
Setting method: This is a POSTMASTER parameter. Set it based on instructions in Table 1.
Setting suggestions: Set the value of this parameter based on the specifications: off (4-core CPU/16 GB memory, 4-core CPU/32 GB memory, and 8-core CPU/64 GB memory) or memory (16-core CPU/128 GB memory, 32-core CPU/256 GB memory, 60-core CPU/480 GB memory, 64-core CPU/512 GB memory, 72-core CPU/576 GB memory, 80-core CPU/640 GB memory, 96-core CPU/768 GB memory, 96-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 128-core CPU/1024 GB memory, and 196-core CPU/1536 GB memory). Setting this parameter to memory affects performance, and devices with smaller specifications are affected more (for example, the performance impact on a 16-core CPU/128 GB memory device under the TPC-C model is about 7%). If the system needs to restart frequently, you are advised to set this parameter to persistence to ensure that the page version information is not lost; however, this affects performance.
page_missing_dirty_check
Parameter description: Specifies whether to check for modified pages that are not marked as dirty. page_missing_dirty_check is controlled by page_version_check. If page_version_check is set to off, setting page_missing_dirty_check to on does not take effect.
Parameter type: Boolean.
Unit: none
Value range:
- on: The verification for pages not marked as dirty is performed.
- off: The verification for pages not marked as dirty is not performed.
Default value: off
Setting method: This is a POSTMASTER parameter. Set it based on instructions in Table 1.
Setting suggestion: You are advised to enable this function in test scenarios so that as many pages left unmarked as dirty by code bugs as possible can be detected in non-production environments. On the live network, this function is disabled by default to avoid extra overhead and performance deterioration.
page_version_max_num
Parameter description: Specifies the maximum number of page versions that can be cached in the memory. This parameter is valid only when page_version_check is not set to off. The value of this parameter must be twice to four times the value of shared_buffers. Each page version entry occupies 36 bytes of memory; pay attention to the memory usage.
Parameter type: integer.
Unit: none
Value range: 0 to 2147483647.
- 0: When page_version_check is not set to off, the value of page_version_max_num is automatically calculated as shared_buffers x 2. For example, 32 MB of shared_buffers corresponds to 4096 buffers, so this parameter is set to 8192.
- Non-zero values: The manually configured value is forcibly used.
- If page_version_check is not set to off, the value cannot be less than 16 times the value of page_version_partitions. Otherwise, it is forcibly set to page_version_partitions x 16.
Default value: 0
Setting method: This is a POSTMASTER parameter. Set it based on instructions in Table 1.
Setting suggestion: If high performance is required and the memory is sufficient, you are advised to manually set this parameter to four times the value of shared_buffers and keep the ratio of this parameter to page_version_partitions within [256, 1024]. A sketch of the calculation rules follows.
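The following Python sketch mirrors the 0/non-zero rules and the lower bound described above (an illustration only; shared_buffers is expressed as a number of 8 KB buffers, and applying the partitions x 16 floor to the auto-calculated value is an assumption):

    # Sketch: effective page_version_max_num.
    def effective_page_version_max_num(configured, shared_buffers_count, partitions):
        """0 means auto (shared_buffers x 2); the result is floored at partitions x 16."""
        value = shared_buffers_count * 2 if configured == 0 else configured
        return max(value, partitions * 16)

    # 32 MB of shared_buffers is 4096 buffers, so the auto value is 8192 (as in the example above).
    n = effective_page_version_max_num(0, 4096, 4)
    print(n, n * 36)  # 8192 entries, 294912 bytes (each entry occupies 36 bytes)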
page_version_partitions
Parameter description: Specifies the number of hash table partitions used to cache page version information in memory. This parameter directly affects the hash query efficiency and the hash conflict probability.
Parameter type: integer.
Unit: none
Value range: 0–2097152.
- 0: If page_version_check is not set to off, the value of page_version_partitions is automatically calculated as page_version_max_num/512. If the automatically calculated value is smaller than 4, the parameter is forcibly set to 4.
- Non-zero values: The manually configured value is forcibly used. If page_version_check is not set to off, the minimum value is 4; if the value is less than 4, the parameter is forcibly set to 4.
Default value: 0
Setting method: This is a POSTMASTER parameter. Set it based on instructions in Table 1.
Setting suggestion: If you have high performance requirements, you are advised to manually set this parameter to a value 1/256 to 1/1024 of the value of page_version_max_num. A sketch of the auto-calculation rule follows.
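A matching Python sketch of the auto-calculation rule for this parameter (an illustration only):

    # Sketch: effective page_version_partitions.
    def effective_page_version_partitions(configured, page_version_max_num):
        """0 means auto (page_version_max_num / 512); the result is floored at 4."""
        value = page_version_max_num // 512 if configured == 0 else configured
        return max(value, 4)

    print(effective_page_version_partitions(0, 8192))  # 16
    print(effective_page_version_partitions(0, 1024))  # 4 (1024/512 = 2, floored to 4)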
page_version_recycler_thread_num
Parameter description: Specifies the number of threads for recycling and verifying page version information. This parameter is valid only when page_version_check is not set to off.
Parameter type: integer.
Unit: none
Value range: 0–16
- If page_version_check is set to memory:
- 0: The value is automatically calculated based on the value of page_version_partitions using the following formula: page_version_recycler_thread_num = page_version_partitions/16384. If the automatically calculated value is greater than 4, the parameter is forcibly set to 4.
- Non-zero values: The manually configured value is forcibly used.
- The parameter cannot be set to a value greater than that of page_version_partitions. Otherwise, it is forcibly set to the value of page_version_partitions.
- If page_version_check is set to persistence:
If the value is less than 2, set this parameter to 2. If the value is greater than or equal to 2, the manually configured parameter value is forcibly used.
Default value: 0
Setting method: This is a POSTMASTER parameter. Set it based on instructions in Table 1.
Setting suggestion: Retain the default value 0.
verify_log_buffers
Parameter description: Specifies the size of the verifyLog buffer. This parameter is valid only when page_version_check is set to persistence. The verifyLog buffer memory is managed by page, and each page is 8 KB.
Parameter type: integer.
Unit: page (8 KB)
Value range: 4 to 262144
Default value: 4 (32 KB)
Setting method: This is a POSTMASTER parameter. Set it based on instructions in Table 1. For example, if verify_log_buffers is set to 131072, the size of the verifyLog buffer is 1 GB, that is, 131072 multiplied by 8 KB; if verify_log_buffers is set to 131072KB, the size of the verifyLog buffer is 131072 KB. If the value contains a unit, the value must be kB, MB, or GB and must be an integer multiple of 8 KB.
Setting suggestion: Set this parameter based on the system hardware specifications.
1GB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 80-core CPU/640 GB memory, 64-core CPU/512 GB memory, 60-core CPU/480 GB memory, 32-core CPU/256 GB memory); 512MB (16-core CPU/128 GB memory); 256MB (8-core CPU/64 GB memory); 128MB (4-core CPU/32 GB memory); 16MB (4-core CPU/16 GB memory)
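A small Python sketch of the page-to-size conversion used in the setting method above (an illustration only):

    # Sketch: verify_log_buffers is managed in 8 KB pages.
    def verify_log_buffer_kb(pages):
        return pages * 8

    print(verify_log_buffer_kb(4))       # 32 KB, the default
    print(verify_log_buffer_kb(131072))  # 1048576 KB, i.e. 1 GB, matching the example above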
segment_buffers
Parameter description: Specifies the size of the memory used by GaussDB to cache segment-page metadata pages.
Parameter type: integer.
Unit: KB
Value range: 16 to 1073741823. The value of this parameter must be an integer multiple of BLCKSZ. Currently, BLCKSZ is set to 8kB. That is, the value of this parameter must be an integer multiple of 8 KB.
Default value: 8MB
Setting method: This is a POSTMASTER parameter. Set it based on instructions in Table 1.
Setting suggestions: segment_buffers is used to cache the content of segment-page headers, which is key metadata information. To improve performance, it is recommended that the segment headers of ordinary tables be cached in the buffer and not be replaced. You are advised to set this parameter based on the following formula: Number of tables (including indexes and TOAST tables) x Number of partitions x 3 + 128. Multiplying by 3 is because each table (partition) has some extra metadata segments. Generally, a table has three segments. Adding 128 at last is because segment-page tablespace management requires a certain number of buffers. If this parameter is set to a small value, it takes a long time to create a segment-page table for the first time. Therefore, you are advised to retain the default value to avoid setting segment_buffers to an excessively large or small value. The following condition must be met: data_replicate_buffer_size + segment_buffers + shared_buffers + wal_buffers + temp_buffers + maintenance_work_mem + work_mem + query_mem + (Standby node) wal_receiver_buffer_size < max_process_memory < Memory size of the physical machine. If the value of the memory parameter is too large and exceeds the upper limit of the physical memory, the database cannot be started because the memory allocated to the database is insufficient.
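The following Python sketch applies the sizing formula from the suggestion above (an illustration only; the object and partition counts are hypothetical, and the result is read here as a number of segment-header buffers rather than KB, which should be confirmed for your environment):

    # Sketch: recommended segment_buffers = objects x partitions x 3 + 128.
    def recommended_segment_buffers(object_count, partitions_per_object=1):
        """object_count includes tables, indexes, and TOAST tables."""
        return object_count * partitions_per_object * 3 + 128

    print(recommended_segment_buffers(500))     # 1628 for 500 non-partitioned objects
    print(recommended_segment_buffers(200, 8))  # 4928 for 200 objects with 8 partitions each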
bulk_write_ring_size
Parameter description: Specifies the size of a ring buffer used for parallel data import.
This is a USERSET parameter. Set it based on instructions in Table 1.
Value range: an integer ranging from 16384 to 2147483647. The unit is KB.
Default value: 2GB
Setting suggestion: Increase the value of this parameter on DNs if a huge amount of data will be imported.
standby_shared_buffers_fraction
Parameter description: Specifies the shared_buffers proportion used on the server where a standby instance is deployed.
This is a SIGHUP parameter. Set it based on instructions in Table 1.
Value range: a double-precision floating-point number ranging from 0.1 to 1.0
Default value: 1
temp_buffers
Parameter description: Specifies the maximum size of local temporary buffers used by a database session.
This is a USERSET parameter. Set it based on instructions in Table 1.
temp_buffers can be modified only before the first use of temporary tables within each session. Subsequent attempts to change the value of this parameter will not take effect on that session.
A session allocates temporary buffers as needed up to the value of temp_buffers. Setting a large value in a session that does not actually need many temporary buffers costs only the overhead of a buffer descriptor; each buffer that is actually used consumes an additional 8192 bytes.
Value range: an integer ranging from 100 to 1073741823. The unit is 8 KB.
Default value: 1MB
max_prepared_transactions
Parameter description: Sets the maximum number of transactions that can be in the "prepared" state simultaneously. Increasing the value of this parameter causes GaussDB to request more System V shared memory than the default configuration allows.
When GaussDB is deployed as an HA system, set this parameter on standby nodes to a value greater than or equal to that on primary nodes. Otherwise, queries will fail on the standby nodes.
Parameter type: integer.
Unit: none
Value range: 0 to 262143
Default value:
- Independent deployment:
1200 (60-core CPU/480 GB memory and 32-core CPU/256 GB memory); 800 (16-core CPU/128 GB memory); 400 (8-core CPU/64 GB memory); 300 (4-core CPU/32 GB memory); 200 (4-core CPU/16 GB memory)
- Finance edition (standard):
CN: 1200 (196-core CPU/1536 GB memory); 900 (128-core CPU/1024 GB memory and 104-core CPU/1024 GB memory); 800 (96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 80-core CPU/640 GB memory, 72-core CPU/576 GB memory, 64-core CPU/512 GB memory, and 60-core CPU/480 GB memory); 400 (32-core CPU/256 GB memory and 16-core CPU/128 GB memory); 200 (8-core CPU/64 GB memory)
DN: 4200 (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 80-core CPU/640 GB memory, 72-core CPU/576 GB memory, 64-core CPU/512 GB memory, and 60-core CPU/480 GB memory); 2200 (32-core CPU/256 GB memory); 1200 (16-core CPU/128 GB memory); 800 (8-core CPU/64 GB memory)
- Enterprise edition:
CN: 1200 (196-core CPU/1536 GB memory); 900 (128-core CPU/1024 GB memory and 104-core CPU/1024 GB memory); 800 (96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 80-core CPU/640 GB memory, 80-core CPU/512 GB memory, 72-core CPU/576 GB memory, 64-core CPU/512 GB memory, and 60-core CPU/480 GB memory); 400 (32-core CPU/256 GB memory and 16-core CPU/128 GB memory); 200 (8-core CPU/64 GB memory)
DN: 1800 (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, and 104-core CPU/1024 GB memory); 1200 (96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 80-core CPU/640 GB memory, 80-core CPU/512 GB memory, 72-core CPU/576 GB memory, 64-core CPU/512 GB memory, and 60-core CPU/480 GB memory); 800 (32-core CPU/256 GB memory); 400 (16-core CPU/128 GB memory and 8-core CPU/64 GB memory)
- Finance edition (data computing):
CN: 1200 (196-core CPU/1536 GB memory); 800 (128-core CPU/1024 GB memory, 96-core CPU/768 GB memory); 400 (72-core CPU/576 GB memory, 64-core CPU/512 GB memory); 200 (32-core CPU/256 GB memory)
DN: 2400 (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, and 96-core CPU/768 GB memory); 1200 (72-core CPU/576 GB memory); 800 (64-core CPU/512 GB memory); 400 (32-core CPU/256 GB memory)
Setting method: This is a POSTMASTER parameter. Set it based on instructions in Table 1.
Setting suggestions: The default value is recommended. You need to adjust the value only when a two-phase transaction reports an error indicating insufficient slots. To avoid failures in the preparation step, the value of this parameter must be greater than the number of worker threads in thread_pool_attr in thread pool mode. In non-thread pool mode, the value of this parameter must be greater than or equal to the value of max_connections.
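A minimal Python sketch of the sizing constraint above (an illustration only; worker_thread_count stands for the worker-thread number configured in thread_pool_attr, which you would read from your own configuration):

    # Sketch: validate max_prepared_transactions against the constraints above.
    def max_prepared_transactions_ok(value, thread_pool_enabled, worker_thread_count, max_connections):
        if thread_pool_enabled:
            return value > worker_thread_count  # thread pool mode
        return value >= max_connections         # non-thread pool mode

    print(max_prepared_transactions_ok(1200, True, 1024, 5000))  # True
    print(max_prepared_transactions_ok(800, False, 0, 1000))     # False: below max_connections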
work_mem
Parameter description: Specifies the amount of memory to be used by internal sort operations and hash tables before they write data into temporary disk files. Sorts are required for ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash joins, hash-based aggregation, and hash-based processing of IN subqueries.
In a complex query, several sort or hash operations may run in parallel; each operation will be allowed to use as much memory as this parameter specifies. If the memory is insufficient, data will be written into temporary files. In addition, several running sessions could be performing such operations concurrently. Therefore, the total memory used may be many times the value of work_mem.
This is a USERSET parameter. Set it based on instructions in Table 1.
Value range: an integer ranging from 64 to 2147483647. The unit is KB.
Default value:
- Independent deployment:
128MB (60-core CPU/480 GB memory, 32-core CPU/256 GB memory, and 16-core CPU/128 GB memory); 64MB (8-core CPU/64 GB memory); 32MB (4-core CPU/32 GB memory); 16MB (4-core CPU/16 GB memory)
- Finance edition (standard):
CN: 128MB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 80-core CPU/640 GB memory, 72-core CPU/576 GB memory, 64-core CPU/512 GB memory, 60-core CPU/480 GB memory, 32-core CPU/256 GB memory, and 16-core CPU/128 GB memory); 64MB (8-core CPU/64 GB memory)
DN: 256MB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, and 96-core CPU/768 GB memory); 128MB (80-core CPU/640 GB memory, 72-core CPU/576 GB memory, 64-core CPU/512 GB memory, 60-core CPU/480 GB memory, 32-core CPU/256 GB memory, and 16-core CPU/128 GB memory); 64MB (8-core CPU/64 GB memory)
- Enterprise edition:
128MB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 80-core CPU/640 GB memory, 80-core CPU/512 GB memory, 72-core CPU/576 GB memory, 64-core CPU/512 GB memory, 60-core CPU/480 GB memory, 32-core CPU/256 GB memory, and 16-core CPU/128 GB memory); 64MB (8-core CPU/64 GB memory)
- Finance edition (data computing):
128MB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 72-core CPU/576 GB memory, and 64-core CPU/512 GB memory); 64MB (32-core CPU/256 GB memory)
Setting suggestion: If the physical memory specified by work_mem is insufficient, additional operator calculation data will be written into temporary tables based on query characteristics and the degree of parallelism. This reduces performance by five to ten times and prolongs the query response time from seconds to minutes. Size work_mem as follows (a sketch of these formulas appears after this list):
- For complex serial queries, each query requires five to ten associated operations. Set work_mem using the following formula: work_mem = 50% of the memory/10.
- For simple serial queries, each query requires two to five associated operations. Set work_mem using the following formula: work_mem = 50% of the memory/5.
- For concurrent queries, set work_mem using the following formula: work_mem = work_mem for serial queries/Number of concurrent SQL statements.
- BitmapScan hash tables are also restricted by work_mem, but they will not be forcibly flushed to disks. In the case of complete lossify, every 1 MB of memory occupied by the hash table corresponds to 16 GB of pages for BitmapHeapScan. After the upper limit of work_mem is reached, the memory increases linearly with the data access traffic at this ratio.
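A Python sketch of the three sizing rules above (an illustration only; available_memory_mb stands for the memory available to the instance, and the divisors reflect the operation counts quoted above):

    # Sketch: work_mem sizing per the rules above (values in MB).
    def work_mem_complex_serial(available_memory_mb):
        return available_memory_mb * 0.5 / 10   # complex serial queries: 5-10 operations

    def work_mem_simple_serial(available_memory_mb):
        return available_memory_mb * 0.5 / 5    # simple serial queries: 2-5 operations

    def work_mem_concurrent(serial_work_mem_mb, concurrent_statements):
        return serial_work_mem_mb / concurrent_statements

    # Example: 100 GB available, 20 concurrent simple statements.
    serial = work_mem_simple_serial(100 * 1024)  # 10240 MB for a single serial query
    print(work_mem_concurrent(serial, 20))       # 512.0 MB per concurrent statement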
query_mem
Parameter description: Specifies the memory used by a query.
This is a USERSET parameter. Set it based on instructions in Table 1.
Value range: 0 or an integer greater than 32 MB. The default unit is KB.
Default value: 0
- If the value of query_mem is greater than 0, the optimizer adjusts the memory cost estimate to this value when generating an execution plan.
- If the value is set to a negative value or a positive integer less than 32 MB, the default value 0 is used. In this case, the optimizer does not adjust the estimated query memory.
query_max_mem
Parameter description: Specifies the maximum memory that can be used by a query.
This is a USERSET parameter. Set it based on instructions in Table 1.
Value range: 0 or an integer greater than 32 MB. The default unit is KB.
Default value: 0
- If the value of query_max_mem is greater than 0, an error is reported when the query memory usage exceeds the value.
- If the value is set to a negative value or a positive integer less than 32 MB, the default value 0 is used. In this case, the optimizer does not limit the query memory.
maintenance_work_mem
Parameter description: Specifies the maximum amount of memory to be used by maintenance operations, such as VACUUM and CREATE INDEX. This parameter may affect the execution efficiency of VACUUM, VACUUM FULL, CLUSTER, and CREATE INDEX.
This is a USERSET parameter. Set it based on instructions in Table 1.
Value range: an integer ranging from 1024 to 2147483647. The unit is KB.
Default value:
- Independent deployment:
CN: 1GB (60-core CPU/480 GB memory); 512MB (32-core CPU/256 GB memory); 256MB (16-core CPU/128 GB memory); 128MB (8-core CPU/64 GB memory); 64MB (4-core CPU/32 GB memory); 32MB (4-core CPU/16 GB memory)
DN: 2GB (60-core CPU/480 GB memory); 1GB (32-core CPU/256 GB memory); 512MB (16-core CPU/128 GB memory); 256MB (8-core CPU/64 GB memory); 128MB (4-core CPU/32 GB memory); 64MB (4-core CPU/16 GB memory)
- Finance edition (standard):
CN: 1GB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, and 80-core CPU/640 GB memory); 512MB (72-core CPU/576 GB memory and 64-core CPU/512 GB memory); 256MB (60-core CPU/480 GB memory, 32-core CPU/256 GB memory, 16-core CPU/128 GB memory, and 8-core CPU/64 GB memory)
DN: 2GB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 80-core CPU/640 GB memory, 72-core CPU/576 GB memory, 64-core CPU/512 GB memory, and 60-core CPU/480 GB memory); 1GB (32-core CPU/256 GB memory); 512MB (16-core CPU/128 GB memory); 256MB (8-core CPU/64 GB memory)
- Enterprise edition:
CN: 1GB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 80-core CPU/640 GB memory, and 80-core CPU/512 GB memory); 512MB (72-core CPU/576 GB memory and 64-core CPU/512 GB memory); 256MB (60-core CPU/480 GB memory, 32-core CPU/256 GB memory, 16-core CPU/128 GB memory, and 8-core CPU/64 GB memory)
DN: 2GB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 104-core CPU/1024 GB memory, 96-core CPU/1024 GB memory, 96-core CPU/768 GB memory, 80-core CPU/640 GB memory, 80-core CPU/512 GB memory, 72-core CPU/576 GB memory, 64-core CPU/512 GB memory, and 60-core CPU/480 GB memory); 1GB (32-core CPU/256 GB memory); 512MB (16-core CPU/128 GB memory); 256MB (8-core CPU/64 GB memory)
- Finance edition (data computing):
CN: 1GB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 96-core CPU/768 GB memory); 256MB (72-core CPU/576 GB memory, 64-core CPU/512 GB memory); 128MB (32-core CPU/256 GB memory)
DN: 2GB (196-core CPU/1536 GB memory, 128-core CPU/1024 GB memory, 96-core CPU/768 GB memory); 1GB (72-core CPU/576 GB memory, 64-core CPU/512 GB memory); 512MB (32-core CPU/256 GB memory)
Setting suggestion:
- The value of this parameter must be greater than that of work_mem so that database dumps can be more quickly cleared or restored. In a database session, only one maintenance operation can be performed at a time. Maintenance is usually performed when there are not many running sessions.
- When the Autovacuum process is running, up to autovacuum_max_workers times this memory may be allocated. In this case, set maintenance_work_mem to a value greater than or equal to that of work_mem.
- If a large amount of data is to be clustered, increase the value of this parameter in the session.
max_stack_depth
Parameter description: Specifies the maximum safe depth of the GaussDB execution stack. The safety margin is required because the stack depth is not checked in every routine in the server, but only in key potentially-recursive routines, such as expression evaluation.
Parameter type: integer.
Unit: KB
Value range: 100 to 2147483647
Default value:
- If the value of ulimit -s minus 640 KB is greater than or equal to 2 MB, the default value of this parameter is 2 MB.
- If the value of ulimit -s minus 640 KB is less than 2 MB, the default value of this parameter is the value of ulimit -s minus 640 KB.
Setting method: This is a SUSET parameter. Set it based on instructions in Table 1.
Setting suggestion:
- The database needs to reserve 640 KB of stack depth. Therefore, the maximum value of this parameter is the actual stack size limit enforced by the OS kernel (as set by ulimit -s) minus 640 KB (see the sketch after this list).
- If the value of this parameter is greater than the value of ulimit -s minus 640 KB before the database is started, the database fails to be started. During database running, if the value of this parameter is greater than the value of ulimit -s minus 640 KB, this parameter does not take effect.
- If the value of ulimit -s minus 640 KB is less than the minimum value of this parameter, the database fails to be started.
- Setting this parameter to a value greater than the actual kernel limit means that a running recursive function may crash an individual backend process.
- Since not all OSs provide this function, you are advised to set a specific value for this parameter.
- The default value is 2 MB, which is relatively small and does not easily cause system breakdown.
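The following Python sketch reproduces the default-value and upper-bound rules above (an illustration only; the soft stack limit is read via the standard resource module, which reports the same limit as ulimit -s on Linux):

    # Sketch: max_stack_depth default and upper bound, in KB.
    import resource

    def stack_limit_kb():
        """Soft stack limit (equivalent to ulimit -s), in KB, or None if unlimited."""
        soft, _hard = resource.getrlimit(resource.RLIMIT_STACK)
        return None if soft == resource.RLIM_INFINITY else soft // 1024

    def default_max_stack_depth_kb(ulimit_kb):
        return min(2 * 1024, ulimit_kb - 640)  # 2 MB, or ulimit -s minus 640 KB if smaller

    def max_effective_max_stack_depth_kb(ulimit_kb):
        return ulimit_kb - 640                 # larger values do not take effect

    ulimit_kb = stack_limit_kb()
    if ulimit_kb is not None:
        print(default_max_stack_depth_kb(ulimit_kb), max_effective_max_stack_depth_kb(ulimit_kb))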
bulk_read_ring_size
Parameter description: Specifies the ring buffer size used for parallel data export.
This is a USERSET parameter. Set it based on instructions in Table 1.
Value range: an integer ranging from 256 to 2147483647. The unit is KB.
Default value: 16MB
enable_early_free
Parameter description: Specifies whether the operator memory can be released in advance.
Parameter type: Boolean.
Unit: none
Value range:
- on indicates that the operator memory can be released in advance.
- off indicates that the operator memory cannot be released in advance.
Default value: on
Setting method: This is a USERSET parameter. Set it based on instructions in Table 1.
Setting suggestion: Retain the default value.
memory_trace_level
Parameter description: Specifies the control level for recording memory allocation information after the dynamic memory usage exceeds 90% of the maximum dynamic memory. This parameter takes effect only when the GUC parameters use_workload_manager and enable_memory_limit are enabled. This is a SIGHUP parameter. Set it based on instructions in Table 1.
Value range: enumerated values
- none: indicates that memory application information is not recorded.
- level1: After the dynamic memory usage exceeds 90% of the maximum dynamic memory, the following memory information is recorded and saved in the $GAUSSLOG/mem_log directory:
- Global memory overview.
- Memory usage of the top 20 memory contexts of the instance, session, and thread types.
- The totalsize and freesize columns for each memory context.
- level2: After the dynamic memory usage exceeds 90% of the maximum dynamic memory, the following memory information is recorded and saved in the $GAUSSLOG/mem_log directory:
- Global memory overview.
- Memory usage of the top 20 memory contexts of the instance, session, and thread types.
- The totalsize and freesize columns for each memory context.
- Detailed information about all memory applications in each memory context, including the file where the allocated memory is located, line number, and size.
Default value: level1
- If this parameter is set to level2, the memory allocation details (file, line, and size) of each memory context are recorded, which greatly affects the performance. Therefore, exercise caution when setting this parameter.
- You can use the system function gs_get_history_memory_detail(cstring) to query the recorded memory snapshot information. For details about the function, see "SQL Reference > Functions and Operators > Statistics Functions" in Developer Guide.
- If the use_workload_manager parameter is disabled and the bypass_workload_manager parameter is enabled, this parameter also takes effect. The bypass_workload_manager parameter is of the SIGHUP type; therefore, after the reload mode is set, you need to restart the database for the setting to take effect.
- The recorded memory context is obtained after all memory contexts of the same type with the same name are summarized.
resilience_memory_reject_percent
Parameter description: Specifies the dynamic memory usage percentage for escape from memory overload. This parameter takes effect only when the GUC parameters use_workload_manager and enable_memory_limit are enabled. This is a SIGHUP parameter. Set it based on instructions in Table 1.
Value range: a string, consisting of one or more characters.
This parameter consists of recover_memory_percent and overload_memory_percent.
- recover_memory_percent: Dynamic memory usage, as a percentage of the maximum dynamic memory, below which the memory is considered recovered from overload. When the dynamic memory usage is less than the maximum dynamic memory multiplied by this percentage, the overload escape function is stopped and new connections are allowed again. The value ranges from 0 to 100 and indicates a percentage.
- overload_memory_percent: Dynamic memory usage, as a percentage of the maximum dynamic memory, above which the memory is considered overloaded. When the dynamic memory usage is greater than the maximum dynamic memory multiplied by this percentage, the memory is overloaded; the overload escape function is triggered to kill sessions, and new connections are prohibited. The value ranges from 0 to 100 and indicates a percentage.
Default value: '0,0', indicating that the escape from memory overload function is disabled.
Example:
resilience_memory_reject_percent = '70,90'
When the memory usage exceeds 90% of the upper limit, new connections are forbidden and stacked sessions are killed. When the memory usage is less than 70% of the upper limit, session killing is stopped and new connections are allowed.
- You can query the maximum dynamic memory and used dynamic memory in the pv_total_memory_detail view. max_dynamic_memory indicates the maximum dynamic memory, and dynamic_used_memory indicates the used dynamic memory.
- If this parameter is set to a small value, the escape from memory overload process is frequently triggered. As a result, ongoing sessions are forcibly logged out, and new connections fail to be connected for a short period of time. Therefore, exercise caution when setting this parameter based on the actual memory usage.
- If the use_workload_manager parameter is disabled and the bypass_workload_manager parameter is enabled, this parameter also takes effect. The bypass_workload_manager parameter is of the SIGHUP type; therefore, after the reload mode is set, you need to restart the database for the setting to take effect.
- The values of recover_memory_percent and overload_memory_percent can both be 0. Otherwise, the value of recover_memory_percent must be smaller than that of overload_memory_percent; if it is not, the setting does not take effect.
resilience_escape_user_permissions
Parameter description: Specifies the escape permissions of users, that is, which users' jobs can be canceled by the escape function. Multiple users can be specified, separated by commas (,). The value can only be sysadmin, monadmin, or an empty string; by default, the parameter is left blank, which disables the escape function for the sysadmin and monadmin users. This is a SIGHUP parameter. Set it based on instructions in Table 1.
Value range: a string, consisting of one or more characters.
- sysadmin: Jobs of the sysadmin user can be canceled by the escape function.
- monadmin: Jobs of the monadmin user can be canceled by the escape function.
- '': The escape function of the sysadmin and monadmin users is disabled.
Default value: '', indicating that the escape function of the sysadmin and monadmin users is disabled.
Example:
resilience_escape_user_permissions = 'sysadmin,monadmin'
The escape function is enabled for both the sysadmin and monadmin users.
- You can set this parameter to multiple values separated by commas (,), for example, resilience_escape_user_permissions = 'sysadmin,monadmin'. You can also set this parameter to only one value, for example, resilience_escape_user_permissions = 'monadmin'.
- If this parameter is set for multiple times, the latest setting takes effect.
- If this parameter is set to any value in the value range, common users support the escape function.
- If a user has both the sysadmin and monadmin role permissions, the escape function of the user can be triggered only when resilience_escape_user_permissions is set to 'sysadmin,monadmin'.