Statement Behavior
This section describes the parameters that control the default behavior of SQL statement execution.
search_path
Definition: Sets the order in which schemas are searched when an object is referenced without a schema qualification. The value is a list of one or more schema names separated by commas.
- The schema in which the current session stores temporary tables can be included in the search path by using the alias pg_temp, for example, pg_temp, public. The schema storing temporary tables is always searched first, before pg_catalog and all schemas in search_path; that is, it has the highest search priority. You are advised not to explicitly set pg_temp in search_path. If pg_temp is specified in search_path but not at the beginning, the system indicates that the setting is invalid and pg_temp is still searched first. The alias pg_temp causes the schema storing temporary tables to be searched only for database objects such as tables, views, and data types, not for functions or operators.
- The schema pg_catalog, where system tables reside, is always searched before all schemas specified in search_path; that is, it has the second-highest search priority, after pg_temp. You are advised not to explicitly set pg_catalog in search_path. If pg_catalog is specified in search_path but not at the beginning, the system indicates that the setting is invalid and pg_catalog is still searched second.
- When creating an object without specifying a particular schema, it is placed in the first schema named in search_path. An error is reported if the search path is empty.
- The SQL function current_schema shows which schema in the search path is currently effective. This can differ from inspecting the value of search_path, because current_schema returns the first valid schema name in search_path.
Range: String

- Setting it to an empty string ('') causes the system to convert it to a pair of double quotes.
- Including double quotes in the setting is considered unsafe. The system converts each double quote into a pair of double quotes.
Default Value: default_db
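For example, the search path can be inspected and changed within a SQL session. This is a minimal sketch; the schema name my_schema is a placeholder for an existing schema in your database:
SHOW search_path;
SET search_path TO my_schema, public;   -- my_schema is searched first, then public
SELECT current_schema();                -- returns the first valid schema in the path
SET search_path TO DEFAULT;             -- restore the configured default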
current_schema
Definition: Sets the current schema.
Range: String
Default Value: default_db
statement_timeout
Definition: If the execution time of a statement exceeds the value of this parameter (timed from when the server receives the command), the statement reports an error and stops executing. The value 0 disables the timeout.
Range: an integer ranging from 0 to 2147483647. The unit is ms.
Default Value: 0
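For example, a session-level timeout can be set and verified as follows (a sketch; it assumes the PostgreSQL-compatible pg_sleep function is available):
SET statement_timeout = 1000;   -- cancel statements that run longer than 1000 ms
SELECT pg_sleep(5);             -- expected to fail with a statement timeout error
SET statement_timeout = 0;      -- 0 disables the timeout again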
bytea_output
Definition: Sets the output format for values of the bytea type.
Range: enumerated values
- hex: encodes binary data into two hexadecimal digits per byte.
- escape: Traditional PostgreSQL format. It indicates binary strings using ASCII character sequences, while converting those binary strings that cannot be represented as ASCII characters into special escape sequences.
Default Value: hex
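For example, the same bytea value renders differently under the two formats (a sketch; it assumes standard_conforming_strings is on, the usual default, so the literal is read as hex bytea input):
SET bytea_output = 'hex';
SELECT '\xDEADBEEF'::bytea;   -- displayed as \xdeadbeef
SET bytea_output = 'escape';
SELECT '\xDEADBEEF'::bytea;   -- non-printable bytes displayed as octal escape sequences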
xmlbinary
Definition: Sets how binary values are encoded in XML.
Range: enumerated values
- base64
- hex
Default Value: base64
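For example, when a bytea value is converted to XML by functions such as xmlelement, this setting determines the text encoding (a sketch; it assumes the PostgreSQL-compatible xmlelement function is available):
SET xmlbinary = 'base64';
SELECT xmlelement(name payload, 'abc'::bytea);   -- <payload>YWJj</payload>
SET xmlbinary = 'hex';
SELECT xmlelement(name payload, 'abc'::bytea);   -- <payload>616263</payload>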
xmloption
Definition: When converting between XML and string values, sets whether document or content is implied.
Range: enumerated values
- document: indicates a complete, well-formed XML document.
- content: indicates an XML content fragment.
Default Value: content
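For example, the setting controls which string-to-XML casts are accepted (a sketch):
SET xmloption = 'content';
SELECT '<a>1</a><b>2</b>'::xml;        -- accepted: a content fragment
SET xmloption = 'document';
SELECT '<a>1</a><b>2</b>'::xml;        -- rejected: not a single well-formed document
SELECT '<root><a>1</a></root>'::xml;   -- accepted: a complete document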
enable_disk_cache
Definition: Sets whether to enable disk caching and data prefetching. Currently, data prefetching takes effect only for PARQUET/ORC files, whereas disk caching takes effect for all file formats.
Range: Boolean
- on: enables disk caching and data prefetching.
- off: disables disk caching and data prefetching.
Default Value: on
disk_cache_max_size
Definition: Sets the total size of the disk cache.
Range: an integer ranging from 512 MB to 1 PB.
Default Value: 5 GB
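For example, the current cache settings can be inspected as follows (a sketch; whether these parameters can be changed at session level or only at cluster level is not stated here, so the SET line and the '10GB' size syntax are assumptions):
SHOW enable_disk_cache;
SHOW disk_cache_max_size;
SET disk_cache_max_size = '10GB';   -- assumption: session-level change is permitted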
enable_aio_scheduler
Definition: Controls whether to enable asynchronous I/O scheduling, which is the basis for asynchronous reads and writes.
Range: Boolean
- on or true: enables asynchronous I/O scheduling.
- off or false: disables asynchronous I/O scheduling.
Default Value: on
runtime_filter_type
Definition: Specifies the type of runtime filter to use.
Range: String
- all: applies all runtime filters except global_filter.
- min_max: applies the runtime filter only in join scenarios and generates only min_max filters.
- bloom_filter: applies the runtime filter only in join scenarios and generates a bloom_filter when certain conditions are met.
- topn_filter: applies the runtime filter in both join scenarios and ORDER BY ... LIMIT scenarios, but does not apply to external tables.
- global_filter: indicates the cross-DN runtime filter in join scenarios. Once enabled, min_max or bloom_filter filters can be generated for data on different DNs.
- none: does not use any runtime filter; only the original bloom_filter remains effective for filtering.
Default Value: none
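For example (a sketch; whether the parameter accepts a comma-separated combination of values is not stated here, so only single values are shown):
SHOW runtime_filter_type;
SET runtime_filter_type = 'min_max';   -- generate only min_max filters in join scenarios
SET runtime_filter_type = 'none';      -- back to the default: no runtime filter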
enable_meta_scan
Definition: Determines whether to enable metaScan during Iceberg queries.
Range: Boolean
- on or true: enables metaScan, meaning the DN distributively retrieves the file list to be scanned during queries.
- off or false: disables metaScan, meaning the CN retrieves the file list to be scanned during queries.
Default Value: true
enable_spill_to_remote_storage
Definition: Controls whether to enable the spill-to-OBS feature. When set to true, the feature is enabled: data spilled to disk is managed by the disk cache, and OBS is used as an overflow path when local space is insufficient. When set to false, the previous behavior of writing spill data directly to the local EVS disk is used. This parameter has a dependency: when enable_spill_to_remote_storage is enabled, use_yr_as_block_cache_backend must also be set to false.
Range: Boolean
- on or true: enabled
- off or false: disabled
Default Value: true

- Because YuanRong-dependent features such as append buf are temporarily unavailable in version 25.3.0, the spill-to-OBS feature currently has limitations. When enable_spill_to_remote_storage is enabled, the storage backend of the disk cache cannot use the YuanRong data system. Ensure that use_yr_as_block_cache_backend is set to false before enabling spill-to-OBS.
- When use_yr_as_block_cache_backend is false, the near-computation cache caches data directly in a directory under the DN instance, so pay attention to the system disk space. You are advised to map the path where the DN instances are stored in the function-agent to a high-capacity physical EVS disk to avoid issues such as pod eviction caused by insufficient system disk space.
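For example, the dependency described above can be satisfied before enabling the feature (a sketch; whether these parameters can be changed at session level is an assumption):
SHOW use_yr_as_block_cache_backend;
SET use_yr_as_block_cache_backend = off;   -- must be false before enabling spill-to-OBS
SET enable_spill_to_remote_storage = on;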
staging_folder_expire_time
Definition: Specifies the interval for automatically clearing residual temporary file directories of ORC and PARQUET tables. By default, residual temporary file directories are automatically cleared after seven days.
Range: an int64 ranging from 12 hours to 1 year. The unit is second.
Default Value: 604800
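For example, the default of 604800 corresponds to 7 x 24 x 3600 seconds, that is, seven days; shortening the interval to three days could look as follows (a sketch, assuming the value is specified in seconds and can be changed at session level):
SHOW staging_folder_expire_time;           -- 604800 seconds = 7 days
SET staging_folder_expire_time = 259200;   -- 259200 seconds = 3 days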
obs_result_format
Definition: Specifies the format of the result set file written to OBS and whether to enable result set file compression.
Type: USERSET
Range: an integer ranging from 0 to 3.
- 0: The result set file is in JSON format and is not compressed.
- 1: The result set file is in JSON format and is compressed using the zstd algorithm.
- 2: The result set file is in ARROW format and is not compressed.
- 3: The result set file is in ARROW format and is compressed using the zstd algorithm.
Default Value: 0
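For example, to write result set files in ARROW format with zstd compression (a sketch):
SET obs_result_format = 3;   -- ARROW format, zstd-compressed
SHOW obs_result_format;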
resource_track_level
Definition: Specifies the type of information reported by the query_plan field in the SQL monitoring data. Currently, this parameter is valid only for the SELECT, INSERT, DELETE, UPDATE, and CREATE TABLE AS statements.
Type: USERSET
Range: enumerated values
- query: The query_plan field in the SQL monitoring data reports the explain information.
- perf: The query_plan field in the SQL monitoring data reports the explain performance information.
Default Value: perf
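For example, to report only the plain explain plan instead of the full performance information (a sketch):
SET resource_track_level = 'query';   -- query_plan reports the explain information only
SET resource_track_level = 'perf';    -- restore the default: explain performance information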