Updated on 2024-06-26 GMT+08:00

Optimizable Node System Parameters

CCE provides default node system parameters, which may cause performance bottlenecks in some scenarios. To address this, you can customize and optimize some of these parameters. Table 1 describes the node system parameters that can be optimized.

  • Modifying these parameters involves certain risks. Make sure you are familiar with Linux commands and the Linux OS before proceeding.
  • The parameters listed in Table 1 have been tested and verified. Do not modify other parameters. Otherwise, node faults may occur.
  • The commands for modifying node system parameters are valid only for public images. For private images, the commands provided in this document are for reference only.
  • After the node is restarted, run the sysctl -p command to reapply the parameter values in /etc/sysctl.conf.
Table 1 System parameters that can be optimized

kernel.pid_max
Parameter location: /etc/sysctl.conf
Description: Maximum number of process IDs on a node.
Obtaining the parameter:
sysctl kernel.pid_max
Reference: Changing Process ID Limits (kernel.pid_max)
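For kernel.pid_max and the other parameters below that live in /etc/sysctl.conf, a change generally follows the same two-step pattern. The value here is only an illustration, not a recommended setting:

# Apply to the running kernel immediately (illustrative value)
sysctl -w kernel.pid_max=4194304
# Persist the setting in /etc/sysctl.conf and reload it
echo "kernel.pid_max = 4194304" >> /etc/sysctl.conf
sysctl -p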

RuntimeMaxUse
Parameter location: /etc/systemd/journald.conf
Description: Upper limit of the memory occupied by the node log cache. If this parameter is not set, a large amount of memory will be occupied after the system runs for a long time.
Obtaining the parameter:
cat /etc/systemd/journald.conf | grep RuntimeMaxUse
Reference: Changing the RuntimeMaxUse of the Memory Used by the Log Cache on a Node
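A minimal sketch of capping the log cache, assuming a 500M limit is acceptable (the value is illustrative only):

# /etc/systemd/journald.conf
[Journal]
RuntimeMaxUse=500M

# Restart journald so the new limit takes effect
systemctl restart systemd-journald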

Openfiles
Parameter location: /etc/security/limits.conf
Description: Maximum number of file handles for a single process on a node. Adjust it as required.
Obtaining the parameter:
ulimit -n
Reference: Changing the Maximum Number of File Handles for a Single Process on a Node (Openfiles)
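A sketch of raising the limit in /etc/security/limits.conf (65535 is an illustrative value; the change applies to sessions opened after the edit):

# /etc/security/limits.conf
*    soft    nofile    65535
*    hard    nofile    65535

# Verify in a new login session
ulimit -n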

LimitNOFILE, LimitNPROC
Parameter location:
  • CentOS/EulerOS:
    • Docker nodes: /usr/lib/systemd/system/docker.service
    • containerd nodes: /usr/lib/systemd/system/containerd.service
  • Ubuntu:
    • Docker nodes: /lib/systemd/system/docker.service
    • containerd nodes: /lib/systemd/system/containerd.service
Description: Maximum number of file handles for a single process in a container. Adjust it as required.
Obtaining the parameter:
Docker nodes:
cat /proc/`pidof dockerd`/limits | grep files
containerd nodes:
cat /proc/`pidof containerd`/limits | grep files
Reference: Changing the Maximum Number of File Handles for a Single Container Process
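A sketch of the change on a Docker node; the same two directives apply to the containerd.service file on containerd nodes, and 1048576 is only an illustrative value:

# In the [Service] section of the docker.service file listed above
LimitNOFILE=1048576
LimitNPROC=infinity

# Reload systemd and restart the runtime for the limits to take effect
systemctl daemon-reload
systemctl restart docker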

file-max
Parameter location: /etc/sysctl.conf
Description: Maximum number of file handles in the system. Adjust it as required.
Obtaining the parameter:
sysctl fs.file-max
Reference: Changing the Maximum Number of System-Level File Handles on a Node

nf_conntrack_buckets, nf_conntrack_max
Parameter location: /etc/sysctl.conf
Description: Capacity of the connection tracking table. Adjust it as required.
Bucket usage = [nf_conntrack_count]/[nf_conntrack_buckets]
Adjust the buckets value to keep the bucket usage below 0.7.
Obtaining the parameter:
sysctl net.netfilter.nf_conntrack_count
sysctl net.netfilter.nf_conntrack_buckets
sysctl net.netfilter.nf_conntrack_max
Reference: Modifying Node Kernel Parameters
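A sketch of checking the usage and enlarging the table; the numbers are placeholders only. On older kernels, net.netfilter.nf_conntrack_buckets may be read-only through sysctl, in which case the hash size is set through /sys/module/nf_conntrack/parameters/hashsize instead:

# Bucket usage = nf_conntrack_count / nf_conntrack_buckets; keep it below 0.7
sysctl net.netfilter.nf_conntrack_count net.netfilter.nf_conntrack_buckets
# Illustrative values: enlarge the hash table and the entry limit
sysctl -w net.netfilter.nf_conntrack_buckets=262144
sysctl -w net.netfilter.nf_conntrack_max=1048576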

net.netfilter.nf_conntrack_tcp_timeout_close
Parameter location: /etc/sysctl.conf
Description: Expiration time of entries in the close state in the connection tracking table. Shortening the expiration time speeds up their reclamation.
Obtaining the parameter:
sysctl net.netfilter.nf_conntrack_tcp_timeout_close

net.netfilter.nf_conntrack_tcp_be_liberal
Parameter location: /etc/sysctl.conf
Description: The value can be 0 or 1.
  • 0: The function is disabled. All packets that fall outside the TCP window are marked as invalid.
  • 1: The function is enabled. Only RST packets that fall outside the TCP window are marked as invalid. In containers, enabling this function prevents the bandwidth of TCP connections that have been translated using NAT from being limited.
Obtaining the parameter:
sysctl net.netfilter.nf_conntrack_tcp_be_liberal
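A minimal sketch of enabling the liberal mode for the running kernel; persisting it follows the same /etc/sysctl.conf pattern shown under kernel.pid_max:

sysctl -w net.netfilter.nf_conntrack_tcp_be_liberal=1
# Confirm the new value
sysctl net.netfilter.nf_conntrack_tcp_be_liberal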

tcp_keepalive_time
Parameter location: /etc/sysctl.conf
Description: Interval at which TCP keepalive messages are sent. If this parameter is set to a large value, TCP connections may stay in the CLOSE_WAIT state for a long time, exhausting system resources.
Obtaining the parameter:
sysctl net.ipv4.tcp_keepalive_time

tcp_max_syn_backlog
Parameter location: /etc/sysctl.conf
Description: Maximum number of TCP half-open connections, that is, the maximum number of connections in the SYN_RECV queue.
Obtaining the parameter:
sysctl net.ipv4.tcp_max_syn_backlog

tcp_max_tw_buckets
Parameter location: /etc/sysctl.conf
Description: Maximum number of sockets in the TIME_WAIT state that can exist at any time. If the value is too large, node resources may be exhausted.
Obtaining the parameter:
sysctl net.ipv4.tcp_max_tw_buckets

net.core.somaxconn
Parameter location: /etc/sysctl.conf
Description: Maximum length of the TCP listen backlog, that is, the number of established connections that can wait in a listening socket's accept queue. If the value is too small, the queue can easily run out of room for new connections; if it is too large, system resources may be wasted because each connection waiting in the queue occupies some memory.
Obtaining the parameter:
sysctl net.core.somaxconn

max_user_instances
Parameter location: /etc/sysctl.conf
Description: Maximum number of inotify instances allowed for each user. If the value is too small, containers may run out of inotify instances.
Obtaining the parameter:
sysctl fs.inotify.max_user_instances

max_user_watches
Parameter location: /etc/sysctl.conf
Description: Maximum number of files and directories that can be watched across all inotify instances of a user. If the value is too small, containers may run out of inotify watches.
Obtaining the parameter:
sysctl fs.inotify.max_user_watches
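Before raising the inotify limits, it can help to see how many instances are currently open. The procfs lookup below is a common technique (root may be needed to see all processes), and the new limits are illustrative only:

# Count inotify instances currently open on the node
find /proc/*/fd -lname 'anon_inode:inotify' 2>/dev/null | wc -l
# Illustrative new limits for the running kernel
sysctl -w fs.inotify.max_user_instances=8192
sysctl -w fs.inotify.max_user_watches=524288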

netdev_max_backlog
Parameter location: /etc/sysctl.conf
Description: Size of the packet receive queue of the network protocol stack. If the value is too small, the queue may be insufficient.
Obtaining the parameter:
sysctl net.core.netdev_max_backlog

net.core.wmem_max, net.core.rmem_max
Parameter location: /etc/sysctl.conf
Description: Maximum size (in bytes) of the socket send and receive buffers. If the value is too small, the buffers may be insufficient in large-file scenarios.
Obtaining the parameter:
sysctl net.core.wmem_max
sysctl net.core.rmem_max

net.ipv4.neigh.default.gc_thresh1, net.ipv4.neigh.default.gc_thresh2, net.ipv4.neigh.default.gc_thresh3
Parameter location: /etc/sysctl.conf
Description: Thresholds that control garbage collection of ARP entries.
  • gc_thresh1: minimum number of entries to keep. If the number of entries is below this value, the garbage collector (GC) does not reclaim them. Do not modify the default value.
  • gc_thresh2: when the number of entries exceeds this value, the GC clears entries that have existed for more than 5 seconds. Do not modify the default value.
  • gc_thresh3: maximum number of non-permanent entries. If the node handles a large number of IP addresses or is directly connected to a large number of devices, increase this value.
Obtaining the parameter:
sysctl net.ipv4.neigh.default.gc_thresh1
sysctl net.ipv4.neigh.default.gc_thresh2
sysctl net.ipv4.neigh.default.gc_thresh3
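A sketch of increasing only gc_thresh3, keeping gc_thresh1 and gc_thresh2 at their defaults as recommended above (8192 is an illustrative value):

sysctl -w net.ipv4.neigh.default.gc_thresh3=8192
# Confirm the new value
sysctl net.ipv4.neigh.default.gc_thresh3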

vm.max_map_count
Parameter location: /etc/sysctl.conf
Description: Maximum number of memory map areas a process can use. If the value is too small, installing applications such as ELK fails with a message indicating that the space is insufficient.
Obtaining the parameter:
sysctl vm.max_map_count
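As a sketch, Elasticsearch (part of the ELK stack) requires at least 262144 for this parameter; use the value your application actually documents:

# Apply immediately and persist across restarts
sysctl -w vm.max_map_count=262144
echo "vm.max_map_count = 262144" >> /etc/sysctl.conf
sysctl -p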