Updated on 2026-03-10 GMT+08:00

Configuring an Application That Acts as a Gateway

Application Scenarios

Applications that act as gateways, such as Nginx and API gateways, serve as the central traffic entry point of a system. They provide essential functions, including scheduling, request forwarding, and authentication, which are particularly crucial for handling high-concurrency traffic.

When a system receives a large number of external requests within a short period (for example, during peak hours such as promotional events, system switchover, or scheduled tasks), a gateway node must manage a rapid influx of external connections. This results in a sharp increase in entries within the connection tracking table. If the table reaches its capacity, the kernel may begin to drop new incoming requests. To prevent connection loss and ensure service stability, it is necessary to ensure that the connection tracking table has sufficient capacity to handle traffic surges.

This section provides suggestions on configuring an application that acts as a gateway. The suggestions cover optimizing the kube-proxy configuration and configuring alarms.

Optimizing the kube-proxy Configuration

You are advised to adjust the connection tracking table size and timeout settings of kube-proxy based on the request volume handled by the application. This ensures that the connection tracking table capacity can accommodate peak traffic loads and prevents request failures caused by table exhaustion.
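As a reference for sizing, upstream kube-proxy sets a node's nf_conntrack_max to the larger of conntrack-min and conntrack-max-per-core multiplied by the number of CPU cores. The following is a minimal sketch of that calculation, assuming the upstream kube-proxy defaults and a hypothetical 8-core node:

```shell
# Sketch of how kube-proxy sizes the conntrack table (upstream defaults assumed):
# effective max = max(conntrack-min, conntrack-max-per-core * CPU cores)
conntrack_min=131072   # kube-proxy default for --conntrack-min
max_per_core=32768     # kube-proxy default for --conntrack-max-per-core
cores=8                # hypothetical node size

computed=$(( max_per_core * cores ))
if [ "$computed" -gt "$conntrack_min" ]; then
  echo "$computed"     # 32768 * 8 = 262144
else
  echo "$conntrack_min"
fi
```

This is why raising conntrack-min only takes effect on nodes where it exceeds the per-core computed value; on larger nodes the per-core term usually dominates.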

To modify the kube-proxy parameters on a node, take the following steps:

  1. Log in to the CCE console.
  2. Click the cluster name to access the cluster console. In the navigation pane, choose Nodes. In the right pane, click the Node Pools tab.
  3. Locate the row containing the target node pool and click Manage.
  4. In the window that slides out from the right, modify the kube-proxy parameters.

    Item: Minimum Connection Tracking Entries
    Parameter: conntrack-min
    Description: The minimum number of connection tracking entries reserved in the Linux conntrack table. This guarantees that the system maintains a fixed number of entries even when the actual number of connections is low, avoiding the performance overhead of dynamically allocating and releasing resources.
    Value: Default: 131072. To obtain the current value on a node, run sysctl net.nf_conntrack_max.
    Modification: None
    Configuration Method: Console/API

    Item: Wait Time for a Closed TCP Connection
    Parameter: conntrack-tcp-timeout-close-wait
    Description: How long conntrack retains the entry for a TCP connection in the CLOSE_WAIT state before releasing it.
    Value: Default: 1h0m0s. To obtain the current value on a node, run sysctl net.netfilter.nf_conntrack_tcp_timeout_close_wait.
    Modification: None
    Configuration Method: Console/API

    Item: IPVS Scheduling Policy
    Parameter: node-ipvs-scheduler
    Description: The IPVS load-balancing algorithm used by kube-proxy on the nodes in the node pool. If this parameter is not set, the round robin (rr) algorithm is used by default. The options are as follows:
    • rr: The round robin algorithm, which allocates requests to backend servers in sequence. When every server has the same weight, requests are evenly distributed.
    • lc: The least connections algorithm, which allocates each new request to the backend server that is currently handling the fewest active connections.
    • dh: The destination hashing algorithm, which hashes requests by destination IP address so that requests with the same destination IP address go to the same backend server.
    • sh: The source hashing algorithm, which hashes requests by source IP address so that requests with the same source IP address go to the same backend server.
    • sed: The shortest expected delay (SED) algorithm, which allocates each request to the backend server with the shortest expected delay, calculated from active connections and server weights. In this way, powerful, lightly loaded servers get new requests sooner.
    • nq: The never queue algorithm, which skips the scheduling calculation when a backend server has zero active connections and immediately assigns the request to that idle server, eliminating scheduling delay for idle servers.
    Value: Default: rr
    Modification: Clusters of v1.28.15-r60, v1.29.15-r20, v1.30.14-r20, v1.31.10-r20, v1.32.6-r20, v1.33.5-r10, or later support this configuration.
    Configuration Method: Console/API

    For details, see Modifying Node Pool Configurations.

  5. Click OK.

    If there are many node pools in the cluster, you need to ensure that the application can be scheduled to the node pool where the kube-proxy parameters have been modified. For details, see Configuring Node Affinity Scheduling (nodeAffinity).
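As an illustration of the scheduling constraint mentioned above, the sketch below generates a Deployment affinity fragment that pins Pods to a labeled node pool. The label key and value (cce-nodepool, gateway-pool) are placeholders, not CCE-defined names; use the labels that actually exist on your node pool.

```shell
# Write a hypothetical nodeAffinity fragment for the gateway workload.
# The label key and value below are placeholders, not CCE-defined names.
cat > /tmp/gateway-affinity.yaml <<'EOF'
affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: cce-nodepool          # placeholder label key
          operator: In
          values:
          - gateway-pool             # placeholder node pool name
EOF
grep -c 'nodeAffinity' /tmp/gateway-affinity.yaml
```

This fragment would be merged into the Pod template spec of the gateway Deployment so that its Pods can only land on nodes carrying the matching label.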

Configuring Alarms

Alarm Settings

  1. Install the CCE Node Problem Detector add-on in the cluster. This add-on monitors node events. Pay attention to the check metric ConntrackFullProblem. The default alarm threshold of this metric is 90%. For details, see CCE Node Problem Detector.
  2. Install the Cloud Native Cluster Monitoring add-on and enable Report Monitoring Data to AOM. For details, see Cloud Native Cluster Monitoring.
  3. Enable Alarm Center. The default rule in the Alarm Center contains the Node conntrack table full alarm. For details, see Configuring Alarms in Alarm Center.

    Alarm Item: Node conntrack table full
    Description: Checks whether the node's connection tracking table has sufficient capacity.
    Alarm Type: Metric
    Dependencies: Cloud Native Cluster Monitoring, CCE Node Problem Detector
    PromQL: problem_gauge{type="ConntrackFullProblem"} >=1
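The 90% threshold used by the ConntrackFullProblem check can be reproduced by hand. Below is a minimal sketch with illustrative sample values; on a real node the two counts come from the /proc files under /proc/sys/net/netfilter:

```shell
# Recompute conntrack usage against the 90% alarm threshold (sample values).
count=120000    # e.g. from /proc/sys/net/netfilter/nf_conntrack_count
max=131072      # e.g. from /proc/sys/net/netfilter/nf_conntrack_max

usage=$(( count * 100 / max ))   # integer percentage
if [ "$usage" -ge 90 ]; then
  echo "conntrack usage ${usage}%: alarm threshold reached"
else
  echo "conntrack usage ${usage}%: ok"
fi
```

With the sample values above, usage evaluates to 91%, which crosses the default 90% threshold and would raise the alarm.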

Connection Tracking Table Checks After the Alarm Is Triggered

When a "nf_conntrack: table full, dropping packet" error appears in the kernel logs (dmesg), it indicates that the number of connection tracking entries has reached the nf_conntrack_max limit. In this case, it is necessary to increase the maximum number of Linux connection tracking entries.
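For reference, the kernel message to search for in dmesg can be matched as shown below; the sample log line is illustrative, but the "table full, dropping packet" text is the message the kernel emits:

```shell
# Count occurrences of the conntrack-full kernel message in a sample log line.
sample='nf_conntrack: nf_conntrack: table full, dropping packet'
printf '%s\n' "$sample" | grep -c 'table full, dropping packet'
```

On a real node, the same pattern would be applied to the output of dmesg instead of the sample string.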

Check the usage of the connection tracking table and the distribution of entries by protocol.

  • View the entry details.
    conntrack -L
    Alternatively, run cat /proc/net/nf_conntrack and filter the output with grep.
  • Check the entry count.
    cat /proc/sys/net/netfilter/nf_conntrack_count
  • View the maximum table size.
    cat /proc/sys/net/netfilter/nf_conntrack_max
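The per-protocol distribution mentioned above can be derived from the /proc output: the third field of each /proc/net/nf_conntrack line is the protocol name. A sketch that parses sample lines (the addresses and timers below are made up) and counts entries per protocol:

```shell
# Count conntrack entries per protocol; field 3 of /proc/net/nf_conntrack
# is the protocol name. The sample lines below are illustrative.
sample='ipv4 2 tcp 6 117 TIME_WAIT src=10.0.0.1 dst=10.0.0.9 sport=4121 dport=80
ipv4 2 tcp 6 431999 ESTABLISHED src=10.0.0.2 dst=10.0.0.9 sport=4122 dport=80
ipv4 2 udp 17 29 src=10.0.0.3 dst=10.0.0.9 sport=53 dport=53'

printf '%s\n' "$sample" \
  | awk '{print $3}' | sort | uniq -c | sort -rn \
  | awk '{print $1, $2}'
```

On a real node, replace the sample string with cat /proc/net/nf_conntrack to see which protocol dominates the table.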

Troubleshooting Suggestions

  • If a large number of TCP connections are being tracked, identify the services generating them. For services that create many short-lived connections, switch to long-lived (persistent) connections where possible.
  • Perform pressure testing before service rollout, observe the metrics, and check whether any application connection is not released in a timely manner.
  • Consider scaling out nodes or modifying kube-proxy parameters to prevent connection failures due to a full connection tracking table.