
Network

You can configure a default security group and secondary CIDR block for your clusters.

Cluster Network

Table 1 Parameters

VPC

VPC where a cluster resides

VPC enables you to provision logically isolated, configurable, and manageable virtual networks for cloud servers, cloud containers, and cloud databases. A VPC gives you complete control over your virtual network: you can select your own IP address range, create subnets, configure security groups, assign EIPs, and allocate bandwidth, enabling secure and easy access to your business systems.

VPC CIDR Block

VPC CIDR Block of a cluster

Default Node Subnet

Node subnet of a cluster

A subnet is a network that manages the ECS network plane. Subnets provide IP address management and DNS services. ECSs in a subnet are assigned IP addresses from that subnet's CIDR block.

By default, ECSs in all subnets of the same VPC can communicate with one another, but ECSs in different VPCs cannot.

You can create a VPC peering connection to enable ECSs in different VPCs to communicate with each other.

Default Node Subnet | IPv4 CIDR Block

Node subnet CIDR block of a cluster

Network Model

Network model of a cluster

After a cluster is created, its network model cannot be changed. For details about the comparison between different network models, see Overview.

Default Node Security Group

Default security group of the worker nodes in a cluster

You can select a custom security group as the default node security group of a cluster. Make sure the security group allows traffic on the required ports so that nodes in the cluster can communicate normally.

If you modify the custom security group, the modified security group applies only to nodes created or accepted after the change. For existing nodes, you need to manually update their security group rules.

Retain the non-spoofed CIDR block of the original pod IP address (available only in clusters using the VPC network model)

In a cluster using the VPC network model, 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16 are regarded as private CIDR blocks of the cluster by default. If the VPC in which the cluster resides uses a secondary CIDR block, operations such as creating or resetting a node also add the secondary CIDR block to the private CIDR blocks.

If a pod accesses an address in a private CIDR block, the source node does not perform NAT on the pod IP address. Instead, the upper-layer VPC sends the pod's packets directly to the destination. That is, the pod IP address is used directly to communicate with private CIDR blocks in the cluster.

This function is available only in clusters of v1.23.14-r0, v1.25.9-r0, v1.27.6-r0, v1.28.4-r0, or later versions. For details, see Retaining the Original IP Address of a Pod.

NOTE:

To enable a node to access a pod on another node, the node CIDR block must be added to this parameter.

Similarly, to enable an ECS to access the IP address of a pod in a cluster that is in the same VPC as the ECS, the ECS CIDR block must be added to this parameter.
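
To illustrate this behavior, the following minimal Python sketch (not CCE's actual implementation; the CIDR list simply mirrors the defaults described above) shows how a node decides whether to perform NAT on traffic leaving a pod:

```python
from ipaddress import ip_address, ip_network

# Default private CIDR blocks of a cluster using the VPC network model,
# plus any secondary VPC CIDR blocks (see the description above).
PRIVATE_CIDRS = [
    ip_network("10.0.0.0/8"),
    ip_network("172.16.0.0/12"),
    ip_network("192.168.0.0/16"),
]

def needs_nat(destination: str) -> bool:
    """Return True if the node should NAT the pod IP for this destination.

    Traffic to a private CIDR block keeps the original pod IP address;
    all other traffic is translated to the node IP address.
    """
    dst = ip_address(destination)
    return not any(dst in cidr for cidr in PRIVATE_CIDRS)

print(needs_nat("172.16.5.9"))   # False: pod IP address is retained
print(needs_nat("100.64.1.20"))  # True: node performs NAT
```

In terms of this sketch, adding the node or ECS CIDR blocks mentioned in the note above to this parameter corresponds to appending them to PRIVATE_CIDRS.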

Pod's Access to Metadata (available only in CCE Turbo clusters)

Whether to allow pods in a cluster to access node metadata, such as the AZ and enterprise project ID. For details about the metadata, see ECS Metadata Types. This function is available only in clusters of v1.23.13-r0, v1.25.8-r0, v1.27.5-r0, v1.28.3-r0, or later versions.

  • If a pod is created while this function is enabled, whether it can access metadata follows the current function status.
  • If a pod is created while this function is disabled, or in a cluster of an earlier version, it cannot access metadata regardless of the current function status. To grant such a pod access to metadata, rebuild the pod while the function is enabled.

Service Settings

Table 2 Parameters

Request Forwarding

Forwarding mode of a cluster

After a cluster is created, the service forwarding mode cannot be changed. IPVS and iptables are supported. For details, see Comparing iptables and IPVS.

Service CIDR Block

Each Service in a cluster has its own IP address. When creating a CCE cluster, you can specify the Service address range (Service CIDR block). The Service CIDR block cannot overlap with the subnet CIDR block or the container CIDR block, and it is usable only within the cluster.
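
Before creating a cluster, you can check this overlap rule yourself. The following Python sketch is illustrative only; the CIDR values in the example are hypothetical:

```python
from ipaddress import ip_network

def validate_service_cidr(service_cidr: str, other_cidrs: list[str]) -> None:
    """Raise ValueError if the Service CIDR overlaps any other cluster CIDR."""
    svc = ip_network(service_cidr)
    for cidr in other_cidrs:
        if svc.overlaps(ip_network(cidr)):
            raise ValueError(f"Service CIDR {svc} overlaps {cidr}")

# Hypothetical node subnet and container CIDR block for a planned cluster.
validate_service_cidr("10.247.0.0/16", ["192.168.0.0/24", "172.16.0.0/16"])
print("No overlap found")
```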

Service Port Range

NodePort port range

The default port range is 30000 to 32767, and it can be changed to any range within 20106 to 32767. After changing the range, go to the security group page and change the TCP/UDP port range of the node security group from 30000–32767 to the new range. Otherwise, ports outside the default range cannot be accessed from external networks.
NOTE:

If a port number is smaller than 20106, it may conflict with the system health check port, which may make the cluster unavailable. If a port number is greater than 32767, it may conflict with the random ports of the OS, which may affect performance.
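
These constraints can be expressed as a small validation helper. This is a hedged sketch of the rules described above, not a CCE API:

```python
def validate_nodeport_range(low: int, high: int) -> None:
    """Check a custom NodePort range against the limits described above."""
    if low > high:
        raise ValueError("range start must not exceed range end")
    if low < 20106:
        raise ValueError("ports below 20106 may conflict with the system health check port")
    if high > 32767:
        raise ValueError("ports above 32767 may conflict with the random ports of the OS")

validate_nodeport_range(30000, 32767)  # the default range passes
```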

Container CIDR Blocks (Available only in Clusters Using the VPC Network Model)

If a container CIDR block configured during cluster creation cannot meet service expansion requirements, you can add more container CIDR blocks. For details, see Adding a Container CIDR Block for a Cluster.

  • This function is available only for clusters of v1.19 or later using a VPC network.
  • An added container CIDR block cannot be deleted.

Custom Container Network (Available only in CCE Turbo Clusters)

If you want different namespaces or workloads to use different subnet CIDR blocks or security groups, you can create a policy to associate subnets or security groups with namespaces or workloads. For details, see Binding a Subnet and Security Group to a Namespace or Workload Using a Container Network Configuration.

  • Associating a pod with a subnet: The pod IP address is restricted to a specific CIDR block, and the networks of different namespaces or workloads are isolated from each other.
  • Associating a pod with a security group: You can configure security group rules for pods in the same namespace or workload to customize access policies.

This configuration is supported only by CCE Turbo clusters. Pod subnets can be deleted from clusters of v1.23.17-r0, v1.25.12-r0, v1.27.9-r0, v1.28.7-r0, v1.29.3-r0, or later versions.

Container Network Pre-binding Settings (Available only in CCE Turbo Clusters)

A CCE Turbo cluster requests and binds an ENI or sub-ENI for each pod, which allows pods to scale quickly. However, creating and binding an ENI takes time, so pod startup slows down when ENIs are created in large batches. For this reason, dynamic container ENI pre-binding is enabled by default to speed up pod startup while keeping IP address usage high. Cluster-level pre-binding policies take effect globally: cluster nodes pre-bind container ENIs based on the configured policies. To configure a separate pre-binding policy for a group of nodes, enable node pool pre-binding.

This configuration is supported only by CCE Turbo clusters.

All Container ENI Pre-binding

  • After this function is enabled, your cluster nodes will request and bind the maximum number of ENIs supported by the node flavor. For example, if the maximum number of sub-ENIs supported by s7.large.2 nodes is 16, CCE will dynamically pre-bind 16 sub-ENIs to each node of this flavor.
  • After this function is disabled, you can customize the pre-binding parameters on the console.
    Table 3 Parameters of the dynamic ENI pre-binding policy

    nic-minimum-target

    Default value: 10

    Minimum number of container ENIs bound to a node.

    The value must be a positive integer. The value 10 indicates that at least 10 container ENIs are bound to a node. If the value you enter exceeds the container ENI quota of the node, the quota is used instead.

    Suggestion: Configure this parameter based on the number of pods.

    nic-maximum-target

    Default value: 0

    If the number of ENIs bound to a node exceeds the value of nic-maximum-target, the system does not proactively pre-bind ENIs.

    The check on the maximum number of pre-bound ENIs takes effect only when the value of this parameter is greater than or equal to the value of nic-minimum-target. Otherwise, the check is disabled.

    The value must be a non-negative integer. The value 0 indicates that the check on the upper limit of pre-bound container ENIs is disabled. If the value you enter exceeds the container ENI quota of the node, the quota is used instead.

    Suggestion: Configure this parameter based on the number of pods.

    nic-warm-target

    Default value: 2

    Minimum number of pre-bound, idle ENIs kept on a node. The value must be a number.

    If the value of nic-warm-target plus the number of bound ENIs is greater than the value of nic-maximum-target, the system pre-binds only as many ENIs as the difference between nic-maximum-target and the number of bound ENIs.

    Suggestion: Set this parameter to the number of pods that can be scaled out instantaneously within 10 seconds.

    nic-max-above-warm-target

    Default value: 2

    Pre-bound, idle ENIs on a node are unbound and reclaimed only when the number of idle ENIs minus the value of nic-warm-target is greater than the value of this parameter. The value can only be a number.

    • A larger value slows down the reclamation of idle ENIs and accelerates pod startup, but lowers IP address usage, especially when IP addresses are insufficient. Exercise caution when increasing this value.
    • A smaller value accelerates the reclamation of idle ENIs and improves IP address usage, but may slow down the startup of some pods when a large number of pods are created instantaneously.

    Suggestion: Set this parameter based on the difference between the number of pods that are frequently scaled on most nodes within minutes and the number of pods that are instantly scaled out on most nodes within 10 seconds.
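
To make the interaction of these four parameters concrete, the following Python sketch models the pre-binding and reclamation rules as described in Table 3. It is an interpretation of the documented behavior, not the actual logic of the CCE network agent:

```python
def enis_to_prebind(bound: int, in_use: int,
                    nic_minimum_target: int = 10,
                    nic_maximum_target: int = 0,
                    nic_warm_target: int = 2) -> int:
    """Number of additional ENIs a node would pre-bind."""
    # Keep at least nic-minimum-target ENIs bound, and keep
    # nic-warm-target idle ENIs on top of the ENIs already in use.
    target = max(nic_minimum_target, in_use + nic_warm_target)
    # The upper-limit check takes effect only when nic-maximum-target is a
    # positive value not smaller than nic-minimum-target.
    if nic_maximum_target >= nic_minimum_target and nic_maximum_target > 0:
        target = min(target, nic_maximum_target)
    return max(target - bound, 0)

def enis_to_reclaim(bound: int, in_use: int,
                    nic_warm_target: int = 2,
                    nic_max_above_warm_target: int = 2) -> int:
    """Number of idle pre-bound ENIs a node would unbind and reclaim."""
    idle = bound - in_use
    # Reclaim only when idle ENIs exceed nic-warm-target by more than
    # nic-max-above-warm-target; reclaim down to nic-warm-target idle ENIs.
    if idle - nic_warm_target > nic_max_above_warm_target:
        return idle - nic_warm_target
    return 0

print(enis_to_prebind(bound=10, in_use=9))  # 1: keep 2 idle ENIs warm
print(enis_to_reclaim(bound=10, in_use=3))  # 5: 7 idle > 2 + 2, keep 2 idle
```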