
Cloud Native 2.0 Network Model

Model Definition

The Cloud Native 2.0 network model is a proprietary, next-generation container network model built on the elastic network interfaces (ENIs) and supplementary network interfaces (sub-ENIs) of Virtual Private Cloud (VPC). An ENI or sub-ENI is bound directly to each pod, so every pod has its own IP address within the VPC. The model also supports features such as ELB passthrough to containers, binding pods to security groups, and binding pods to EIPs. Because no container tunnel encapsulation or NAT is required, the Cloud Native 2.0 network model delivers higher network performance than the container tunnel network model and the VPC network model.
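
For example, a security group can be associated with the pods of a workload in a CCE Turbo cluster by creating a SecurityGroup custom resource. The following is only a minimal sketch: the API version, field names, and the security group ID are illustrative assumptions and should be verified against the CCE documentation for your cluster version.

    # Bind the security group listed below to all pods whose labels match podSelector.
    apiVersion: crd.yangtse.cni/v1
    kind: SecurityGroup
    metadata:
      name: example-sg
      namespace: default
    spec:
      podSelector:
        matchLabels:
          app: example
      securityGroups:
        - id: 0d59b4e6-xxxx-xxxx-xxxx-xxxxxxxxxxxx    # placeholder security group ID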

Figure 1 Cloud Native 2.0 network model

In a cluster using the Cloud Native 2.0 network model, pods rely on ENIs or sub-ENIs to connect to external networks.

  • Pods running on BMS nodes use ENIs.
  • Pods running on ECS nodes use sub-ENIs, which are attached to the ENIs of the ECSs through VLAN sub-interfaces.
  • Each pod must have an ENI or sub-ENI bound to it before it can run. The number of pods that can run on a node therefore depends on the number of ENIs or sub-ENIs that can be bound to the node and the number of ENI ports available on the node.
  • Traffic between pods on the same node, traffic between pods on different nodes, and traffic from pods to networks outside the cluster are all forwarded through the ENI or sub-ENI in the VPC.

Notes and Constraints

This network model is available only to CCE Turbo clusters.

Advantages and Disadvantages

Advantages

  • VPCs serve as the foundation for constructing container networks. Every pod has its own network interface and IP address, which simplifies network troubleshooting and improves performance.
  • In the same VPC, ENIs are directly bound to pods in a cluster, so that resources outside the cluster can directly communicate with containers within the cluster.

    Similarly, if the VPC is accessible to other VPCs or data centers, resources in other VPCs or data centers can directly communicate with containers in the cluster, provided there are no conflicts between the network CIDR blocks.

  • The load balancing, security group, and EIP capabilities provided by the VPC can be used by pods directly, as shown in the example below.
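
As an illustration of the load balancing capability, a LoadBalancer Service can forward traffic from a dedicated ELB instance directly to pod ENIs. The following is a minimal sketch, assuming a dedicated load balancer already exists; the ELB ID is a placeholder, and the annotation names should be verified against the CCE Service documentation.

    apiVersion: v1
    kind: Service
    metadata:
      name: example-lb
      namespace: default
      annotations:
        kubernetes.io/elb.class: performance                        # dedicated load balancer
        kubernetes.io/elb.id: 3c7caa5a-xxxx-xxxx-xxxx-xxxxxxxxxxxx  # placeholder ID of an existing ELB instance
    spec:
      type: LoadBalancer
      selector:
        app: example          # forwards traffic to the pods of the example workload
      ports:
        - name: http
          port: 80            # port exposed on the load balancer
          targetPort: 80      # container port
          protocol: TCP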

Disadvantages

Container networks are built on VPCs, with each pod receiving an IP address from the VPC CIDR block. As a result, it is crucial to plan the container CIDR block carefully before creating a cluster.

Application Scenarios

  • High performance requirements: Cloud Native 2.0 networks use VPC networks to construct container networks, eliminating the need for tunnel encapsulation or NAT when containers communicate. This makes Cloud Native 2.0 networks ideal for scenarios that demand high bandwidth and low latency, such as live streaming and e-commerce flash sales.
  • Large-scale networking: Cloud Native 2.0 networks support a maximum of 2,000 ECS nodes and 100,000 pods.

Recommendation for CIDR Block Planning

As explained in Cluster Network Structure, network addresses in a cluster are divided into the cluster network, container network, and service network. When planning network addresses, consider the following factors:

  • None of the subnets (including extended subnets) in the VPC where the cluster resides can overlap with the Service CIDR block.
  • Ensure that each CIDR block has sufficient IP addresses.
    • The IP addresses in the cluster CIDR block must match the cluster scale. Otherwise, nodes cannot be created due to insufficient IP addresses.
    • The IP addresses in the container CIDR block must match the service scale. Otherwise, pods cannot be created due to insufficient IP addresses.

In the Cloud Native 2.0 network model, the container CIDR block and the node CIDR block share the IP addresses of the VPC. It is recommended that containers and nodes use different subnets. Otherwise, containers or nodes may fail to be created due to insufficient IP addresses.

In addition, a subnet can be added to the container CIDR block after a cluster is created to increase the number of available IP addresses. In this case, ensure that the added subnet does not conflict with other subnets in the container CIDR block.
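
For example, assume the VPC where the cluster resides uses 192.168.0.0/16. One workable plan is to use 192.168.0.0/18 as the node subnet, 192.168.64.0/18 as the container subnet, and the default 10.247.0.0/16 as the Service CIDR block. The two VPC subnets do not overlap with each other or with the Service CIDR block, and the remaining VPC address space (for example, 192.168.128.0/18) can be added to the container CIDR block later if more pod IP addresses are needed. These addresses are only an illustration; adjust the subnet masks to your actual cluster and service scale.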

Example of Cloud Native 2.0 Network Access

In this example, a CCE Turbo cluster is created, and the cluster contains three ECS nodes.

You can view the basic information about a node on the ECS console. Each node has a primary network interface and an extended network interface bound to it, and both are ENIs. The IP address of the extended network interface belongs to the container CIDR block, and the sub-ENIs for the pods on the node are attached to this extended network interface.
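
If you log in to the node, you can also list its network interfaces from the operating system. The following is a minimal sketch, assuming a standard Linux node image; the interface names are examples and may differ on your node.

    # The primary NIC (for example, eth0) carries the node IP address.
    # The extended NIC (for example, eth1) carries an IP address from the
    # container CIDR block and hosts the sub-ENIs used by the pods on this node.
    ip -brief addr show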

The following is an example of creating a workload in a cluster using the Cloud Native 2.0 network model:

  1. Use kubectl to access the cluster. For details, see Connecting to a Cluster Using kubectl.
  2. Create a Deployment in the cluster.

    Create the deployment.yaml file. The following shows an example:

    kind: Deployment
    apiVersion: apps/v1
    metadata:
      name: example
      namespace: default
    spec:
      replicas: 6
      selector:
        matchLabels:
          app: example
      template:
        metadata:
          labels:
            app: example
        spec:
          containers:
            - name: container-0
              image: 'nginx:perl'
              resources:
                limits:
                  cpu: 250m
                  memory: 512Mi
                requests:
                  cpu: 250m
                  memory: 512Mi
          imagePullSecrets:
            - name: default-secret

    Create the workload.

    kubectl apply -f deployment.yaml

  3. Check the running pods.

    kubectl get pod -o wide

    Command output:

    NAME                       READY   STATUS    RESTARTS   AGE   IP            NODE         NOMINATED NODE   READINESS GATES
    example-5bdc5699b7-54v7g   1/1     Running   0          7s    10.1.18.2     10.1.0.167   <none>           <none>
    example-5bdc5699b7-6dzx5   1/1     Running   0          7s    10.1.18.216   10.1.0.186   <none>           <none>
    example-5bdc5699b7-gq7xs   1/1     Running   0          7s    10.1.16.63    10.1.0.144   <none>           <none>
    example-5bdc5699b7-h9rvb   1/1     Running   0          7s    10.1.16.125   10.1.0.167   <none>           <none>
    example-5bdc5699b7-s9fts   1/1     Running   0          7s    10.1.16.89    10.1.0.144   <none>           <none>
    example-5bdc5699b7-swq6q   1/1     Running   0          7s    10.1.17.111   10.1.0.167   <none>           <none>

    The IP address of each pod belongs to a sub-ENI, and these sub-ENIs are bound to the extended network interface (an ENI) of the node.

    For example, the IP address of the extended network interface of node 10.1.0.167 is 10.1.17.172. On the network interfaces console, you can see that three sub-ENIs are bound to this extended network interface, and their IP addresses are the IP addresses of the pods running on that node.
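
    To cross-check this from the cluster side, you can list the pods scheduled to that node by using a standard kubectl field selector. The node name is taken from the output above.

    kubectl get pod -o wide --field-selector spec.nodeName=10.1.0.167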

  4. Log in to an ECS in the same VPC and access the IP address of a pod from outside the cluster.

    In this example, the accessed pod IP address is 10.1.18.2.

    curl 10.1.18.2

    If the following information is displayed, the workload can be properly accessed:

    <!DOCTYPE html>
    <html>
    <head>
    <title>Welcome to nginx!</title>
    <style>
        body {
            width: 35em;
            margin: 0 auto;
            font-family: Tahoma, Verdana, Arial, sans-serif;
        }
    </style>
    </head>
    <body>
    <h1>Welcome to nginx!</h1>
    <p>If you see this page, the nginx web server is successfully installed and
    working. Further configuration is required.</p>
    
    <p>For online documentation and support please refer to
    <a href="http://nginx.org/">nginx.org</a>.<br/>
    Commercial support is available at
    <a href="http://nginx.com/">nginx.com</a>.</p>
    
    <p><em>Thank you for using nginx.</em></p>
    </body>
    </html>