Updated on 2026-01-05 GMT+08:00

How Do I Restore a Faulty Container Network Interface?

CCE Turbo clusters use Cloud Native Network 2.0. In such a cluster, each pod is allocated a network interface during its creation. Restoration steps depend on the error events.

Error "exit status -1" Reported During Pod Creation

This error indicates that the network configuration set by the container network add-on for the pod is incorrect. The root cause is that the node is overloaded, or that an ip or iptables command fails while a large number of pods are being created concurrently on a single node. This event does not need to be handled: kubelet automatically retries, and the pod eventually starts properly. If the pod keeps reporting this error and cannot start, submit a service ticket and contact O&M personnel.
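One way to confirm that kubelet is still retrying is to check the pod's events. A minimal sketch; the helper function, pod name, and namespace below are assumptions, not part of CCE:

```shell
# show_events: print everything from the Events: section of
# `kubectl describe pod` output, where the "exit status -1" warnings
# and kubelet's automatic retries appear.
show_events() {
    sed -n '/^Events:/,$p'
}

# Usage (replace the namespace and pod name with your own):
#   kubectl -n default describe pod my-app-pod | show_events
```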

Error "timed out waiting for the condition [arping timeout]" Reported During Pod Creation

This error indicates that the network interface allocated to the pod cannot access the subnet gateway of the pod using arping. The root cause is a fault in the underlying VPC network. If the VPC network is not restored within 10 minutes, CCE will allocate a new network interface to the pod. To restore the pod quickly, rebuild the pod. If the error persists, submit a service ticket and contact O&M personnel.

Error "timed out waiting for the condition [vlan is exist]" Reported During Pod Creation

This error occurs because a VLAN sub-interface with the same VLAN ID already exists on the node, so the VLAN sub-interface for the supplementary network interface cannot be created. Ensure that you did not create a duplicate VLAN sub-interface. To restore the pod quickly, rebuild the pod. If the error persists, submit a service ticket and contact O&M personnel.
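Before rebuilding, you can check the node for duplicate VLAN IDs among its sub-interfaces. A sketch assuming the `vlan protocol ... id N` text that `ip -d link show` prints for VLAN sub-interfaces; the helper itself is hypothetical:

```shell
# find_vlan_ids: extract VLAN IDs from `ip -d link show` output and
# print only the IDs that occur more than once, i.e. the duplicates.
find_vlan_ids() {
    grep -o 'vlan protocol [^ ]* id [0-9]*' | awk '{print $NF}' | sort | uniq -d
}

# Usage on the node (any duplicate VLAN IDs are printed):
#   ip -d link show | find_vlan_ids
```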

Error "timed out waiting for the condition [port unavailable]" Reported During Pod Creation

This error indicates that the network interface allocated to the pod cannot be found on the node. The root cause is a fault in the underlying BMS network. To restore the pod quickly, rebuild the pod. If the error persists, submit a service ticket and contact O&M personnel.

Error "no eni bound to pod" Reported During Pod Creation

This error indicates that no network interface is allocated to the pod. You need to check other events on the pod to further locate the fault.

  • Error "[insufficient IP addresses in subnets]"

    This error occurs because the IP addresses of the subnet configured by the container network for the pods are exhausted, or no subnet is configured. To solve the problem:

    1. Log in to the CCE console and click the cluster name to access the cluster console.
    2. In the Networking Configuration area, check the container subnets and whether they have enough available IP addresses.
    3. If no IP address is available, click Edit and select a container subnet in the same VPC. You can add multiple container subnets at a time. If no other subnets are available, create one on the VPC console.
  • Error "[insufficient NICs on node]"

    This error occurs because the number of network interfaces associated with the node has reached the maximum allowed by the node specifications. The root cause is that network interfaces you associated with the node are occupying the quota. If you need to associate extra network interfaces, modify the maxPods setting of the node pool. For details, see Modifying Node Core Component Settings in a Node Pool.

  • Error "[insufficient secgroup referred by nic]"

    This error occurs because the number of network interfaces associated with the security group used by the pod exceeds the upper limit. Pre-bound network interfaces on the node also use the security group of default-network. Properly configure the dynamic pre-binding of network interfaces to prevent excessive pre-bound network interfaces from occupying the quota of network interfaces associated with the security group. For details, see Pre-Binding Container Elastic Network Interfaces for CCE Turbo Clusters.

    If the number of pods exceeds the maximum network interfaces per security group, configure different security groups for different node pools. For details, see Modifying Node Core Component Settings in a Node Pool.

  • Error "Security group xxx does not exist"

    This error occurs because the security group ID in the container network does not exist. To solve the problem:

    1. Log in to the CCE console and click the cluster name to access the cluster console.
    2. On the Settings page, click the Network tab.
    3. In the Container Network area, update the security group.
    4. After the update is complete, rebuild the pod.
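All of these sub-errors surface as pod events, so a quick way to see which one applies is to filter the `kubectl describe` output for the known messages. A sketch with a hypothetical helper and pod name:

```shell
# eni_error: print event lines matching the known sub-errors that
# accompany "no eni bound to pod" (patterns taken from this FAQ).
eni_error() {
    grep -E 'insufficient IP addresses in subnets|insufficient NICs on node|insufficient secgroup referred by nic|does not exist'
}

# Usage: kubectl -n default describe pod my-app-pod | eni_error
```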

Pod Created and Started Successfully but Network Abnormal Later

If a pod is created and started but the network disconnects later, possible causes include:

  • Security group or network ACL rules deny network access. Verify that the security group and ACL rules allow the required traffic.
  • From inside the container, run arping against the pod's subnet gateway:
    arping -I eth0 <$pod-subnet-gateway>

    If this command fails, the container network of the pod on the node is deleted by mistake or the underlying VPC network is abnormal. To restore the pod, rebuild the pod or submit a service ticket and contact O&M personnel.

  • IP address reuse between an old node and a new one triggers a known community issue. If a DaemonSet pod is created before a new node in a CCE Turbo cluster is ready, the network interface allocated to the pod may be deleted. In this case, rebuild the pod.
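The arping gateway check above can be wrapped so that its result is easy to read in scripts. A sketch; the helper name and messages are assumptions, not CCE output:

```shell
# verdict: translate the exit status of the arping gateway check into
# a human-readable result for the container-network diagnosis above.
verdict() {
    if [ "$1" -eq 0 ]; then
        echo "gateway reachable: container network looks intact"
    else
        echo "gateway unreachable: rebuild the pod or contact O&M"
    fi
}

# Run inside the container (interface and gateway are placeholders):
#   arping -c 3 -I eth0 <$pod-subnet-gateway>; verdict $?
```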