
Configuring QoS Rate Limiting for Inter-Pod Access

Scenario

Containers deployed on the same node compete for bandwidth, which may cause service jitter. You can configure QoS rate limiting for inter-pod access to prevent this problem.

Notes and Constraints

You can configure rate limiting for the container tunnel network, VPC network, and Cloud Native 2.0 Network models. The latter two models are subject to the following restrictions:

  • Only clusters later than v1.19.10 are supported.
  • Only common containers (runC as the container runtime) are supported. Secure containers (Kata as the container runtime) are not supported.
  • Only the rate of pod-to-pod access is limited. Access to nodes and external access are not affected.
  • The bandwidth limit cannot exceed the smaller of the maximum bandwidth supported by the server model and 4.3 Gbit/s.
  • Currently, rate limits can be specified only in Mbit/s or larger units.

Procedure

You can add annotations to a pod to specify its egress and ingress bandwidth.

apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/ingress-bandwidth: 100M
    kubernetes.io/egress-bandwidth: 100M
...
  • kubernetes.io/ingress-bandwidth: ingress bandwidth of the pod
  • kubernetes.io/egress-bandwidth: egress bandwidth of the pod

If these two annotations are not specified, the pod's bandwidth is not limited.
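
For reference, the following is a minimal Deployment sketch that sets both annotations on the pod template. The workload name, labels, and nginx image are placeholders, and the 100M values follow the snippet above.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-rate-limit            # Example workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-rate-limit
  template:
    metadata:
      labels:
        app: nginx-rate-limit
      annotations:
        kubernetes.io/ingress-bandwidth: 100M   # Ingress limit for each pod (Mbit/s level)
        kubernetes.io/egress-bandwidth: 100M    # Egress limit for each pod (Mbit/s level)
    spec:
      containers:
      - name: container-0
        image: nginx:latest                     # Example image
        resources:
          requests:
            cpu: 250m
            memory: 512Mi
          limits:
            cpu: 250m
            memory: 512Mi

After the pods are created, you can confirm that the annotations were applied, for example with kubectl get pod <pod-name> -o jsonpath='{.metadata.annotations}'.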