Updated on 2024-09-30 GMT+08:00

Modifying Cluster Configurations

Scenario

CCE allows you to manage cluster parameters so that core components work the way your services require.

Procedure

  1. Log in to the CCE console. In the navigation pane, choose Clusters.
  2. Locate the target cluster, click ... to view more operations on the cluster, and choose Manage.

    Figure 1 Configuration management

  3. On the Manage Component page, change the values of the Kubernetes parameters listed in the following tables.

    Table 1 kube-apiserver configurations

    Item

    Parameter

    Description

    Value

    Toleration time for nodes in NotReady state

    default-not-ready-toleration-seconds

    Specifies the default tolerance time for pods running on nodes in the NotReady state. This setting applies to all pods by default. You can configure a different tolerance time for individual pods, in which case the tolerance time configured for the pod takes precedence. For details, see Configuring Tolerance Policies. A pod-level example is provided after this table.

    If the specified tolerance time is too short, pods may be migrated frequently during transient issues such as network jitter. If it is too long, services may remain interrupted for that entire period after a node becomes faulty.

    Default: 300s

    Toleration time for nodes in unreachable state

    default-unreachable-toleration-seconds

    Specifies the default tolerance time for pods running on nodes in the unreachable state. This setting applies to all pods by default. You can configure a different tolerance time for individual pods, in which case the tolerance time configured for the pod takes precedence. For details, see Configuring Tolerance Policies.

    If the specified tolerance time is too short, pods may be migrated frequently during transient issues such as network jitter. If it is too long, services may remain interrupted for that entire period after a node becomes faulty.

    Default: 300s

    Maximum Number of Concurrent Modification API Calls

    max-mutating-requests-inflight

    Maximum number of concurrent mutating requests. When the value of this parameter is exceeded, the server rejects requests.

    The value 0 indicates that there is no limitation on the maximum number of concurrent modification requests. This parameter is related to the cluster scale. You are advised not to change the value.

    Manual configuration is no longer supported since cluster v1.21. The value is automatically specified based on the cluster scale.

    • 200 for clusters with 50 or 200 nodes
    • 500 for clusters with 1000 nodes
    • 1000 for clusters with 2000 nodes

    Maximum Number of Concurrent Non-Modification API Calls

    max-requests-inflight

    Maximum number of concurrent non-mutating requests. When the value of this parameter is exceeded, the server rejects requests.

    The value 0 indicates that there is no limitation on the maximum number of concurrent non-modification requests. This parameter is related to the cluster scale. You are advised not to change the value.

    Manual configuration is no longer supported since cluster v1.21. The value is automatically specified based on the cluster scale.

    • 400 for clusters with 50 or 200 nodes
    • 1000 for clusters with 1000 nodes
    • 2000 for clusters with 2000 nodes

    NodePort port range

    service-node-port-range

    NodePort port range. After changing the value, go to the security group page and update the allowed TCP/UDP port range (30000 to 32767 by default) in the node security groups to match the new range. Otherwise, ports outside the default range cannot be accessed externally.

    If the port number is smaller than 20106, it may conflict with the CCE health check port, which may make the cluster unavailable. If the port number is greater than 32767, it may conflict with the ports in net.ipv4.ip_local_port_range, which may affect network performance.

    Default: 30000 to 32767

    Value range:

    Min > 20105

    Max < 32768

    Overload Control

    support-overload

    Cluster overload control. After this function is enabled, the number of concurrent requests is dynamically adjusted based on the resource pressure on the master nodes to keep the master nodes and the cluster running stably.

    This parameter is available only in clusters of v1.23 or later.

    • false: Overload control is disabled.
    • true: Overload control is enabled.

    Node Restriction Add-on

    enable-admission-plugin-node-restriction

    This add-on allows the kubelet on a node to operate only on objects that belong to that node, which enhances isolation in multi-tenant scenarios or scenarios with high security requirements.

    This parameter is available only in clusters of v1.23.14-r0, v1.25.9-r0, v1.27.6-r0, v1.28.4-r0, or later versions.

    Default: true

    Pod Node Selector Add-on

    enable-admission-plugin-pod-node-selector

    This add-on allows cluster administrators to configure a default node selector through namespace annotations so that pods in a namespace run only on specific nodes, which simplifies configuration.

    This parameter is available only in clusters of v1.23.14-r0, v1.25.9-r0, v1.27.6-r0, v1.28.4-r0, or later versions.

    Default: true

    Pod Toleration Limit Add-on

    enable-admission-plugin-pod-toleration-restriction

    This add-on allows cluster administrators to configure the default value and limits of pod tolerations through namespaces for fine-grained control over pod scheduling and key resource protection.

    This parameter is available only in clusters of v1.23.14-r0, v1.25.9-r0, v1.27.6-r0, v1.28.4-r0, or later versions.

    Default: false

    API Audience Settings

    api-audiences

    Audiences for a service account token. The Kubernetes component for authenticating service account tokens checks whether the token used in an API request specifies authorized audiences.

    Configuration suggestion: Accurately configure audiences according to the communication needs among cluster services. By doing so, the service account token is used for authentication only between authorized services, which enhances security.

    NOTE:

    An incorrect configuration may lead to an authentication communication failure between services or an error during token verification.

    This parameter is available only in clusters of v1.23.16-r0, v1.25.11-r0, v1.27.8-r0, v1.28.6-r0, v1.29.2-r0, or later versions.

    Default value: "https://kubernetes.default.svc.cluster.local"

    Multiple values can be configured, which are separated by commas (,).

    Service Account Token Issuer Identity

    service-account-issuer

    Entity identifier for issuing service account tokens, that is, the value of the iss field in the token payload.

    Configuration suggestion: Ensure that the configured issuer URL can be accessed within the cluster and is trusted by the cluster's authentication system.

    NOTE:

    If your specified issuer URL is untrusted or inaccessible, the authentication process based on the service account may fail.

    This parameter is available only in clusters of v1.23.16-r0, v1.25.11-r0, v1.27.8-r0, v1.28.6-r0, v1.29.2-r0, or later versions.

    Default value: "https://kubernetes.default.svc.cluster.local"

    Multiple values can be configured, which are separated by commas (,).
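
    For example, a workload that needs to fail over faster than the cluster-wide defaults can declare its own tolerations in its pod spec. The following manifest is a minimal sketch (the pod name and image are illustrative only); its tolerationSeconds values take precedence over default-not-ready-toleration-seconds and default-unreachable-toleration-seconds for this pod:

      apiVersion: v1
      kind: Pod
      metadata:
        name: fast-failover-demo          # illustrative name
      spec:
        containers:
        - name: app
          image: nginx:alpine             # illustrative image
        tolerations:
        # Evict this pod 60s (instead of the 300s default) after its node turns NotReady.
        - key: node.kubernetes.io/not-ready
          operator: Exists
          effect: NoExecute
          tolerationSeconds: 60
        # Evict this pod 60s after its node becomes unreachable.
        - key: node.kubernetes.io/unreachable
          operator: Exists
          effect: NoExecute
          tolerationSeconds: 60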

    Table 2 Scheduler configurations

    Item

    Parameter

    Description

    Value

    Default cluster scheduler

    default-scheduler

    • kube-scheduler: provides the standard scheduling capabilities of the Kubernetes community.
    • volcano: compatible with kube-scheduler and provides enhanced scheduling capabilities. For details, see Volcano Scheduling. An example of selecting a scheduler per workload is provided after this table.

    Default: kube-scheduler

    QPS for communicating with kube-apiserver

    kube-api-qps

    QPS for communicating with kube-apiserver.

    • If the number of nodes in a cluster is less than 1000, the default value is 100.
    • If the number of nodes in a cluster is 1000 or more, the default value is 200.

    Burst for communicating with kube-apiserver

    kube-api-burst

    Burst for communicating with kube-apiserver.

    • If the number of nodes in a cluster is less than 1000, the default value is 100.
    • If the number of nodes in a cluster is 1000 or more, the default value is 200.

    Whether to enable GPU sharing

    enable-gpu-share

    Whether to enable GPU sharing. This parameter is supported only in clusters of v1.23.7-r10, v1.25.3-r0, or later versions.

    • When disabled, ensure that pods in the cluster cannot use shared GPUs (no cce.io/gpu-decision annotation in pods) and that GPU virtualization is disabled.
    • When enabled, ensure that there is a cce.io/gpu-decision annotation on all pods that use GPU resources in the cluster.

    Default: true
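
    The default-scheduler setting applies cluster-wide. If you keep kube-scheduler as the default but want an individual workload to be scheduled by Volcano (assuming the Volcano scheduler is installed in the cluster), you can generally set spec.schedulerName in the pod template instead. The following sketch uses illustrative names and an illustrative image:

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: volcano-scheduled-demo      # illustrative name
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: volcano-scheduled-demo
        template:
          metadata:
            labels:
              app: volcano-scheduled-demo
          spec:
            schedulerName: volcano        # ask the Volcano scheduler to place these pods
            containers:
            - name: app
              image: nginx:alpine         # illustrative image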

    Table 3 kube-controller-manager configurations

    Item

    Parameter

    Description

    Value

    Number of concurrent processing of deployment

    concurrent-deployment-syncs

    Number of deployment objects that can be synchronized concurrently

    Default: 5

    Concurrent processing number of endpoint

    concurrent-endpoint-syncs

    Number of endpoint syncing operations that will be done concurrently

    Default: 5

    Concurrent number of garbage collector

    concurrent-gc-syncs

    Number of garbage collector workers that can be synchronized concurrently

    Default: 20

    Number of job objects allowed to sync simultaneously

    concurrent-job-syncs

    Number of job objects that can be synchronized concurrently

    Default: 5

    Number of CronJob objects allowed to sync simultaneously

    concurrent-cron-job-syncs

    Number of scheduled jobs that can be synchronized concurrently

    Default: 5

    Number of concurrent processing of namespace

    concurrent-namespace-syncs

    Number of namespace objects that can be synchronized concurrently

    Default: 10

    Concurrent processing number of replicaset

    concurrent-replicaset-syncs

    Number of replica sets that can be synchronized concurrently

    Default: 5

    Number of concurrent processing of resource quota

    concurrent-resource-quota-syncs

    Number of resource quotas that can be synchronized concurrently

    Default: 5

    Concurrent processing number of service

    concurrent-service-syncs

    Number of services that can be synchronized concurrently

    Default: 10

    Concurrent processing number of serviceaccount-token

    concurrent-serviceaccount-token-syncs

    Number of service account token objects that can be synchronized concurrently

    Default: 5

    Concurrent processing of ttl-after-finished

    concurrent-ttl-after-finished-syncs

    Number of ttl-after-finished-controller workers that can be synchronized concurrently

    Default: 5

    RC

    concurrent_rc_syncs (used in clusters of v1.19 or earlier)

    concurrent-rc-syncs (used in clusters of v1.21 through v1.25.3-r0)

    Number of replication controllers that can be synchronized concurrently

    NOTE:

    This parameter is no longer supported in clusters of v1.25.3-r0 and later versions.

    Default: 5

    Cluster elastic computing period

    horizontal-pod-autoscaler-sync-period

    Period for the horizontal pod autoscaler to perform auto scaling on pods. A smaller value will result in a faster auto scaling response and higher CPU load.

    NOTE:

    Make sure to configure this parameter properly as a lengthy period can cause the controller to respond slowly, while a short period may overload the cluster control plane.

    Default: 15 seconds

    Horizontal Pod Scaling Tolerance

    horizontal-pod-autoscaler-tolerance

    This configuration determines how sensitively the horizontal pod autoscaler responds to metric changes before it acts on scaling policies. If the parameter is set to 0, auto scaling is triggered as soon as the related metrics deviate from their targets (see the example after this table).

    Configuration suggestion: If service resource usage spikes over time, retain a certain tolerance to prevent unexpected scaling in high resource usage scenarios.

    Default: 0.1

    HPA CPU Initialization Period

    horizontal-pod-autoscaler-cpu-initialization-period

    During the period specified by this parameter, the CPU usage data used in HPA calculation is limited to pods that are both ready and have recently had their metrics collected. You can use this parameter to filter out unstable CPU usage data during the early stage of pod startup. This helps prevent incorrect scaling decisions based on momentary peak values.

    Configuration suggestion: If you find that HPA is making incorrect scaling decisions due to CPU usage fluctuations during pod startup, increase the value of this parameter to allow for a buffer period of stable CPU usage.

    NOTE:

    Make sure to configure this parameter properly as a small value may trigger unnecessary scaling based on peak CPU usage, while a large value may cause scaling to be delayed.

    This parameter is available only in clusters of v1.23.16-r0, v1.25.11-r0, v1.27.8-r0, v1.28.6-r0, v1.29.2-r0, or later versions.

    Default: 5 minutes

    HPA Initial Readiness Delay

    horizontal-pod-autoscaler-initial-readiness-delay

    After CPU initialization, this period allows HPA to use a less strict criterion for filtering CPU metrics. During this period, HPA will gather data on the CPU usage of the pod for scaling, regardless of any changes in the pod's readiness status. This parameter ensures continuous tracking of CPU usage, even when the pod status changes frequently.

    Configuration suggestion: If the readiness status of pods fluctuates after startup and you want to prevent HPA misjudgment caused by the fluctuation, increase the value of this parameter to allow HPA to gather more comprehensive CPU usage data.

    NOTE:

    Configure this parameter properly. If it is set to a small value, an unnecessary scale-out may occur due to CPU data fluctuations when the pod enters the ready state. If it is set to a large value, HPA may not be able to make a quick decision when a rapid response is needed.

    This parameter is available only in clusters of v1.23.16-r0, v1.25.11-r0, v1.27.8-r0, v1.28.6-r0, v1.29.2-r0, or later versions.

    Default: 30s

    QPS for communicating with kube-apiserver

    kube-api-qps

    QPS for communicating with kube-apiserver

    • If the number of nodes in a cluster is less than 1000, the default value is 100.
    • If the number of nodes in a cluster is 1000 or more, the default value is 200.

    Burst for communicating with kube-apiserver

    kube-api-burst

    Burst for communicating with kube-apiserver

    • If the number of nodes in a cluster is less than 1000, the default value is 100.
    • If the number of nodes in a cluster is 1000 or more, the default value is 200.

    The maximum number of terminated pods that can be kept before the Pod GC deletes the terminated pod

    terminated-pod-gc-threshold

    Maximum number of terminated pods that can be retained in a cluster. If the number of terminated pods exceeds this threshold, the excess terminated pods are deleted.

    NOTE:

    If this parameter is set to 0, all pods in the terminated state are retained.

    Default: 1000

    Value range: 10 to 12500

    If the cluster version is v1.21.11-r40, v1.23.8-r0, v1.25.6-r0, v1.27.3-r0, or later, the value range is changed to 0 to 100000.

    Unhealthy AZ Threshold

    unhealthy-zone-threshold

    When more than a certain proportion of pods in an AZ are unhealthy, the AZ itself will be considered unhealthy, and scheduling pods to nodes in that AZ will be restricted to limit the impacts of the unhealthy AZ.

    This parameter is available only in clusters of v1.23.14-r0, v1.25.9-r0, v1.27.6-r0, v1.28.4-r0, or later versions.

    NOTE:

    If the parameter is set to a large value, pods in unhealthy AZs will be migrated on a large scale, which may lead to risks such as overloaded clusters.

    Default: 0.55

    Value range: 0 to 1

    Node Eviction Rate

    node-eviction-rate

    This parameter specifies the number of nodes that pods are deleted from per second in a cluster when the AZ is healthy. The default value is 0.1, indicating that pods can be evicted from at most one node every 10 seconds.

    This parameter is available only in clusters of v1.23.14-r0, v1.25.9-r0, v1.27.6-r0, v1.28.4-r0, or later versions.

    NOTE:

    If the parameter is set to a large value, the cluster may be overloaded. Additionally, if too many pods are evicted, they cannot be rescheduled, which will slow down fault recovery.

    Default: 0.1

    Secondary Node Eviction Rate

    secondary-node-eviction-rate

    This parameter specifies the number of nodes that pods are deleted from per second in a cluster when the AZ is unhealthy. The default value is 0.01, indicating that pods can be evicted from at most one node every 100 seconds.

    This parameter is available only in clusters of v1.23.14-r0, v1.25.9-r0, v1.27.6-r0, v1.28.4-r0, or later versions.

    NOTE:

    There is no need to set this parameter to a large value for nodes in an unhealthy AZ; doing so may overload the cluster.

    Default: 0.01

    Configure this parameter together with node-eviction-rate and set it to one-tenth of the node-eviction-rate value.

    Large Cluster Threshold

    large-cluster-size-threshold

    If the number of nodes in a cluster is greater than the value of this parameter, the cluster is considered a large cluster.

    This parameter is available only in clusters of v1.23.14-r0, v1.25.9-r0, v1.27.6-r0, v1.28.4-r0, or later versions.

    NOTE:

    kube-controller-manager automatically adjusts its configurations for large clusters to optimize cluster performance. An excessively small threshold causes small clusters to be treated as large ones, which deteriorates their performance.

    Default: 50

    For clusters with a large number of nodes, configure a value larger than the default for better performance and faster controller responses. Retain the default value for small clusters. Before adjusting this parameter in a production environment, verify the impact of the change on cluster performance in a test environment.
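
    The HPA-related parameters above apply to every HorizontalPodAutoscaler in the cluster. For example, with the default tolerance of 0.1, an HPA targeting 50% average CPU utilization does not scale while the observed average stays roughly between 45% and 55%, and each evaluation occurs once per horizontal-pod-autoscaler-sync-period. A minimal HPA manifest governed by these settings might look as follows (the names and numbers are illustrative only):

      apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: web-hpa                     # illustrative name
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: web                       # illustrative workload
        minReplicas: 2
        maxReplicas: 10
        metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 50      # scaling triggers only when the observed/target
                                          # ratio leaves the [1 - tolerance, 1 + tolerance] band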

    Table 4 Networking components (available only for CCE Turbo clusters)

    Item

    Parameter

    Description

    Value

    The minimum number of network cards bound to the container at the cluster level

    nic-minimum-target

    Minimum number of container ENIs bound to a node

    The parameter value must be a positive integer. The value 10 indicates that at least 10 container ENIs must be bound to a node. If the number you specified exceeds the container ENI quota of the node, the ENI quota will be used.

    Default: 10

    Cluster-level node preheating container NIC upper limit check value

    nic-maximum-target

    After the number of ENIs bound to a node exceeds the nic-maximum-target value, CCE will not proactively pre-bind ENIs.

    Checking the upper limit of pre-bound container ENIs is enabled only when the value of this parameter is greater than or equal to the minimum number of container ENIs (nic-minimum-target) bound to a node.

    The parameter value must be a positive integer. The value 0 indicates that checking the upper limit of pre-bound container ENIs is disabled. If the number you specified exceeds the container ENI quota of the node, the ENI quota will be used.

    Default: 0

    Number of NICs for dynamically warming up containers at the cluster level

    nic-warm-target

    Number of extra ENIs that will be pre-bound after the ENIs specified by nic-minimum-target have been used up by pods on a node. The value can only be a number.

    When the sum of the nic-warm-target value and the number of ENIs bound to the node is greater than the nic-maximum-target value, CCE will pre-bind the number of ENIs specified by the difference between the nic-maximum-target value and the current number of ENIs bound to the node.

    Default: 2

    Cluster-level node warm-up container NIC recycling threshold

    nic-max-above-warm-target

    Pre-bound ENIs are unbound and reclaimed only when the number of idle ENIs on a node minus the nic-warm-target value is greater than this threshold (see the worked example after this table). The value can only be a number.

    • A large value will accelerate pod startup but slow down the unbinding of idle container ENIs and decrease the IP address usage. Exercise caution when performing this operation.
    • A small value will speed up the unbinding of idle container ENIs and increase the IP address usage but will slow down pod startup, especially when a large number of pods increase instantaneously.

    Default: 2

    Low threshold of the number of container ENIs bound to a node in a cluster

    prebound-subeni-percentage

    Low threshold of the number of bound ENIs : high threshold of the number of bound ENIs

    NOTE:

    This parameter is being deprecated. Use the other four dynamic ENI pre-binding parameters instead.

    Default: 0:0
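
    As a worked example of how these pre-binding parameters interact (the figures simply reuse the defaults listed above): with nic-minimum-target=10, nic-warm-target=2, and nic-max-above-warm-target=2, a node starts with 10 pre-bound container ENIs. Once pods have consumed them, CCE keeps pre-binding ENIs so that about 2 idle ENIs remain available (subject to nic-maximum-target when it is non-zero). Idle pre-bound ENIs are unbound and reclaimed only when the number of idle ENIs minus nic-warm-target exceeds nic-max-above-warm-target, that is, when more than 4 ENIs are idle on the node.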

    Table 5 Networking component configurations (supported only by the clusters using a VPC network)

    Item

    Parameter

    Description

    Value

    Retaining the non-masqueraded CIDR block of the original pod IP address

    nonMasqueradeCIDRs

    In a CCE cluster using the VPC network model, when a container needs to access a network outside the cluster, the source pod IP address is masqueraded (through SNAT) as the IP address of the node where the pod resides. After this parameter is configured, the node does not perform SNAT on traffic destined for the specified CIDR blocks, so the original pod IP address is retained. This function is available only in clusters of v1.23.14-r0, v1.25.9-r0, v1.27.6-r0, v1.28.4-r0, or later versions.

    By default, nodes in a cluster do not perform SNAT on packets destined for 10.0.0.0/8, 172.16.0.0/12, or 192.168.0.0/16, which CCE treats as private CIDR blocks. Instead, these packets are forwarded directly over the upper-layer VPC. (The three CIDR blocks are considered internal networks of the cluster and are reachable at Layer 3 by default.)

    Default: 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16

    NOTE:

    To enable cross-node pod access, the CIDR block of the node where the target pod runs must be added.

    Similarly, to enable cross-ECS pod access in a VPC, the CIDR block of the ECS where the target pod runs must be added.
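
    As a purely illustrative example: with the default configuration, a pod accessing 192.168.10.20 keeps its own pod IP address as the source because 192.168.0.0/16 is in the non-masqueraded list, whereas if that CIDR block were removed, the traffic would be SNATed and the destination would see the node IP address instead. This is why, as noted above, the CIDR blocks of the target nodes or ECSs must remain in the list for direct pod-to-pod access.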

    Table 6 Extended controller configurations (supported only by clusters of v1.21 and later)

    Item

    Parameter

    Description

    Value

    Enable resource quota management

    enable-resource-quota

    Indicates whether to automatically create a ResourceQuota when creating a namespace. With quota management, you can control the number of workloads of each type and the upper limits of resources in a namespace or related dimensions.

    • false: Auto creation is disabled.
    • true: Auto creation is enabled. For details about the resource quota defaults, see Configuring Resource Quotas.
      NOTE:

      In high-concurrency scenarios (for example, creating pods in batches), resource quota management may cause some requests to fail due to conflicts. Do not enable this function unless necessary. If you do enable it, ensure that the request client has a retry mechanism.

    Default: false
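
    When enable-resource-quota is set to true, a ResourceQuota object is created in each new namespace. A ResourceQuota is a standard Kubernetes object of the following form; the values below are illustrative only and do not reflect CCE's defaults (for those, see Configuring Resource Quotas):

      apiVersion: v1
      kind: ResourceQuota
      metadata:
        name: default                     # illustrative name
        namespace: dev                    # illustrative namespace
      spec:
        hard:
          pods: "500"                     # at most 500 pods in this namespace
          count/deployments.apps: "100"   # at most 100 Deployments
          count/services: "100"           # at most 100 Services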

  4. Click OK.