Cluster Configuration Management

Updated on 2024-01-26 GMT+08:00

Scenario

CCE allows you to manage cluster parameters, which control how core Kubernetes components (such as kube-apiserver, kube-scheduler, and kube-controller-manager) behave, so that you can tune them to your requirements.

Constraints

This function is supported only in clusters of v1.15 and later. It is not displayed for versions earlier than v1.15.
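If you manage clusters from scripts, you can confirm the control plane version before expecting this option to appear. Below is a minimal sketch, assuming the official Kubernetes Python client is installed and a kubeconfig for the target cluster is available locally; it is not a CCE-specific tool.

```python
# Minimal sketch: verify that the cluster runs v1.15 or later before
# expecting the cluster configuration management option to be available.
# Assumes a kubeconfig for the CCE cluster and the "kubernetes" Python package.
from kubernetes import client, config

config.load_kube_config()                       # or config.load_incluster_config()
info = client.VersionApi().get_code()           # VersionInfo: major/minor as strings
major, minor = int(info.major), int(info.minor.rstrip("+"))
if (major, minor) >= (1, 15):
    print(f"v{major}.{minor}: cluster configuration management is supported")
else:
    print(f"v{major}.{minor}: this option is not displayed for this version")
```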

Procedure

  1. Log in to the CCE console. In the navigation pane, choose Clusters.
  2. Click the settings icon next to the target cluster.
  3. In the Manage Components pane displayed on the right, change the values of the Kubernetes parameters listed in the following tables.

    Table 1 kube-apiserver parameters

    Parameter: default-not-ready-toleration-seconds
    Description: Toleration time when a node is in the NotReady state. By default, this toleration is added to every pod. (A sketch for inspecting these tolerations on a pod follows this table.)
    Value: Default: 300s

    Parameter: default-unreachable-toleration-seconds
    Description: Toleration time when a node is unreachable. By default, this toleration is added to every pod.
    Value: Default: 300s

    Parameter: max-mutating-requests-inflight
    Description: Maximum number of concurrent mutating requests. When this limit is exceeded, the server rejects requests. The value 0 indicates no limit. This parameter is related to the cluster scale, so you are advised not to change it.
    Value: Manual configuration is no longer supported since cluster v1.21. The value is automatically set based on the cluster scale:
    • 200 for clusters with 50 or 200 nodes
    • 500 for clusters with 1,000 nodes
    • 1000 for clusters with 2,000 nodes

    Parameter: max-requests-inflight
    Description: Maximum number of concurrent non-mutating requests. When this limit is exceeded, the server rejects requests. The value 0 indicates no limit. This parameter is related to the cluster scale, so you are advised not to change it.
    Value: Manual configuration is no longer supported since cluster v1.21. The value is automatically set based on the cluster scale:
    • 400 for clusters with 50 or 200 nodes
    • 1000 for clusters with 1,000 nodes
    • 2000 for clusters with 2,000 nodes

    Parameter: service-node-port-range
    Description: NodePort port range. After changing this value, go to the security group page and update the allowed TCP/UDP port range (30000 to 32767 by default) in the node security groups accordingly. Otherwise, ports outside the default range cannot be accessed from outside the cluster.
    Value: Default: 30000-32767. Value range: min > 20105, max < 32768

    Parameter: request-timeout
    Description: Default request timeout of kube-apiserver. Exercise caution when changing this value, and ensure that the new value is appropriate to prevent frequent API timeouts or other errors. This parameter is supported only by clusters of v1.19.16-r30, v1.21.10-r10, v1.23.8-r10, v1.25.3-r10, and later versions.
    Value: Default: 1m0s. Value range: min ≥ 1s, max ≤ 1 hour

    Parameter: feature-gates: ServerSideApply
    Description: Whether to enable Server-Side Apply on kube-apiserver. For details, see Server-Side Apply. This parameter is supported only by clusters of v1.19.16-r30, v1.21.10-r10, v1.23.8-r10, v1.25.3-r10, and later versions.
    Value: Default: true

    Parameter: support-overload
    Description: Cluster overload control. If enabled, concurrent requests are dynamically throttled based on the resource pressure of the master nodes to keep them and the cluster available. This parameter is supported only by clusters of v1.23 or later.
    Value:
    • false: Overload control is disabled.
    • true: Overload control is enabled.
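The default-not-ready-toleration-seconds and default-unreachable-toleration-seconds values above surface on workloads as tolerations for the node.kubernetes.io/not-ready and node.kubernetes.io/unreachable taints. The following is a minimal sketch, assuming the Kubernetes Python client; the pod and namespace names are placeholders.

```python
# Minimal sketch: print the NotReady/unreachable tolerations that the API
# server injected into a pod; their tolerationSeconds reflect the
# default-not-ready-toleration-seconds / default-unreachable-toleration-seconds
# settings. "my-pod" and "default" are placeholders.
from kubernetes import client, config

config.load_kube_config()
pod = client.CoreV1Api().read_namespaced_pod("my-pod", "default")

for t in pod.spec.tolerations or []:
    if t.key in ("node.kubernetes.io/not-ready", "node.kubernetes.io/unreachable"):
        print(f"{t.key}: effect={t.effect}, tolerationSeconds={t.toleration_seconds}")
```
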
    Table 2 kube-scheduler parameters

    Parameter: kube-api-qps
    Description: Queries per second (QPS) to use when communicating with kube-apiserver.
    Value:
    • If the cluster has fewer than 1,000 nodes, the default value is 100.
    • If the cluster has 1,000 or more nodes, the default value is 200.

    Parameter: kube-api-burst
    Description: Burst to use when communicating with kube-apiserver.
    Value:
    • If the cluster has fewer than 1,000 nodes, the default value is 100.
    • If the cluster has 1,000 or more nodes, the default value is 200.

    Parameter: enable-gpu-share
    Description: Whether to enable GPU sharing. This parameter is supported only by clusters of v1.23.7-r10, v1.25.3-r0, and later.
    • When disabled, ensure that no pod in the cluster uses shared GPUs (that is, no pod carries the cce.io/gpu-decision annotation).
    • When enabled, ensure that every pod that uses GPU resources in the cluster carries the cce.io/gpu-decision annotation. (A sketch for checking this annotation follows this table.)
    Value: Default: true
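Before switching enable-gpu-share, you may want to confirm which GPU pods carry the cce.io/gpu-decision annotation mentioned above. Below is a minimal, read-only sketch using the Kubernetes Python client; the "gpu" substring match on resource names is a loose heuristic used only for this example, not a CCE API.

```python
# Minimal sketch: report whether pods that request GPU resources carry the
# cce.io/gpu-decision annotation. Read-only; the "gpu" substring match on
# resource names is a heuristic for illustration only.
from kubernetes import client, config

config.load_kube_config()
for pod in client.CoreV1Api().list_pod_for_all_namespaces().items:
    requested = {}
    for c in pod.spec.containers:
        if c.resources and c.resources.requests:
            requested.update(c.resources.requests)
    if any("gpu" in name for name in requested):      # e.g. nvidia.com/gpu
        has_decision = "cce.io/gpu-decision" in (pod.metadata.annotations or {})
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: gpu-decision={has_decision}")
```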

    Table 3 kube-controller-manager parameters

    Parameter: concurrent-deployment-syncs
    Description: Number of Deployments that are allowed to synchronize concurrently.
    Value: Default: 5

    Parameter: concurrent-endpoint-syncs
    Description: Number of endpoints that are allowed to synchronize concurrently.
    Value: Default: 5

    Parameter: concurrent-gc-syncs
    Description: Number of garbage collector workers that are allowed to synchronize concurrently.
    Value: Default: 20

    Parameter: concurrent-job-syncs
    Description: Number of jobs that are allowed to synchronize concurrently.
    Value: Default: 5

    Parameter: concurrent-namespace-syncs
    Description: Number of namespaces that are allowed to synchronize concurrently.
    Value: Default: 10

    Parameter: concurrent-replicaset-syncs
    Description: Number of ReplicaSets that are allowed to synchronize concurrently.
    Value: Default: 5

    Parameter: concurrent-resource-quota-syncs
    Description: Number of resource quotas that are allowed to synchronize concurrently.
    Value: Default: 5

    Parameter: concurrent-service-syncs
    Description: Number of Services that are allowed to synchronize concurrently.
    Value: Default: 10

    Parameter: concurrent-serviceaccount-token-syncs
    Description: Number of service account tokens that are allowed to synchronize concurrently.
    Value: Default: 5

    Parameter: concurrent-ttl-after-finished-syncs
    Description: Number of TTL-after-finished controller workers that are allowed to synchronize concurrently.
    Value: Default: 5

    Parameter: concurrent-rc-syncs
    Description: Number of replication controllers that are allowed to synchronize concurrently.
    NOTE: This parameter is used only in clusters of v1.21 to v1.23. In clusters of v1.25 and later, it is deprecated (officially deprecated from v1.25.3-r0 on).
    Value: Default: 5

    Parameter: horizontal-pod-autoscaler-sync-period
    Description: How often the horizontal pod autoscaler (HPA) audits metrics in a cluster.
    Value: Default: 15 seconds

    Parameter: kube-api-qps
    Description: Queries per second (QPS) to use when communicating with kube-apiserver.
    Value:
    • If the cluster has fewer than 1,000 nodes, the default value is 100.
    • If the cluster has 1,000 or more nodes, the default value is 200.

    Parameter: kube-api-burst
    Description: Burst to use when communicating with kube-apiserver.
    Value:
    • If the cluster has fewer than 1,000 nodes, the default value is 100.
    • If the cluster has 1,000 or more nodes, the default value is 200.

    Parameter: terminated-pod-gc-threshold
    Description: Number of terminated pods that can exist before the terminated pod garbage collector starts deleting them. If the value is less than or equal to 0, the terminated pod garbage collector is disabled. (A sketch for counting terminated pods follows this table.)
    Value: Default: 1000
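To gauge how close a cluster is to terminated-pod-gc-threshold, you can count pods that are already in a terminal phase. The following is a minimal, read-only sketch with the Kubernetes Python client; it is an illustration, not part of the CCE console workflow.

```python
# Minimal sketch: count pods already in a terminal phase. When this count
# reaches terminated-pod-gc-threshold, kube-controller-manager's pod garbage
# collector starts deleting terminated pods.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
succeeded = len(v1.list_pod_for_all_namespaces(field_selector="status.phase=Succeeded").items)
failed = len(v1.list_pod_for_all_namespaces(field_selector="status.phase=Failed").items)
print(f"Terminated pods: {succeeded + failed} (Succeeded={succeeded}, Failed={failed})")
```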

    Table 4 eni parameters (supported only by CCE Turbo clusters)

    Parameter: nic-minimum-target
    Description: Minimum number of ENIs bound to a node at the cluster level.
    Value: Default: 10

    Parameter: nic-maximum-target
    Description: Maximum number of ENIs pre-bound to a node at the cluster level.
    Value: Default: 0

    Parameter: nic-warm-target
    Description: Number of ENIs pre-bound to a node at the cluster level.
    Value: Default: 2

    Parameter: nic-max-above-warm-target
    Description: Reclaim number of ENIs pre-bound to a node at the cluster level. (An illustrative sketch of how these four thresholds interact follows this table.)
    Value: Default: 2
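The four ENI parameters above work together as pre-binding and reclamation thresholds. The snippet below is only an illustrative model of how such thresholds could combine, under the assumptions that the warm pool is sized relative to the ENIs currently in use and that a nic-maximum-target of 0 means no upper cap; it is not CCE's actual binding algorithm, and the function name is made up for this example.

```python
# Illustrative model only, NOT CCE's real algorithm: given the ENIs bound to
# a node and how many are in use, estimate how many would be pre-bound or
# reclaimed to respect the documented thresholds. Treats nic_maximum_target=0
# as "no upper cap" (assumption).
def eni_prebind_delta(bound, in_use,
                      nic_minimum_target=10, nic_maximum_target=0,
                      nic_warm_target=2, nic_max_above_warm_target=2):
    idle = bound - in_use
    # Target pool size: at least the minimum, plus a warm pool above current use.
    target = max(nic_minimum_target, in_use + nic_warm_target)
    if nic_maximum_target > 0:
        target = min(target, nic_maximum_target)
    if bound < target:
        return target - bound          # positive: pre-bind this many more ENIs
    # Reclaim only when idle ENIs exceed the warm target by the allowed margin,
    # and never shrink below the target computed above.
    if idle > nic_warm_target + nic_max_above_warm_target:
        return -(bound - target)       # negative: unbind this many ENIs
    return 0

# Example: 12 ENIs bound, 3 in use -> 9 idle > 2 + 2, so shrink to the
# minimum target of 10, i.e. reclaim 2 ENIs.
print(eni_prebind_delta(bound=12, in_use=3))   # -2
```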

    Table 5 Extended controller configuration parameters (supported only by clusters of v1.21 and later)

    Parameter: enable-resource-quota
    Description: Whether to automatically create a ResourceQuota object when a namespace is created. (A sketch for checking a namespace's quotas follows this table.)
    • false: no automatic creation
    • true: automatic creation enabled. For details about the resource quota defaults, see Setting a Resource Quota.
    Value: Default: false
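When enable-resource-quota is set to true, each newly created namespace receives a ResourceQuota object. The following is a minimal, read-only sketch with the Kubernetes Python client; "my-namespace" is a placeholder.

```python
# Minimal sketch: check whether a namespace received a ResourceQuota object
# (expected when enable-resource-quota is true). "my-namespace" is a placeholder.
from kubernetes import client, config

config.load_kube_config()
quotas = client.CoreV1Api().list_namespaced_resource_quota("my-namespace").items
if not quotas:
    print("No ResourceQuota found in this namespace")
for q in quotas:
    print(q.metadata.name, dict(q.spec.hard or {}))
```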

  4. Click OK.
