CCE Cluster Autoscaler Release History

Updated on 2025-02-22 GMT+08:00
Table 1 Release history for the add-on adapted to clusters v1.31

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
| --- | --- | --- | --- |
| 1.31.8 | v1.31 | CCE clusters v1.31 are supported. | 1.31.1 |

Table 2 Release history for the add-on adapted to clusters v1.30

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
| --- | --- | --- | --- |
| 1.30.46 | v1.30 | Fixed some issues. | 1.30.1 |
| 1.30.19 | v1.30 | Fixed some issues. | 1.30.1 |
| 1.30.18 | v1.30 | Fixed some issues. | 1.30.1 |
| 1.30.15 | v1.30 | • Clusters v1.30 are supported.<br>• Added the name of the target node pool to the events. | 1.30.1 |

Table 3 Release history for the add-on adapted to clusters v1.29

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
| --- | --- | --- | --- |
| 1.29.79 | v1.29 | Fixed some issues. | 1.29.1 |
| 1.29.54 | v1.29 | Fixed some issues. | 1.29.1 |
| 1.29.53 | v1.29 | Fixed some issues. | 1.29.1 |
| 1.29.50 | v1.29 | Added the name of the target node pool to the events. | 1.29.1 |
| 1.29.17 | v1.29 | Optimized events. | 1.29.1 |
| 1.29.13 | v1.29 | Clusters v1.29 are supported. | 1.29.1 |

Table 4 Release history for the add-on adapted to clusters v1.28

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
| --- | --- | --- | --- |
| 1.28.118 | v1.28 | Fixed some issues. | 1.28.1 |
| 1.28.93 | v1.28 | Fixed some issues. | 1.28.1 |
| 1.28.92 | v1.28 | Fixed some issues. | 1.28.1 |
| 1.28.91 | v1.28 | Fixed some issues. | 1.28.1 |
| 1.28.88 | v1.28 | Added the name of the target node pool to the events. | 1.28.1 |
| 1.28.55 | v1.28 | Optimized events. | 1.28.1 |
| 1.28.51 | v1.28 | Optimized the logic for generating alarms when resources in a node pool are sold out. | 1.28.1 |
| 1.28.22 | v1.28 | Fixed some issues. | 1.28.1 |
| 1.28.20 | v1.28 | Fixed some issues. | 1.28.1 |
| 1.28.17 | v1.28 | Fixed the issue that scale-in cannot be performed when there are custom pod controllers in a cluster. | 1.28.1 |

Table 5 Release history for the add-on adapted to clusters v1.27

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
| --- | --- | --- | --- |
| 1.27.149 | v1.27 | Fixed some issues. | 1.27.1 |
| 1.27.124 | v1.27 | Fixed some issues. | 1.27.1 |
| 1.27.123 | v1.27 | Fixed some issues. | 1.27.1 |
| 1.27.122 | v1.27 | Fixed some issues. | 1.27.1 |
| 1.27.119 | v1.27 | Added the name of the target node pool to the events. | 1.27.1 |
| 1.27.88 | v1.27 | Optimized events. | 1.27.1 |
| 1.27.84 | v1.27 | Optimized the logic for generating alarms when resources in a node pool are sold out. | 1.27.1 |
| 1.27.55 | v1.27 | Fixed some issues. | 1.27.1 |
| 1.27.53 | v1.27 | Fixed some issues. | 1.27.1 |
| 1.27.51 | v1.27 | Fixed some issues. | 1.27.1 |
| 1.27.14 | v1.27 | Fixed the scale-in failure of nodes with different specifications in the same node pool and fixed unexpected PreferNoSchedule taint issues. | 1.27.1 |
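Several releases above fix unexpected PreferNoSchedule taint issues. For context, PreferNoSchedule is the soft taint effect in Kubernetes: the scheduler tries to avoid the tainted node but may still place pods there, and running pods are never evicted because of it. A minimal sketch follows; the node, pod, key, and image names are hypothetical, not taken from this document:

```yaml
# Hypothetical node with a soft taint: the scheduler prefers other nodes
# unless no better placement exists.
apiVersion: v1
kind: Node
metadata:
  name: example-node
spec:
  taints:
    - key: dedicated
      value: gpu
      effect: PreferNoSchedule
---
# A pod with a matching toleration is scheduled onto the node without penalty.
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
    - name: app
      image: nginx        # hypothetical image
  tolerations:
    - key: dedicated
      operator: Equal
      value: gpu
      effect: PreferNoSchedule
```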

Table 6 Release history for the add-on adapted to clusters v1.25

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
| --- | --- | --- | --- |
| 1.25.179 | v1.25 | Fixed some issues. | 1.25.0 |
| 1.25.154 | v1.25 | Fixed some issues. | 1.25.0 |
| 1.25.153 | v1.25 | Fixed some issues. | 1.25.0 |
| 1.25.152 | v1.25 | Added the name of the target node pool to the events. | 1.25.0 |
| 1.25.120 | v1.25 | Optimized events. | 1.25.0 |
| 1.25.116 | v1.25 | Optimized the logic for generating alarms when resources in a node pool are sold out. | 1.25.0 |
| 1.25.88 | v1.25 | Fixed some issues. | 1.25.0 |
| 1.25.86 | v1.25 | Fixed some issues. | 1.25.0 |
| 1.25.84 | v1.25 | Fixed some issues. | 1.25.0 |
| 1.25.46 | v1.25 | Fixed the scale-in failure of nodes with different specifications in the same node pool and fixed unexpected PreferNoSchedule taint issues. | 1.25.0 |
| 1.25.34 | v1.25 | • Optimized the method of identifying GPUs and NPUs.<br>• Used the remaining node quota of a cluster for the extra nodes that are beyond the cluster scale. | 1.25.0 |
| 1.25.21 | v1.25 | • Fixed the issue that the autoscaler's least-waste option is disabled by default.<br>• Fixed the issue that, after a scale-out failure in one node pool, scale-out cannot switch to another node pool until the add-on restarts.<br>• Changed the default taint tolerance duration to 60s.<br>• Fixed the issue that scale-out is still triggered after the scale-out rule is disabled. | 1.25.0 |
| 1.25.11 | v1.25 | • Supported anti-affinity scheduling of add-on pods on nodes in different AZs.<br>• Added a tolerance time for pods with temporary storage volumes that cannot be scheduled.<br>• Fixed the issue that the number of node pools cannot be restored when scaling group resources are insufficient. | 1.25.0 |
| 1.25.7 | v1.25 | • CCE clusters v1.25 are supported.<br>• Modified the memory request and limit of a customized flavor.<br>• Enabled reporting of an event indicating that scaling cannot be performed in a node pool with auto scaling disabled.<br>• Fixed the bug that NPU node scale-out is triggered again during scale-out. | 1.25.0 |

Table 7 Release history for the add-on adapted to clusters v1.23

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
| --- | --- | --- | --- |
| 1.23.159 | v1.23 | Fixed some issues. | 1.23.0 |
| 1.23.157 | v1.23 | Fixed some issues. | 1.23.0 |
| 1.23.156 | v1.23 | Added the name of the target node pool to the events. | 1.23.0 |
| 1.23.125 | v1.23 | Optimized events. | 1.23.0 |
| 1.23.121 | v1.23 | Optimized the logic for generating alarms when resources in a node pool are sold out. | 1.23.0 |
| 1.23.95 | v1.23 | Fixed some issues. | 1.23.0 |
| 1.23.93 | v1.23 | Fixed some issues. | 1.23.0 |
| 1.23.91 | v1.23 | Fixed some issues. | 1.23.0 |
| 1.23.54 | v1.23 | Fixed the scale-in failure of nodes with different specifications in the same node pool and fixed unexpected PreferNoSchedule taint issues. | 1.23.0 |
| 1.23.44 | v1.23 | • Optimized the method of identifying GPUs and NPUs.<br>• Used the remaining node quota of a cluster for the extra nodes that are beyond the cluster scale. | 1.23.0 |
| 1.23.31 | v1.23 | • Fixed the issue that the autoscaler's least-waste option is disabled by default.<br>• Fixed the issue that, after a scale-out failure in one node pool, scale-out cannot switch to another node pool until the add-on restarts.<br>• Changed the default taint tolerance duration to 60s.<br>• Fixed the issue that scale-out is still triggered after the scale-out rule is disabled. | 1.23.0 |
| 1.23.21 | v1.23 | • Supported anti-affinity scheduling of add-on pods on nodes in different AZs.<br>• Added a tolerance time for pods with temporary storage volumes that cannot be scheduled.<br>• Fixed the issue that the number of node pools cannot be restored when scaling group resources are insufficient. | 1.23.0 |
| 1.23.17 | v1.23 | • Supported NPUs and security containers.<br>• Supported node scaling policies without a step.<br>• Fixed a bug so that deleted node pools are automatically removed.<br>• Supported scheduling by priority.<br>• Supported the emptyDir scheduling policy.<br>• Fixed a bug so that scale-in can be triggered on nodes whose capacity is lower than the scale-in threshold when the node scaling policy is disabled.<br>• Modified the memory request and limit of a customized flavor.<br>• Enabled reporting of an event indicating that scaling cannot be performed in a node pool with auto scaling disabled.<br>• Fixed the bug that NPU node scale-out is triggered again during scale-out. | 1.23.0 |
| 1.23.10 | v1.23 | • Optimized logging.<br>• Supported scale-in waiting so that operations such as data dumps can be performed before a node is deleted. | 1.23.0 |
| 1.23.9 | v1.23 | Added the nodenetworkconfigs.crd.yangtse.cni resource object permission. | 1.23.0 |
| 1.23.8 | v1.23 | Fixed the issue that scale-out fails when the number of nodes to be added at a time exceeds the upper limit in periodic scale-outs. | 1.23.0 |
| 1.23.7 | v1.23 | - | 1.23.0 |
| 1.23.3 | v1.23 | CCE clusters v1.23 are supported. | 1.23.0 |
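Several entries above concern scale-in control, such as scale-in waiting before a node is deleted. In the upstream Cluster Autoscaler this add-on tracks, an individual node can be excluded from scale-in with a well-known annotation. The sketch below uses a hypothetical node name, and CCE-specific behavior may differ from upstream:

```yaml
# Upstream Cluster Autoscaler skips nodes carrying this annotation when
# selecting scale-in candidates. The node name is hypothetical.
apiVersion: v1
kind: Node
metadata:
  name: example-node
  annotations:
    cluster-autoscaler.kubernetes.io/scale-down-disabled: "true"
```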

Table 8 Release history for the add-on adapted to clusters v1.21

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
| --- | --- | --- | --- |
| 1.21.114 | v1.21 | Optimized the logic for generating alarms when resources in a node pool are sold out. | 1.21.0 |
| 1.21.89 | v1.21 | Fixed some issues. | 1.21.0 |
| 1.21.87 | v1.21 | Fixed some issues. | 1.21.0 |
| 1.21.86 | v1.21 | Fixed the issue that node pool auto scaling does not meet expectations after AZ topology constraints are configured for nodes. | 1.21.0 |
| 1.21.51 | v1.21 | Fixed the scale-in failure of nodes with different specifications in the same node pool and fixed unexpected PreferNoSchedule taint issues. | 1.21.0 |
| 1.21.43 | v1.21 | • Optimized the method of identifying GPUs and NPUs.<br>• Used the remaining node quota of a cluster for the extra nodes that are beyond the cluster scale. | 1.21.0 |
| 1.21.29 | v1.21 | • Supported anti-affinity scheduling of add-on pods on nodes in different AZs.<br>• Added a tolerance time for pods with temporary storage volumes that cannot be scheduled.<br>• Fixed the issue that the number of node pools cannot be restored when scaling group resources are insufficient.<br>• Fixed the issue that, after a scale-out failure in one node pool, scale-out cannot switch to another node pool until the add-on restarts.<br>• Changed the default taint tolerance duration to 60s.<br>• Fixed the issue that scale-out is still triggered after the scale-out rule is disabled. | 1.21.0 |
| 1.21.20 | v1.21 | • Supported anti-affinity scheduling of add-on pods on nodes in different AZs.<br>• Added a tolerance time for pods with temporary storage volumes that cannot be scheduled.<br>• Fixed the issue that the number of node pools cannot be restored when scaling group resources are insufficient. | 1.21.0 |
| 1.21.16 | v1.21 | • Supported NPUs and security containers.<br>• Supported node scaling policies without a step.<br>• Fixed a bug so that deleted node pools are automatically removed.<br>• Supported scheduling by priority.<br>• Supported the emptyDir scheduling policy.<br>• Fixed a bug so that scale-in can be triggered on nodes whose capacity is lower than the scale-in threshold when the node scaling policy is disabled.<br>• Modified the memory request and limit of a customized flavor.<br>• Enabled reporting of an event indicating that scaling cannot be performed in a node pool with auto scaling disabled.<br>• Fixed the bug that NPU node scale-out is triggered again during scale-out. | 1.21.0 |
| 1.21.9 | v1.21 | • Optimized logging.<br>• Supported scale-in waiting so that operations such as data dumps can be performed before a node is deleted. | 1.21.0 |
| 1.21.8 | v1.21 | Added the nodenetworkconfigs.crd.yangtse.cni resource object permission. | 1.21.0 |
| 1.21.6 | v1.21 | Fixed the issue that authentication fails due to an incorrect signature in add-on request retries. | 1.21.0 |
| 1.21.4 | v1.21 | Fixed the issue that authentication fails due to an incorrect signature in add-on request retries. | 1.21.0 |
| 1.21.2 | v1.21 | Fixed the issue that auto scaling may be blocked due to a failure in deleting an unregistered node. | 1.21.0 |
| 1.21.1 | v1.21 | Fixed the issue that modifying the node pool in an existing periodic auto scaling rule does not take effect. | 1.21.0 |
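Releases 1.21.16 and 1.23.17 above add support for the emptyDir scheduling policy. For context, an emptyDir volume is node-local temporary storage that is created when a pod is assigned to a node and removed with the pod, which is why the autoscaler must account for it when moving pods during scaling. A minimal sketch; the pod and image names are hypothetical:

```yaml
# Hypothetical pod using an emptyDir volume: node-local scratch space that
# lives only as long as the pod runs on that node.
apiVersion: v1
kind: Pod
metadata:
  name: scratch-pod
spec:
  containers:
    - name: app
      image: nginx              # hypothetical image
      volumeMounts:
        - name: scratch
          mountPath: /tmp/scratch
  volumes:
    - name: scratch
      emptyDir:
        sizeLimit: 1Gi          # optional cap on usage
```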

Table 9 Release history for the add-on adapted to clusters v1.19

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
| --- | --- | --- | --- |
| 1.19.76 | v1.19 | • Optimized the method of identifying GPUs and NPUs.<br>• Used the remaining node quota of a cluster for the extra nodes that are beyond the cluster scale. | 1.19.0 |
| 1.19.56 | v1.19 | Fixed the scale-in failure of nodes with different specifications in the same node pool and fixed unexpected PreferNoSchedule taint issues. | 1.19.0 |
| 1.19.48 | v1.19 | • Optimized the method of identifying GPUs and NPUs.<br>• Used the remaining node quota of a cluster for the extra nodes that are beyond the cluster scale. | 1.19.0 |
| 1.19.35 | v1.19 | • Supported anti-affinity scheduling of add-on pods on nodes in different AZs.<br>• Added a tolerance time for pods with temporary storage volumes that cannot be scheduled.<br>• Fixed the issue that the number of node pools cannot be restored when scaling group resources are insufficient.<br>• Fixed the issue that, after a scale-out failure in one node pool, scale-out cannot switch to another node pool until the add-on restarts.<br>• Changed the default taint tolerance duration to 60s.<br>• Fixed the issue that scale-out is still triggered after the scale-out rule is disabled. | 1.19.0 |
| 1.19.27 | v1.19 | • Supported anti-affinity scheduling of add-on pods on nodes in different AZs.<br>• Added a tolerance time for pods with temporary storage volumes that cannot be scheduled.<br>• Fixed the issue that the number of node pools cannot be restored when scaling group resources are insufficient. | 1.19.0 |
| 1.19.22 | v1.19 | • Supported NPUs and security containers.<br>• Supported node scaling policies without a step.<br>• Fixed a bug so that deleted node pools are automatically removed.<br>• Supported scheduling by priority.<br>• Supported the emptyDir scheduling policy.<br>• Fixed a bug so that scale-in can be triggered on nodes whose capacity is lower than the scale-in threshold when the node scaling policy is disabled.<br>• Modified the memory request and limit of a customized flavor.<br>• Enabled reporting of an event indicating that scaling cannot be performed in a node pool with auto scaling disabled.<br>• Fixed the bug that NPU node scale-out is triggered again during scale-out. | 1.19.0 |
| 1.19.14 | v1.19 | • Optimized logging.<br>• Supported scale-in waiting so that operations such as data dumps can be performed before a node is deleted. | 1.19.0 |
| 1.19.13 | v1.19 | Fixed the issue that scale-out fails when the number of nodes to be added at a time exceeds the upper limit in periodic scale-outs. | 1.19.0 |
| 1.19.12 | v1.19 | Fixed the issue that authentication fails due to an incorrect signature in add-on request retries. | 1.19.0 |
| 1.19.11 | v1.19 | Fixed the issue that authentication fails due to an incorrect signature in add-on request retries. | 1.19.0 |
| 1.19.9 | v1.19 | Fixed the issue that auto scaling may be blocked due to a failure in deleting an unregistered node. | 1.19.0 |
| 1.19.8 | v1.19 | Fixed the issue that modifying the node pool in an existing periodic auto scaling rule does not take effect. | 1.19.0 |
| 1.19.7 | v1.19 | Regularly upgraded add-on dependencies. | 1.19.0 |
| 1.19.6 | v1.19 | Fixed the issue that repeated scale-out is triggered when taints are asynchronously updated. | 1.19.0 |
| 1.19.3 | v1.19 | Supported scheduled scaling policies based on the total number of nodes, CPU limit, and memory limit, and fixed other functional defects. | 1.19.0 |

Table 10 Release history for the add-on adapted to clusters v1.17

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
| --- | --- | --- | --- |
| 1.17.27 | v1.17 | • Optimized logging.<br>• Fixed a bug so that deleted node pools are automatically removed.<br>• Supported scheduling by priority.<br>• Fixed the issue that taints on newly added nodes are overwritten.<br>• Fixed a bug so that scale-in can be triggered on nodes whose capacity is lower than the scale-in threshold when the node scaling policy is disabled.<br>• Modified the memory request and limit of a customized flavor.<br>• Enabled reporting of an event indicating that scaling cannot be performed in a node pool with auto scaling disabled. | 1.17.0 |
| 1.17.22 | v1.17 | Optimized logging. | 1.17.0 |
| 1.17.21 | v1.17 | Fixed the issue that scale-out fails when the number of nodes to be added at a time exceeds the upper limit in periodic scale-outs. | 1.17.0 |
| 1.17.19 | v1.17 | Fixed the issue that authentication fails due to an incorrect signature in add-on request retries. | 1.17.0 |
| 1.17.17 | v1.17 | Fixed the issue that auto scaling may be blocked due to a failure in deleting an unregistered node. | 1.17.0 |
| 1.17.16 | v1.17 | Fixed the issue that modifying the node pool in an existing periodic auto scaling rule does not take effect. | 1.17.0 |
| 1.17.15 | v1.17 | Unified the resource specification configuration unit. | 1.17.0 |
| 1.17.14 | v1.17 | Fixed the issue that repeated scale-out is triggered when taints are asynchronously updated. | 1.17.0 |
| 1.17.8 | v1.17 | Fixed bugs. | 1.17.0 |
| 1.17.7 | v1.17 | Added log content and fixed bugs. | 1.17.0 |
| 1.17.5 | v1.17 | Supported clusters v1.17 and allowed scaling events to be displayed on the CCE console. | 1.17.0 |
| 1.17.2 | v1.17 | Clusters v1.17 are supported. | 1.17.0 |
