Updated on 2025-02-22 GMT+08:00

CCE Cluster Autoscaler Release History

Table 1 Release history for the add-on adapted to clusters v1.31

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
|----------------|---------------------------|-------------|-------------------|
| 1.31.8 | v1.31 | CCE clusters v1.31 are supported. | 1.31.1 |

Table 2 Release history for the add-on adapted to clusters v1.30

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
|----------------|---------------------------|-------------|-------------------|
| 1.30.46 | v1.30 | Fixed some issues. | 1.30.1 |
| 1.30.19 | v1.30 | Fixed some issues. | 1.30.1 |
| 1.30.18 | v1.30 | Fixed some issues. | 1.30.1 |
| 1.30.15 | v1.30 | • Clusters v1.30 are supported. • Added the name of the target node pool to the events. | 1.30.1 |

Table 3 Release history for the add-on adapted to clusters v1.29

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
|----------------|---------------------------|-------------|-------------------|
| 1.29.79 | v1.29 | Fixed some issues. | 1.29.1 |
| 1.29.54 | v1.29 | Fixed some issues. | 1.29.1 |
| 1.29.53 | v1.29 | Fixed some issues. | 1.29.1 |
| 1.29.50 | v1.29 | Added the name of the target node pool to the events. | 1.29.1 |
| 1.29.17 | v1.29 | Optimized events. | 1.29.1 |
| 1.29.13 | v1.29 | Clusters v1.29 are supported. | 1.29.1 |

Table 4 Release history for the add-on adapted to clusters v1.28

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
|----------------|---------------------------|-------------|-------------------|
| 1.28.118 | v1.28 | Fixed some issues. | 1.28.1 |
| 1.28.93 | v1.28 | Fixed some issues. | 1.28.1 |
| 1.28.92 | v1.28 | Fixed some issues. | 1.28.1 |
| 1.28.91 | v1.28 | Fixed some issues. | 1.28.1 |
| 1.28.88 | v1.28 | Added the name of the target node pool to the events. | 1.28.1 |
| 1.28.55 | v1.28 | Optimized events. | 1.28.1 |
| 1.28.51 | v1.28 | Optimized the logic for generating alarms when resources in a node pool are sold out. | 1.28.1 |
| 1.28.22 | v1.28 | Fixed some issues. | 1.28.1 |
| 1.28.20 | v1.28 | Fixed some issues. | 1.28.1 |
| 1.28.17 | v1.28 | Fixed the issue that scale-in cannot be performed when there are custom pod controllers in a cluster. | 1.28.1 |

Table 5 Release history for the add-on adapted to clusters v1.27

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
|----------------|---------------------------|-------------|-------------------|
| 1.27.149 | v1.27 | Fixed some issues. | 1.27.1 |
| 1.27.124 | v1.27 | Fixed some issues. | 1.27.1 |
| 1.27.123 | v1.27 | Fixed some issues. | 1.27.1 |
| 1.27.122 | v1.27 | Fixed some issues. | 1.27.1 |
| 1.27.119 | v1.27 | Added the name of the target node pool to the events. | 1.27.1 |
| 1.27.88 | v1.27 | Optimized events. | 1.27.1 |
| 1.27.84 | v1.27 | Optimized the logic for generating alarms when resources in a node pool are sold out. | 1.27.1 |
| 1.27.55 | v1.27 | Fixed some issues. | 1.27.1 |
| 1.27.53 | v1.27 | Fixed some issues. | 1.27.1 |
| 1.27.51 | v1.27 | Fixed some issues. | 1.27.1 |
| 1.27.14 | v1.27 | Fixed the scale-in failure for nodes of different specifications in the same node pool and the issue of unexpected PreferNoSchedule taints. | 1.27.1 |

Table 6 Release history for the add-on adapted to clusters v1.25

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
|----------------|---------------------------|-------------|-------------------|
| 1.25.179 | v1.25 | Fixed some issues. | 1.25.0 |
| 1.25.154 | v1.25 | Fixed some issues. | 1.25.0 |
| 1.25.153 | v1.25 | Fixed some issues. | 1.25.0 |
| 1.25.152 | v1.25 | Added the name of the target node pool to the events. | 1.25.0 |
| 1.25.120 | v1.25 | Optimized events. | 1.25.0 |
| 1.25.116 | v1.25 | Optimized the logic for generating alarms when resources in a node pool are sold out. | 1.25.0 |
| 1.25.88 | v1.25 | Fixed some issues. | 1.25.0 |
| 1.25.86 | v1.25 | Fixed some issues. | 1.25.0 |
| 1.25.84 | v1.25 | Fixed some issues. | 1.25.0 |
| 1.25.46 | v1.25 | Fixed the scale-in failure for nodes of different specifications in the same node pool and the issue of unexpected PreferNoSchedule taints. | 1.25.0 |
| 1.25.34 | v1.25 | • Optimized the method of identifying GPUs and NPUs. • Used the remaining node quota of a cluster for extra nodes beyond the cluster scale. | 1.25.0 |
| 1.25.21 | v1.25 | • Fixed the issue that the autoscaler's least-waste policy is disabled by default. • Fixed the issue that, after a scale-out failure, the add-on must restart before scale-out can switch to another node pool. • Changed the default taint tolerance duration to 60s. • Fixed the issue that scale-out is still triggered after the scale-out rule is disabled. | 1.25.0 |
| 1.25.11 | v1.25 | • Supported anti-affinity scheduling of add-on pods on nodes in different AZs. • Added a tolerance time for pods with temporary storage volumes that cannot be scheduled. • Fixed the issue that the number of node pools cannot be restored when scaling group resources are insufficient. | 1.25.0 |
| 1.25.7 | v1.25 | • CCE clusters v1.25 are supported. • Modified the memory request and limit of a customized flavor. • Enabled reporting of an event indicating that scaling cannot be performed in a node pool with auto scaling disabled. • Fixed the bug that NPU node scale-out is repeatedly triggered during a scale-out. | 1.25.0 |

Table 7 Release history for the add-on adapted to clusters v1.23

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
|----------------|---------------------------|-------------|-------------------|
| 1.23.159 | v1.23 | Fixed some issues. | 1.23.0 |
| 1.23.157 | v1.23 | Fixed some issues. | 1.23.0 |
| 1.23.156 | v1.23 | Added the name of the target node pool to the events. | 1.23.0 |
| 1.23.125 | v1.23 | Optimized events. | 1.23.0 |
| 1.23.121 | v1.23 | Optimized the logic for generating alarms when resources in a node pool are sold out. | 1.23.0 |
| 1.23.95 | v1.23 | Fixed some issues. | 1.23.0 |
| 1.23.93 | v1.23 | Fixed some issues. | 1.23.0 |
| 1.23.91 | v1.23 | Fixed some issues. | 1.23.0 |
| 1.23.54 | v1.23 | Fixed the scale-in failure for nodes of different specifications in the same node pool and the issue of unexpected PreferNoSchedule taints. | 1.23.0 |
| 1.23.44 | v1.23 | • Optimized the method of identifying GPUs and NPUs. • Used the remaining node quota of a cluster for extra nodes beyond the cluster scale. | 1.23.0 |
| 1.23.31 | v1.23 | • Fixed the issue that the autoscaler's least-waste policy is disabled by default. • Fixed the issue that, after a scale-out failure, the add-on must restart before scale-out can switch to another node pool. • Changed the default taint tolerance duration to 60s. • Fixed the issue that scale-out is still triggered after the scale-out rule is disabled. | 1.23.0 |
| 1.23.21 | v1.23 | • Supported anti-affinity scheduling of add-on pods on nodes in different AZs. • Added a tolerance time for pods with temporary storage volumes that cannot be scheduled. • Fixed the issue that the number of node pools cannot be restored when scaling group resources are insufficient. | 1.23.0 |
| 1.23.17 | v1.23 | • Supported NPUs and secure containers. • Supported node scaling policies without a step. • Fixed a bug so that deleted node pools are automatically removed. • Supported scheduling by priority. • Supported the emptyDir scheduling policy. • Fixed a bug so that scale-in can be triggered on nodes whose capacity is lower than the scale-in threshold when the node scaling policy is disabled. • Modified the memory request and limit of a customized flavor. • Enabled reporting of an event indicating that scaling cannot be performed in a node pool with auto scaling disabled. • Fixed the bug that NPU node scale-out is repeatedly triggered during a scale-out. | 1.23.0 |
| 1.23.10 | v1.23 | • Optimized logging. • Supported scale-in waiting so that operations such as data dumps can be performed before a node is deleted. | 1.23.0 |
| 1.23.9 | v1.23 | Added the nodenetworkconfigs.crd.yangtse.cni resource object permission. | 1.23.0 |
| 1.23.8 | v1.23 | Fixed the issue that scale-out fails when the number of nodes to be added at a time exceeds the upper limit in periodic scale-outs. | 1.23.0 |
| 1.23.7 | v1.23 | - | 1.23.0 |
| 1.23.3 | v1.23 | CCE clusters v1.23 are supported. | 1.23.0 |

Table 8 Release history for the add-on adapted to clusters v1.21

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
|----------------|---------------------------|-------------|-------------------|
| 1.21.114 | v1.21 | Optimized the logic for generating alarms when resources in a node pool are sold out. | 1.21.0 |
| 1.21.89 | v1.21 | Fixed some issues. | 1.21.0 |
| 1.21.87 | v1.21 | Fixed some issues. | 1.21.0 |
| 1.21.86 | v1.21 | Fixed the issue that node pool auto scaling does not meet expectations after AZ topology constraints are configured for nodes. | 1.21.0 |
| 1.21.51 | v1.21 | Fixed the scale-in failure for nodes of different specifications in the same node pool and the issue of unexpected PreferNoSchedule taints. | 1.21.0 |
| 1.21.43 | v1.21 | • Optimized the method of identifying GPUs and NPUs. • Used the remaining node quota of a cluster for extra nodes beyond the cluster scale. | 1.21.0 |
| 1.21.29 | v1.21 | • Supported anti-affinity scheduling of add-on pods on nodes in different AZs. • Added a tolerance time for pods with temporary storage volumes that cannot be scheduled. • Fixed the issue that the number of node pools cannot be restored when scaling group resources are insufficient. • Fixed the issue that, after a scale-out failure, the add-on must restart before scale-out can switch to another node pool. • Changed the default taint tolerance duration to 60s. • Fixed the issue that scale-out is still triggered after the scale-out rule is disabled. | 1.21.0 |
| 1.21.20 | v1.21 | • Supported anti-affinity scheduling of add-on pods on nodes in different AZs. • Added a tolerance time for pods with temporary storage volumes that cannot be scheduled. • Fixed the issue that the number of node pools cannot be restored when scaling group resources are insufficient. | 1.21.0 |
| 1.21.16 | v1.21 | • Supported NPUs and secure containers. • Supported node scaling policies without a step. • Fixed a bug so that deleted node pools are automatically removed. • Supported scheduling by priority. • Supported the emptyDir scheduling policy. • Fixed a bug so that scale-in can be triggered on nodes whose capacity is lower than the scale-in threshold when the node scaling policy is disabled. • Modified the memory request and limit of a customized flavor. • Enabled reporting of an event indicating that scaling cannot be performed in a node pool with auto scaling disabled. • Fixed the bug that NPU node scale-out is repeatedly triggered during a scale-out. | 1.21.0 |
| 1.21.9 | v1.21 | • Optimized logging. • Supported scale-in waiting so that operations such as data dumps can be performed before a node is deleted. | 1.21.0 |
| 1.21.8 | v1.21 | Added the nodenetworkconfigs.crd.yangtse.cni resource object permission. | 1.21.0 |
| 1.21.6 | v1.21 | Fixed the issue that authentication fails due to an incorrect signature during add-on request retries. | 1.21.0 |
| 1.21.4 | v1.21 | Fixed the issue that authentication fails due to an incorrect signature during add-on request retries. | 1.21.0 |
| 1.21.2 | v1.21 | Fixed the issue that auto scaling may be blocked due to a failure in deleting an unregistered node. | 1.21.0 |
| 1.21.1 | v1.21 | Fixed the issue that node pool modifications in an existing periodic auto scaling rule do not take effect. | 1.21.0 |

Table 9 Release history for the add-on adapted to clusters v1.19

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
|----------------|---------------------------|-------------|-------------------|
| 1.19.76 | v1.19 | • Optimized the method of identifying GPUs and NPUs. • Used the remaining node quota of a cluster for extra nodes beyond the cluster scale. | 1.19.0 |
| 1.19.56 | v1.19 | Fixed the scale-in failure for nodes of different specifications in the same node pool and the issue of unexpected PreferNoSchedule taints. | 1.19.0 |
| 1.19.48 | v1.19 | • Optimized the method of identifying GPUs and NPUs. • Used the remaining node quota of a cluster for extra nodes beyond the cluster scale. | 1.19.0 |
| 1.19.35 | v1.19 | • Supported anti-affinity scheduling of add-on pods on nodes in different AZs. • Added a tolerance time for pods with temporary storage volumes that cannot be scheduled. • Fixed the issue that the number of node pools cannot be restored when scaling group resources are insufficient. • Fixed the issue that, after a scale-out failure, the add-on must restart before scale-out can switch to another node pool. • Changed the default taint tolerance duration to 60s. • Fixed the issue that scale-out is still triggered after the scale-out rule is disabled. | 1.19.0 |
| 1.19.27 | v1.19 | • Supported anti-affinity scheduling of add-on pods on nodes in different AZs. • Added a tolerance time for pods with temporary storage volumes that cannot be scheduled. • Fixed the issue that the number of node pools cannot be restored when scaling group resources are insufficient. | 1.19.0 |
| 1.19.22 | v1.19 | • Supported NPUs and secure containers. • Supported node scaling policies without a step. • Fixed a bug so that deleted node pools are automatically removed. • Supported scheduling by priority. • Supported the emptyDir scheduling policy. • Fixed a bug so that scale-in can be triggered on nodes whose capacity is lower than the scale-in threshold when the node scaling policy is disabled. • Modified the memory request and limit of a customized flavor. • Enabled reporting of an event indicating that scaling cannot be performed in a node pool with auto scaling disabled. • Fixed the bug that NPU node scale-out is repeatedly triggered during a scale-out. | 1.19.0 |
| 1.19.14 | v1.19 | • Optimized logging. • Supported scale-in waiting so that operations such as data dumps can be performed before a node is deleted. | 1.19.0 |
| 1.19.13 | v1.19 | Fixed the issue that scale-out fails when the number of nodes to be added at a time exceeds the upper limit in periodic scale-outs. | 1.19.0 |
| 1.19.12 | v1.19 | Fixed the issue that authentication fails due to an incorrect signature during add-on request retries. | 1.19.0 |
| 1.19.11 | v1.19 | Fixed the issue that authentication fails due to an incorrect signature during add-on request retries. | 1.19.0 |
| 1.19.9 | v1.19 | Fixed the issue that auto scaling may be blocked due to a failure in deleting an unregistered node. | 1.19.0 |
| 1.19.8 | v1.19 | Fixed the issue that node pool modifications in an existing periodic auto scaling rule do not take effect. | 1.19.0 |
| 1.19.7 | v1.19 | Regularly upgraded add-on dependencies. | 1.19.0 |
| 1.19.6 | v1.19 | Fixed the issue that repeated scale-out is triggered when taints are asynchronously updated. | 1.19.0 |
| 1.19.3 | v1.19 | Supported scheduled scaling policies based on the total number of nodes, CPU limit, and memory limit, and fixed other functional defects. | 1.19.0 |

Table 10 Release history for the add-on adapted to clusters v1.17

| Add-on Version | Supported Cluster Version | New Feature | Community Version |
|----------------|---------------------------|-------------|-------------------|
| 1.17.27 | v1.17 | • Optimized logging. • Fixed a bug so that deleted node pools are automatically removed. • Supported scheduling by priority. • Fixed the issue that taints on newly added nodes are overwritten. • Fixed a bug so that scale-in can be triggered on nodes whose capacity is lower than the scale-in threshold when the node scaling policy is disabled. • Modified the memory request and limit of a customized flavor. • Enabled reporting of an event indicating that scaling cannot be performed in a node pool with auto scaling disabled. | 1.17.0 |
| 1.17.22 | v1.17 | Optimized logging. | 1.17.0 |
| 1.17.21 | v1.17 | Fixed the issue that scale-out fails when the number of nodes to be added at a time exceeds the upper limit in periodic scale-outs. | 1.17.0 |
| 1.17.19 | v1.17 | Fixed the issue that authentication fails due to an incorrect signature during add-on request retries. | 1.17.0 |
| 1.17.17 | v1.17 | Fixed the issue that auto scaling may be blocked due to a failure in deleting an unregistered node. | 1.17.0 |
| 1.17.16 | v1.17 | Fixed the issue that node pool modifications in an existing periodic auto scaling rule do not take effect. | 1.17.0 |
| 1.17.15 | v1.17 | Unified the resource specification configuration unit. | 1.17.0 |
| 1.17.14 | v1.17 | Fixed the issue that repeated scale-out is triggered when taints are asynchronously updated. | 1.17.0 |
| 1.17.8 | v1.17 | Fixed bugs. | 1.17.0 |
| 1.17.7 | v1.17 | Added log content and fixed bugs. | 1.17.0 |
| 1.17.5 | v1.17 | Supported clusters v1.17 and allowed scaling events to be displayed on the CCE console. | 1.17.0 |
| 1.17.2 | v1.17 | Clusters v1.17 are supported. | 1.17.0 |