Updated on 2023-11-15 GMT+08:00

autoscaler Release History

Table 1 Updates of autoscaler adapted to clusters v1.25

Add-on Version: 1.25.21
Supported Cluster Version: v1.25
New Feature:
  • Fixes the issue where the autoscaler's least-waste setting is disabled by default (see the note after this entry).
  • Fixes the issue where, after a scale-out failure, scale-out cannot be switched to another node pool until the add-on restarts.
  • Changes the default taint tolerance duration to 60s.
  • Fixes the issue where scale-out is still triggered after the scale-out rule is disabled.
Community Version: 1.25.0
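
Note: In the upstream Kubernetes cluster-autoscaler that this add-on is based on, least-waste is an expander strategy: among the node pools that can fit the pending pods, it scales out the one that would leave the least idle CPU and memory. The Python sketch below illustrates only that scoring idea; the flavor names, capacities, and equal CPU/memory weighting are illustrative assumptions, not the add-on's actual implementation.

    # Simplified sketch of least-waste selection: among node pools whose
    # per-node capacity fits the pending request, pick the pool that would
    # leave the least idle CPU and memory after placing the request.
    def least_waste(pools, cpu_req, mem_req):
        def waste(pool):
            name, cpu_cap, mem_cap = pool
            # Fraction of each resource left idle, weighted equally.
            return (cpu_cap - cpu_req) / cpu_cap + (mem_cap - mem_req) / mem_cap

        fitting = [p for p in pools if p[1] >= cpu_req and p[2] >= mem_req]
        return min(fitting, key=waste, default=None)

    # Hypothetical node pool flavors: (name, vCPUs per node, memory in GiB per node).
    pools = [
        ("general-4c8g", 4.0, 8.0),
        ("compute-8c16g", 8.0, 16.0),
        ("memory-4c32g", 4.0, 32.0),
    ]
    print(least_waste(pools, cpu_req=3.5, mem_req=6.0))

Running the sketch prints ("general-4c8g", 4.0, 8.0), the pool that leaves the least idle capacity for that request.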

Add-on Version: 1.25.11
Supported Cluster Version: v1.25
New Feature:
  • Supports anti-affinity scheduling of pods on nodes in different AZs (see the note after this entry).
  • Adds a tolerance time for pods with temporary storage volumes that cannot be scheduled.
  • Fixes the issue where the number of node pools cannot be restored when AS group resources are insufficient.
Community Version: 1.25.0
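
Note: AZ anti-affinity relies on the standard Kubernetes pod anti-affinity mechanism keyed on the well-known zone topology label. The following minimal, CCE-agnostic sketch builds the relevant pod spec fragment as a plain Python dict; the app label is a hypothetical placeholder.

    # Standard Kubernetes pod anti-affinity stanza that spreads replicas
    # across availability zones via the well-known zone topology label.
    import json

    anti_affinity = {
        "affinity": {
            "podAntiAffinity": {
                "requiredDuringSchedulingIgnoredDuringExecution": [
                    {
                        # Hypothetical label selecting the replicas to spread.
                        "labelSelector": {"matchLabels": {"app": "my-app"}},
                        # Nodes in the same AZ share this label value, so
                        # matching pods must land in different AZs.
                        "topologyKey": "topology.kubernetes.io/zone",
                    }
                ]
            }
        }
    }

    print(json.dumps(anti_affinity, indent=2))

With a required rule like this, a replica that cannot be placed in an unused AZ stays pending, and an AZ-aware autoscaler can then scale out a node pool in a different AZ; that is the scenario this entry concerns.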

Add-on Version: 1.25.7
Supported Cluster Version: v1.25
New Feature:
  • Adapts to CCE clusters of version 1.25.
  • Modifies the memory request and limit of a customized flavor.
  • Reports an event when scaling cannot be performed in a node pool because auto scaling is disabled.
  • Fixes the bug where NPU node scale-out is triggered again during an ongoing scale-out.
Community Version: 1.25.0

Table 2 Updates of autoscaler adapted to clusters v1.23

Add-on Version: 1.23.44
Supported Cluster Version: v1.23
New Feature:
  • Optimizes the method of identifying GPUs and NPUs.
  • Uses the cluster's remaining node quota for extra nodes beyond the cluster scale.
Community Version: 1.23.0

Add-on Version: 1.23.31
Supported Cluster Version: v1.23
New Feature:
  • Fixes the issue where the autoscaler's least-waste setting is disabled by default.
  • Fixes the issue where, after a scale-out failure, scale-out cannot be switched to another node pool until the add-on restarts.
  • Changes the default taint tolerance duration to 60s.
  • Fixes the issue where scale-out is still triggered after the scale-out rule is disabled.
Community Version: 1.23.0

Add-on Version: 1.23.21
Supported Cluster Version: v1.23
New Feature:
  • Supports anti-affinity scheduling of pods on nodes in different AZs.
  • Adds a tolerance time for pods with temporary storage volumes that cannot be scheduled.
  • Fixes the issue where the number of node pools cannot be restored when AS group resources are insufficient.
Community Version: 1.23.0

Add-on Version: 1.23.17
Supported Cluster Version: v1.23
New Feature:
  • Supports NPUs and security containers.
  • Supports node scaling policies that do not specify a scaling step.
  • Fixes a bug so that deleted node pools are automatically removed.
  • Supports scheduling by priority.
  • Supports the emptyDir scheduling policy.
  • Fixes a bug so that scale-in can be triggered on nodes whose capacity is lower than the scale-in threshold when the node scaling policy is disabled.
  • Modifies the memory request and limit of a customized flavor.
  • Reports an event when scaling cannot be performed in a node pool because auto scaling is disabled.
  • Fixes the bug where NPU node scale-out is triggered again during an ongoing scale-out.
Community Version: 1.23.0

Add-on Version: 1.23.10
Supported Cluster Version: v1.23
New Feature:
  • Optimizes logging.
  • Supports scale-in waiting so that operations such as data dumps can be performed before a node is deleted.
Community Version: 1.23.0

Add-on Version: 1.23.9
Supported Cluster Version: v1.23
New Feature:
  • Adds the nodenetworkconfigs.crd.yangtse.cni resource object permission.
Community Version: 1.23.0

Add-on Version: 1.23.8
Supported Cluster Version: v1.23
New Feature:
  • Fixes the issue where periodic scale-out fails when the number of nodes to be added at a time exceeds the upper limit.
Community Version: 1.23.0

Add-on Version: 1.23.7
Supported Cluster Version: v1.23
New Feature: -
Community Version: 1.23.0

Add-on Version: 1.23.3
Supported Cluster Version: v1.23
New Feature:
  • Adapts to CCE clusters of version 1.23.
Community Version: 1.23.0

Table 3 Updates of autoscaler adapted to clusters v1.21

Add-on Version: 1.21.43
Supported Cluster Version: v1.21
New Feature:
  • Optimizes the method of identifying GPUs and NPUs.
  • Uses the cluster's remaining node quota for extra nodes beyond the cluster scale.
Community Version: 1.21.0

Add-on Version: 1.21.29
Supported Cluster Version: v1.21
New Feature:
  • Supports anti-affinity scheduling of pods on nodes in different AZs.
  • Adds a tolerance time for pods with temporary storage volumes that cannot be scheduled.
  • Fixes the issue where the number of node pools cannot be restored when AS group resources are insufficient.
  • Fixes the issue where, after a scale-out failure, scale-out cannot be switched to another node pool until the add-on restarts.
  • Changes the default taint tolerance duration to 60s.
  • Fixes the issue where scale-out is still triggered after the scale-out rule is disabled.
Community Version: 1.21.0

Add-on Version: 1.21.20
Supported Cluster Version: v1.21
New Feature:
  • Supports anti-affinity scheduling of pods on nodes in different AZs.
  • Adds a tolerance time for pods with temporary storage volumes that cannot be scheduled.
  • Fixes the issue where the number of node pools cannot be restored when AS group resources are insufficient.
Community Version: 1.21.0

Add-on Version: 1.21.16
Supported Cluster Version: v1.21
New Feature:
  • Supports NPUs and security containers.
  • Supports node scaling policies that do not specify a scaling step.
  • Fixes a bug so that deleted node pools are automatically removed.
  • Supports scheduling by priority.
  • Supports the emptyDir scheduling policy.
  • Fixes a bug so that scale-in can be triggered on nodes whose capacity is lower than the scale-in threshold when the node scaling policy is disabled.
  • Modifies the memory request and limit of a customized flavor.
  • Reports an event when scaling cannot be performed in a node pool because auto scaling is disabled.
  • Fixes the bug where NPU node scale-out is triggered again during an ongoing scale-out.
Community Version: 1.21.0

Add-on Version: 1.21.9
Supported Cluster Version: v1.21
New Feature:
  • Optimizes logging.
  • Supports scale-in waiting so that operations such as data dumps can be performed before a node is deleted.
Community Version: 1.21.0

Add-on Version: 1.21.8
Supported Cluster Version: v1.21
New Feature:
  • Adds the nodenetworkconfigs.crd.yangtse.cni resource object permission.
Community Version: 1.21.0

Add-on Version: 1.21.6
Supported Cluster Version: v1.21
New Feature:
  • Fixes the issue where authentication fails due to an incorrect signature when add-on requests are retried.
Community Version: 1.21.0

Add-on Version: 1.21.4
Supported Cluster Version: v1.21
New Feature:
  • Fixes the issue where authentication fails due to an incorrect signature when add-on requests are retried.
Community Version: 1.21.0

Add-on Version: 1.21.2
Supported Cluster Version: v1.21
New Feature:
  • Fixes the issue where auto scaling may be blocked because an unregistered node fails to be deleted.
Community Version: 1.21.0

Add-on Version: 1.21.1
Supported Cluster Version: v1.21
New Feature:
  • Fixes the issue where modifying the node pool in an existing periodic auto scaling rule does not take effect.
Community Version: 1.21.0

Table 4 Updates of autoscaler adapted to clusters v1.19

Add-on Version: 1.19.48
Supported Cluster Version: v1.19
New Feature:
  • Optimizes the method of identifying GPUs and NPUs.
  • Uses the cluster's remaining node quota for extra nodes beyond the cluster scale.
Community Version: 1.19.0

Add-on Version: 1.19.35
Supported Cluster Version: v1.19
New Feature:
  • Supports anti-affinity scheduling of pods on nodes in different AZs.
  • Adds a tolerance time for pods with temporary storage volumes that cannot be scheduled.
  • Fixes the issue where the number of node pools cannot be restored when AS group resources are insufficient.
  • Fixes the issue where, after a scale-out failure, scale-out cannot be switched to another node pool until the add-on restarts.
  • Changes the default taint tolerance duration to 60s.
  • Fixes the issue where scale-out is still triggered after the scale-out rule is disabled.
Community Version: 1.19.0

Add-on Version: 1.19.27
Supported Cluster Version: v1.19
New Feature:
  • Supports anti-affinity scheduling of pods on nodes in different AZs.
  • Adds a tolerance time for pods with temporary storage volumes that cannot be scheduled.
  • Fixes the issue where the number of node pools cannot be restored when AS group resources are insufficient.
Community Version: 1.19.0

Add-on Version: 1.19.22
Supported Cluster Version: v1.19
New Feature:
  • Supports NPUs and security containers.
  • Supports node scaling policies that do not specify a scaling step.
  • Fixes a bug so that deleted node pools are automatically removed.
  • Supports scheduling by priority.
  • Supports the emptyDir scheduling policy.
  • Fixes a bug so that scale-in can be triggered on nodes whose capacity is lower than the scale-in threshold when the node scaling policy is disabled.
  • Modifies the memory request and limit of a customized flavor.
  • Reports an event when scaling cannot be performed in a node pool because auto scaling is disabled.
  • Fixes the bug where NPU node scale-out is triggered again during an ongoing scale-out.
Community Version: 1.19.0

Add-on Version: 1.19.14
Supported Cluster Version: v1.19
New Feature:
  • Optimizes logging.
  • Supports scale-in waiting so that operations such as data dumps can be performed before a node is deleted.
Community Version: 1.19.0

Add-on Version: 1.19.13
Supported Cluster Version: v1.19
New Feature:
  • Fixes the issue where periodic scale-out fails when the number of nodes to be added at a time exceeds the upper limit.
Community Version: 1.19.0

Add-on Version: 1.19.12
Supported Cluster Version: v1.19
New Feature:
  • Fixes the issue where authentication fails due to an incorrect signature when add-on requests are retried.
Community Version: 1.19.0

Add-on Version: 1.19.11
Supported Cluster Version: v1.19
New Feature:
  • Fixes the issue where authentication fails due to an incorrect signature when add-on requests are retried.
Community Version: 1.19.0

Add-on Version: 1.19.9
Supported Cluster Version: v1.19
New Feature:
  • Fixes the issue where auto scaling may be blocked because an unregistered node fails to be deleted.
Community Version: 1.19.0

Add-on Version: 1.19.8
Supported Cluster Version: v1.19
New Feature:
  • Fixes the issue where modifying the node pool in an existing periodic auto scaling rule does not take effect.
Community Version: 1.19.0

Add-on Version: 1.19.7
Supported Cluster Version: v1.19
New Feature:
  • Regularly upgrades add-on dependencies.
Community Version: 1.19.0

Add-on Version: 1.19.6
Supported Cluster Version: v1.19
New Feature:
  • Fixes the issue where repeated scale-out is triggered when taints are updated asynchronously.
Community Version: 1.19.0

Add-on Version: 1.19.3
Supported Cluster Version: v1.19
New Feature:
  • Supports scheduled scaling policies based on the total number of nodes, CPU limit, and memory limit. Fixes other functional defects.
Community Version: 1.19.0

Table 5 Updates of autoscaler adapted to clusters v1.17

Add-on Version: 1.17.27
Supported Cluster Version: v1.17
New Feature:
  • Optimizes logging.
  • Fixes a bug so that deleted node pools are automatically removed.
  • Supports scheduling by priority.
  • Fixes the issue where taints on newly added nodes are overwritten.
  • Fixes a bug so that scale-in can be triggered on nodes whose capacity is lower than the scale-in threshold when the node scaling policy is disabled.
  • Modifies the memory request and limit of a customized flavor.
  • Reports an event when scaling cannot be performed in a node pool because auto scaling is disabled.
Community Version: 1.17.0

Add-on Version: 1.17.22
Supported Cluster Version: v1.17
New Feature:
  • Optimizes logging.
Community Version: 1.17.0

Add-on Version: 1.17.21
Supported Cluster Version: v1.17
New Feature:
  • Fixes the issue where periodic scale-out fails when the number of nodes to be added at a time exceeds the upper limit.
Community Version: 1.17.0

Add-on Version: 1.17.19
Supported Cluster Version: v1.17
New Feature:
  • Fixes the issue where authentication fails due to an incorrect signature when add-on requests are retried.
Community Version: 1.17.0

Add-on Version: 1.17.17
Supported Cluster Version: v1.17
New Feature:
  • Fixes the issue where auto scaling may be blocked because an unregistered node fails to be deleted.
Community Version: 1.17.0

Add-on Version: 1.17.16
Supported Cluster Version: v1.17
New Feature:
  • Fixes the issue where modifying the node pool in an existing periodic auto scaling rule does not take effect.
Community Version: 1.17.0

Add-on Version: 1.17.15
Supported Cluster Version: v1.17
New Feature:
  • Unifies the units used in resource specification configurations.
Community Version: 1.17.0

Add-on Version: 1.17.14
Supported Cluster Version: v1.17
New Feature:
  • Fixes the issue where repeated scale-out is triggered when taints are updated asynchronously.
Community Version: 1.17.0

Add-on Version: 1.17.8
Supported Cluster Version: v1.17
New Feature:
  • Fixes bugs.
Community Version: 1.17.0

Add-on Version: 1.17.7
Supported Cluster Version: v1.17
New Feature:
  • Adds log content and fixes bugs.
Community Version: 1.17.0

Add-on Version: 1.17.5
Supported Cluster Version: v1.17
New Feature:
  • Supports clusters of version 1.17 and allows scaling events to be displayed on the CCE console.
Community Version: 1.17.0

Add-on Version: 1.17.2
Supported Cluster Version: v1.17
New Feature:
  • Supports clusters of version 1.17.
Community Version: 1.17.0