
Creating an HPA Policy

Horizontal Pod Autoscaling (HPA) in Kubernetes implements horizontal scaling of pods. A CCE HPA policy is built on the Kubernetes HPA and lets you configure different cooldown windows and scaling thresholds for different applications.

Prerequisites

To use HPA, install an add-on that provides metrics APIs, for example, the Kubernetes Metrics Server add-on for CPU and memory metrics or Cloud Native Cluster Monitoring for custom metrics. Select an add-on based on your cluster version and service requirements.

Notes and Constraints

  • HPA policies can be created only for clusters of v1.13 or later.
  • For clusters earlier than v1.19.10, if an HPA policy is used to scale out a workload with EVS volumes mounted, the existing pods cannot read or write data when a new pod is scheduled to another node.

    For clusters of v1.19.10 and later, if an HPA policy is used to scale out a workload with EVS volumes mounted, a new pod cannot be started because EVS disks cannot be attached.

Procedure

  1. Log in to the CCE console and click the cluster name to access the cluster console.
  2. Choose Workloads in the navigation pane. Locate the target workload and choose More > Auto Scaling in the Operation column.

    Figure 1 Scaling a workload

  3. Set Policy Type to HPA+CronHPA, enable the HPA policy, and configure the parameters.

    This section describes only HPA policies. To enable CronHPA, see Creating a Scheduled CronHPA Policy.

    Figure 2 Enabling the HPA policy
    Table 1 HPA policy parameters

    Pod Range

    Minimum and maximum numbers of pods.

    When a policy is triggered, the number of workload pods is adjusted within this range. For a sample HorizontalPodAutoscaler object that shows these settings, see the sketch after this procedure.

    Cooldown Period

    Interval, in minutes, between consecutive scaling operations, for example, between a scale-out and the next scale-in. The interval cannot be shorter than 1 minute.

    This parameter is supported only in clusters of v1.15 to v1.23.

    The cooldown period ensures that a new scaling operation is initiated only after the previous one is complete and the system is running stably.

    Scaling Behavior

    This parameter is supported only in clusters of v1.25 or later.

    • Default: scales workloads using the Kubernetes default behavior. For details, see Default Behavior.
    • Custom: scales workloads using custom policies such as stabilization window, steps, and priorities. Unspecified parameters use the values recommended by Kubernetes.
      • Disable scale-out/scale-in: Select whether to disable scale-out or scale-in.
      • Stabilization Window: a period during which CCE continuously checks whether the metrics used for scaling keep fluctuating. CCE triggers scaling if the desired state is not maintained for the entire window. This window restricts the unwanted flapping of pod count due to metric changes.
      • Step: specifies the scaling step. You can set the number or percentage of pods to be scaled in or out within a specified period. If there are multiple policies, you can select the policy that maximizes or minimizes the number of pods. For how these settings map to the HPA behavior field, see the sketch after this procedure.

    System Policy

    • Metric: You can select CPU usage or Memory usage.
      NOTE:

      Usage = Average resource usage of all pods in a workload / Requested resources of the pods

    • Desired Value: Enter the desired average resource usage.

      This parameter indicates the desired value of the selected metric. Target number of pods (rounded up) = (Current metric value/Desired value) x Current number of pods. For example, if the current CPU usage is 80%, the desired value is 50%, and there are three pods, the target number of pods is (80/50) x 3 = 4.8, which is rounded up to 5.

      NOTE:

      When calculating the number of pods to be added or reduced, the HPA policy uses the maximum number of pods in the last 5 minutes.

    • Tolerance Range: Scaling is not triggered when the metric value is within the tolerance range. The desired value must be within the tolerance range.

      If the metric value is greater than the scale-in threshold and less than the scale-out threshold, no scaling is triggered. This parameter is supported only in clusters of v1.15 or later.

    Custom Policy (supported only in clusters of v1.15 or later)

    NOTE:

    Before creating a custom policy, install an add-on that supports custom metric collection (for example, Prometheus) in the cluster. Ensure that the add-on can collect and report the custom metrics of the workloads.

    For details, see Monitoring Custom Metrics Using Cloud Native Cluster Monitoring.

    • Metric Name: name of the custom metric. Select a name from the prompted options.
    • Metric Source: object type from which the metric is collected. You can select Pod.
    • Desired Value: the desired average metric value of all pods. Target number of pods (rounded up) = (Current metric value/Desired value) x Current number of pods. For how a custom metric is expressed in the HorizontalPodAutoscaler object, see the sketch after this procedure.
      NOTE:

      When calculating the number of pods to be added or reduced, the HPA policy uses the maximum number of pods in the last 5 minutes.

    • Tolerance Range: Scaling is not triggered when the metric value is within the tolerance range. The desired value must be within the tolerance range.

  4. Click Create.
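An HPA policy applies scaling rules based on the Kubernetes HorizontalPodAutoscaler. The following is a minimal sketch of how the parameters in Table 1 roughly map to a HorizontalPodAutoscaler manifest; the workload name, namespace, and values are placeholder assumptions, not console output:

apiVersion: autoscaling/v2          # use autoscaling/v2beta2 in clusters earlier than v1.23
kind: HorizontalPodAutoscaler
metadata:
  name: hpa-example                 # hypothetical policy name
  namespace: default
spec:
  scaleTargetRef:                   # the workload selected in the procedure above
    apiVersion: apps/v1
    kind: Deployment
    name: nginx                     # hypothetical workload name
  minReplicas: 2                    # Pod Range: minimum number of pods
  maxReplicas: 10                   # Pod Range: maximum number of pods
  metrics:                          # System Policy: CPU usage with a desired value of 70%
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70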
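If Scaling Behavior is set to Custom (clusters of v1.25 or later), the stabilization window and step settings roughly correspond to the behavior field of the HorizontalPodAutoscaler. A sketch with assumed values:

behavior:
  scaleUp:
    stabilizationWindowSeconds: 0    # Stabilization Window for scale-out: react immediately
    policies:
    - type: Percent                  # Step: add at most 100% of the current pods every 15s
      value: 100
      periodSeconds: 15
    - type: Pods                     # Step: add at most 4 pods every 15s
      value: 4
      periodSeconds: 15
    selectPolicy: Max                # with multiple policies, use the one that allows the larger change
  scaleDown:
    stabilizationWindowSeconds: 300  # Stabilization Window for scale-in: 5 minutes
    policies:
    - type: Pods                     # Step: remove at most 2 pods every 60s
      value: 2
      periodSeconds: 60
    selectPolicy: Min                # use the policy that allows the smaller change;
                                     # set selectPolicy to Disabled to disable scale-in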
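For a custom policy, the metric name, metric source, and desired value roughly correspond to a Pods-type metric entry in the same manifest. A sketch that assumes a custom metric named http_requests_per_second is collected by Prometheus and exposed through the metrics API (the metric name and value are placeholders):

metrics:
- type: Pods                          # Metric Source: Pod
  pods:
    metric:
      name: http_requests_per_second  # Metric Name: hypothetical custom metric
    target:
      type: AverageValue              # Desired Value: average metric value across all pods
      averageValue: "100"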