
Workload Auto Scaling (HPA)

Horizontal Pod Autoscaling (HPA) in Kubernetes automatically scales the number of pods in a workload horizontally. In a CCE HPA policy, you can configure separate cooldown periods and scaling thresholds for different applications.
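For reference, an HPA policy created on the console typically corresponds to a Kubernetes HorizontalPodAutoscaler object. The following is a minimal sketch assuming the autoscaling/v2 API; the workload name, namespace, pod range, and 50% CPU target are illustrative values, not defaults:

```yaml
# Illustrative HorizontalPodAutoscaler corresponding to a console HPA policy.
# All names and values below are assumptions made for this example.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: scale-demo              # Policy Name
  namespace: default            # Namespace of the associated workload
spec:
  scaleTargetRef:               # Associated Workload
    apiVersion: apps/v1
    kind: Deployment
    name: scale-demo
  minReplicas: 1                # Pod Range: minimum number of pods
  maxReplicas: 10               # Pod Range: maximum number of pods
  metrics:                      # System Policy: CPU usage with a desired value of 50%
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
  behavior:                     # Optional: scale-in cooldown (stabilization) window
    scaleDown:
      stabilizationWindowSeconds: 300
```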

Prerequisites

To use HPA, install either of the following add-ons, which provide the required metrics APIs (for details, see Support for metrics APIs):

  • metrics-server: collects metrics from the Summary API exposed by kubelet and provides resource usage metrics such as the container CPU and memory usage
    • For details about how to install metrics-server for an on-premises cluster, see metrics-server.
    • For details about how to install metrics-server for other types of clusters, see the official documentation. For an attached cluster, you can also install the metrics-server provided by the corresponding vendor.
  • Prometheus: an open-source monitoring and alerting framework that collects metrics and provides basic resource metrics and custom metrics
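As a quick check, once one of these add-ons is running you can typically verify that the Metrics API is serving data by running kubectl top pods in the cluster; if per-pod CPU and memory values are returned, HPA has resource metrics to work with.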

Constraints

  • At least one pod must be available in the cluster. If no pod is available, a pod scale-out will be performed.
  • If no metrics collection add-on has been installed in the cluster, the workload scaling policy cannot take effect.
  • For on-premises clusters, only metrics-server can be installed to provide the Metrics API. More add-ons will be supported in the future.

Procedure

  1. Access the cluster details page.

    • If the cluster is not added to any fleet, click the cluster name.
    • If the cluster has been added to a fleet, click the fleet name. In the navigation pane, choose Clusters > Container Clusters, and then click the name of the target cluster.

  2. In the navigation pane, choose Workload Scaling. Then click Create HPA Policy in the upper right corner.
  3. Configure the parameters for the HPA policy.

    Table 1 HPA policy parameters

    • Policy Name: Enter a name for the policy.

    • Namespace: Select the namespace that the workload belongs to.

    • Associated Workload: Select the workload that the HPA policy is associated with.

    • Pod Range: Enter the minimum and maximum numbers of pods. When the policy is triggered, the workload pods are scaled within this range.
    • System Policy: You can configure multiple system policies. Each policy consists of the following settings:

      • Metric: Select CPU usage or Memory usage.

        NOTE: Usage = CPU or memory used by the pods/Requested CPU or memory

      • Desired Value: Enter the desired average resource usage, that is, the desired value of the selected metric.

        Number of pods required (rounded up) = Current metric value/Desired value x Number of current pods

        For a worked calculation, see the example after this table.

        NOTE: When calculating the number of pods to be added or reduced, the HPA policy uses the maximum number of pods in the last 5 minutes.

      • Tolerance Range: Enter the scale-in and scale-out thresholds. The default tolerance is 0.1, and the desired value must be within the tolerance range. If the metric value is greater than the scale-in threshold and less than the scale-out threshold, no scaling is triggered (see the example after this table). This parameter is available only in clusters of v1.15 or later.
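    A worked example for Desired Value (illustrative numbers): if the desired CPU usage is 50%, the current average usage is 80%, and the workload currently runs 2 pods, the number of pods required is 80/50 x 2 = 3.2, which is rounded up to 4, so 2 pods are added. If usage then drops to 20%, the calculation gives 20/50 x 4 = 1.6, rounded up to 2, so the workload is scaled in to 2 pods (always within the configured pod range).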
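    An example for Tolerance Range (illustrative numbers): with a desired CPU usage of 50% and the default tolerance of 0.1, the scale-in and scale-out thresholds could be set to 45% and 55%. While the measured usage stays above 45% and below 55%, no scaling is triggered; a scale-out is evaluated only above 55% and a scale-in only below 45%.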