HPA Policies
Horizontal Pod Autoscaling (HPA) is the Kubernetes mechanism for scaling the number of pods horizontally. A CCE HPA policy builds on the Kubernetes HPA and lets you configure different cooldown windows and scaling thresholds for different applications.
Prerequisites
- Kubernetes Metrics Server: provides basic resource usage metrics, such as container CPU and memory usage. It is supported by all cluster versions.
- Cloud Native Cluster Monitoring: available only in clusters of v1.17 or later.
  - Auto scaling based on basic resource metrics: Prometheus needs to be registered as a metrics API. For details, see Providing Resource Metrics Through the Metrics API.
  - Auto scaling based on custom metrics: Custom metrics need to be aggregated to the Kubernetes API server. For details, see Creating an HPA Policy Using Custom Metrics.
- Prometheus: Prometheus needs to be registered as a metrics API. For details, see Providing Resource Metrics Through the Metrics API. This add-on supports only clusters of v1.21 or earlier. (A sketch of such a metrics API registration is shown after this list.)
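These add-ons expose their metrics to the HPA controller through the Kubernetes aggregation layer. The linked guides cover the details; for orientation only, a metrics API is registered with an APIService object roughly like the following. The Service name and namespace are illustrative, and the console add-ons create the equivalent objects for you:

```yaml
# Illustrative sketch only: registers the custom metrics API group with the
# aggregation layer. For basic resource metrics the group is metrics.k8s.io.
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.custom.metrics.k8s.io
spec:
  group: custom.metrics.k8s.io
  version: v1beta1
  groupPriorityMinimum: 100
  versionPriority: 100
  insecureSkipTLSVerify: true
  service:
    name: custom-metrics-apiserver   # hypothetical metrics adapter Service
    namespace: monitoring            # hypothetical namespace
```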
Constraints
- HPA policies can be created only for clusters of v1.13 or later.
- For clusters earlier than v1.19.10, if an HPA policy is used to scale out a workload with EVS volumes mounted, data in the existing pods cannot be read or written when a new pod is scheduled to another node.
For clusters of v1.19.10 and later, if an HPA policy is used to scale out a workload with an EVS volume mounted, the new pod cannot start because the EVS disk cannot be attached to a second node.
Creating an HPA Policy
- Log in to the CCE console and click the cluster name to access the cluster console.
- Choose Workloads in the navigation pane. Locate the target workload and choose More > Auto Scaling in the Operation column.
- Set Policy Type to HPA+CronHPA, enable the HPA policy, and configure its parameters.
This section describes only HPA policies. To enable CronHPA, see CronHPA Policies.
Table 1 HPA policy parameters
Pod Range
Minimum and maximum numbers of pods.
When a policy is triggered, the workload pods are scaled within this range.
Cooldown Period
Interval, in minutes, between a scale-in and a scale-out. The interval cannot be shorter than 1 minute.
This parameter is supported only in clusters of v1.15 to v1.23.
This parameter indicates the interval between consecutive scaling operations. The cooldown period ensures that a scaling operation is initiated only after the previous one has completed and the system is running stably.
Scaling Behavior
This parameter is supported only in clusters of v1.25 or later.
- Default: scales workloads using the Kubernetes default behavior. For details, see Default Behavior.
- Custom: scales workloads using custom policies such as stabilization window, steps, and priorities. Unspecified parameters use the values recommended by Kubernetes. (A sample custom behavior configuration is provided after this parameter's description.)
- Disable scale-out/scale-in: Select whether to disable scale-out or scale-in.
- Stabilization Window: a period during which CCE continuously checks whether the metrics used for scaling keep fluctuating. Scaling is triggered only if the new desired state persists for the entire window, which restricts unwanted flapping of the pod count due to metric changes.
- Step: specifies the scaling step. You can set the number or percentage of pods to be scaled in or out within a specified period. If there are multiple policies, you can select the policy that maximizes or minimizes the number of pods.
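The Custom option corresponds to the behavior section of the Kubernetes HorizontalPodAutoscaler (autoscaling/v2). The minimal sketch below shows a stabilization window, steps, and policy selection; the exact fields that CCE generates from the console settings may differ, and selectPolicy: Disabled can be used to disable scale-out or scale-in:

```yaml
# Minimal sketch of the Kubernetes HPA behavior section (autoscaling/v2).
behavior:
  scaleDown:
    stabilizationWindowSeconds: 300   # require 5 minutes of stable metrics before scaling in
    policies:
    - type: Pods
      value: 2                        # remove at most 2 pods ...
      periodSeconds: 60               # ... per 60-second period
    selectPolicy: Min                 # if several policies apply, remove the fewest pods
  scaleUp:
    stabilizationWindowSeconds: 0
    policies:
    - type: Percent
      value: 100                      # add at most 100% of the current pods ...
      periodSeconds: 60               # ... per 60-second period
    selectPolicy: Max                 # if several policies apply, add the most pods
```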
System Policy
- Metric: You can select CPU usage or Memory usage.
NOTE:
Usage = (CPU or memory used by pods) / (requested CPU or memory)
- Desired Value: Enter the desired average resource usage.
This parameter indicates the desired value of the selected metric. Target number of pods (rounded up) = (Current metric value/Desired value) x Current number of pods
NOTE: When calculating the number of pods to be added or removed, the HPA policy uses the maximum number of pods in the last 5 minutes.
- Tolerance Range: Scaling is not triggered when the metric value is within the tolerance range. The desired value must be within the tolerance range.
If the metric value is greater than the scale-in threshold and less than the scale-out threshold, no scaling is triggered. This parameter is supported only in clusters of v1.15 or later. (A worked example of the scaling calculation and a sample configuration follow this parameter's description.)
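For example, if three pods currently average 90% CPU usage and the desired value is 60%, the target is rounded up from 90/60 x 3 = 4.5 to 5 pods, subject to the configured pod range. A system policy of this kind corresponds to a resource metric target in the Kubernetes HorizontalPodAutoscaler object (autoscaling/v2 in clusters of v1.23 or later; earlier clusters use autoscaling/v2beta2). The following is a minimal sketch with illustrative names, not the exact object that CCE generates:

```yaml
# Minimal sketch: CPU-based HPA with a pod range of 2-10 and a desired usage of 60%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa                  # illustrative policy name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx                    # illustrative workload name
  minReplicas: 2                   # Pod Range: minimum
  maxReplicas: 10                  # Pod Range: maximum
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 60     # Desired Value: 60% of the requested CPU
```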
Custom Policy (supported only in clusters of v1.15 or later)
NOTE: Before creating a custom policy, install an add-on that supports custom metric collection (for example, Prometheus) in the cluster. Ensure that the add-on can collect and report the custom metrics of the workloads.
For details, see Monitoring Custom Metrics Using Cloud Native Cluster Monitoring.
- Metric Name: name of the custom metric. You can select a name as prompted.
- Metric Source: Select an object type from the drop-down list. You can select Pod.
- Desired Value: the average metric value of all pods. Target number of pods (rounded up) = (Current metric value/Desired value) x Current number of pods
NOTE: When calculating the number of pods to be added or removed, the HPA policy uses the maximum number of pods in the last 5 minutes.
- Tolerance Range: Scaling is not triggered when the metric value is within the tolerance range. The desired value must be within the tolerance range. (A sample configuration based on a custom metric follows this description.)
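A custom policy corresponds to a Pods metric target in the same HorizontalPodAutoscaler object. The sketch below assumes the monitoring add-on exposes a per-pod metric named http_requests_per_second through the custom metrics API; the metric name and target value are illustrative:

```yaml
# Minimal sketch of a custom (Pods) metric target; it replaces or complements the
# resource metric in the HorizontalPodAutoscaler spec shown above.
metrics:
- type: Pods
  pods:
    metric:
      name: http_requests_per_second   # hypothetical custom metric collected by Prometheus
    target:
      type: AverageValue
      averageValue: "100"              # Desired Value: average value across all pods
```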
- Click Create.