Workload Auto Scaling (HPA)
Horizontal Pod Autoscaler (HPA) is the Kubernetes mechanism for horizontally scaling pods. A UCS HPA policy builds on the Kubernetes HPA and lets you configure different cooldown time windows and scaling thresholds for different applications.
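Conceptually, a UCS HPA policy maps to the standard Kubernetes HorizontalPodAutoscaler object. The following is a minimal sketch for illustration only; the workload name, namespace, and values are hypothetical, and the autoscaling/v2 API assumes a cluster of v1.23 or later (older clusters use autoscaling/v2beta2).

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa          # hypothetical policy name
  namespace: default       # hypothetical namespace
spec:
  scaleTargetRef:          # the associated workload
    apiVersion: apps/v1
    kind: Deployment
    name: nginx            # hypothetical workload name
  minReplicas: 2           # Pod Range: minimum number of pods
  maxReplicas: 10          # Pod Range: maximum number of pods
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 50   # Desired Value: average CPU usage of 50%
```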
Prerequisites
To use HPA, install either of the following add-ons, which implement the metrics APIs (for details, see Support for metrics APIs). A quick way to check that the metrics APIs are served is shown after this list.
- metrics-server: collects metrics from the Summary API exposed by kubelet and provides resource usage metrics such as container CPU and memory usage.
  - For details about how to install metrics-server in an on-premises cluster, see metrics-server.
  - For details about how to install metrics-server in other types of clusters, see the official Kubernetes documentation. For an attached cluster, you can also install the metrics-server provided by the corresponding vendor.
- Prometheus: an open-source monitoring and alerting framework that collects metrics and provides both basic resource metrics and custom metrics.
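As a quick check (a sketch, not part of the official procedure), you can use kubectl to confirm that the metrics APIs referenced above are being served before creating an HPA policy:

```bash
# Resource metrics API (served by metrics-server)
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top pods -n kube-system

# Custom metrics API (present only if Prometheus is installed together with a metrics adapter)
kubectl get apiservice v1beta1.custom.metrics.k8s.io
```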
Constraints
- At least one pod must be available in the cluster. If no pods are available, a pod scale-out will be performed.
- If no metric collection add-on has been installed in the cluster, the workload scaling policy cannot take effect.
- Currently, the metrics-server add-on can be installed only in on-premises clusters to call the metrics API. More add-ons will be supported in the future.
Procedure
- Log in to the UCS console and access the cluster console.
- If the cluster is not added to any fleet, click the cluster name.
- If the cluster has been added to a fleet, click the fleet name. In the navigation pane, choose Clusters > Container Clusters.
- In the navigation pane, choose Workload Scaling. Then click Create HPA Policy in the upper right corner.
- Configure the parameters for the HPA policy.
Table 1 HPA policy parameters
- Policy Name: Enter a name for the policy.
- Namespace: Select the namespace that the workload belongs to.
- Associated Workload: Select the workload that the HPA policy is associated with.
- Pod Range: Enter the minimum and maximum numbers of pods. When the policy is triggered, the workload pods are scaled within this range.
- System Policy:
  - Metric: Select CPU usage or Memory usage.
    NOTE: Usage = CPU or memory used by the pods/CPU or memory requested by the pods
  - Desired Value: Enter the desired average resource usage. This parameter indicates the desired value of the selected metric.
    Number of pods required (rounded up) = Current metric value/Desired value x Number of current pods
    A worked example of this formula follows the table.
    NOTE: When calculating the number of pods to be added or reduced, the HPA policy uses the maximum number of pods in the last 5 minutes.
  - Tolerance Range: Enter the scale-in and scale-out thresholds. The default tolerance is 0.1, and the desired value must be within this range. If the metric value is greater than the scale-in threshold and less than the scale-out threshold, no scaling is triggered. This parameter is available only in clusters of v1.15 or later.
  NOTICE: You can configure multiple system policies.
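For a concrete sense of the formula above, the following worked example uses hypothetical numbers and assumes the standard Kubernetes tolerance semantics, where no scaling occurs while the ratio of the current value to the desired value stays within 1 ± tolerance:

```text
Desired Value  = 50% CPU usage
Current pods   = 2
Measured usage = 80%
Pods required  = ceil(80 / 50 x 2) = ceil(3.2) = 4   (scale out by 2 pods)
Tolerance 0.1  -> no scaling while usage stays between 45% and 55%
```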