How FederatedHPA Works
FederatedHPA can automatically scale the pods of a workload in or out in response to system metrics (CPU usage and memory usage) or custom metrics.
FederatedHPAs and scheduling policies can be used together to implement various functions. For example, after a FederatedHPA scales out the pods in your workload, you can configure a scheduling policy to schedule those pods to clusters with more resources. This overcomes the resource limitations of a single cluster and improves fault recovery.
How FederatedHPA Works
Figure 1 shows the working principle of FederatedHPA. The details are as follows:
- The HPA controller periodically requests metrics data of a workload from either the system metrics API or the custom metrics API.
- After receiving the metric query request, karmada-apiserver routes it to karmada-metrics-adapter, which was registered with karmada-apiserver through API registration.
- After receiving the request, karmada-metrics-adapter collects the metrics data of the workload.
- karmada-metrics-adapter returns calculated metrics data to the HPA controller.
- The HPA controller calculates the desired number of pods based on the returned metrics data and maintains the stability of workload scaling.
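The control loop above can be condensed into a single-iteration Python sketch. The names are hypothetical; `fetch_metric` stands in for the metric query routed through karmada-apiserver to karmada-metrics-adapter:

```python
import math

def reconcile_once(replicas, fetch_metric, desired_value):
    """One pass of a simplified HPA control loop."""
    current_value = fetch_metric()  # metrics query routed via karmada-apiserver
    # Desired pods = current pods x (current metric value / desired metric value)
    return math.ceil(replicas * current_value / desired_value)

# A workload at 100% CPU usage against a 50% target is doubled: 2 -> 4 pods.
print(reconcile_once(2, lambda: 100, 50))  # 4
```

The real controller repeats this periodically and, as described below, applies a stabilization window and a tolerance before acting on the result.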
How Do I Calculate Metrics Data?
There are system metrics and custom metrics. Their calculation methods are as follows:
- System metrics
There are two types of system metrics: CPU usage and memory usage. They can be queried and monitored through the metrics API. For example, to keep the CPU usage of a workload at a reasonable level, you can create a FederatedHPA for the workload based on the CPU usage metric.
NOTE:
Usage = (CPUs or memory used by all pods in a workload)/(CPUs or memory requested by those pods)
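The usage formula can be sketched as follows; the helper name and the millicore values are hypothetical:

```python
def workload_usage(used_per_pod, requested_per_pod):
    """Usage = resources used by all pods / resources requested by all pods."""
    return sum(used_per_pod) / sum(requested_per_pod)

# Three pods, each requesting 1000m CPU, currently using 400m, 600m, and 500m.
print(f"{workload_usage([400, 600, 500], [1000, 1000, 1000]):.0%}")  # 50%
```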
- Custom metrics
You can create a FederatedHPA for a workload based on custom metrics such as requests per second and writes per second. The HPA controller then queries for these custom metrics from a series of APIs.
If you set multiple desired metric values when creating a FederatedHPA, the HPA controller evaluates each metric separately and uses the scaling algorithm to determine the new workload scale based on each one. The largest scale is selected for the autoscale operation.
How Do I Calculate the Desired Number of Pods?
The HPA controller calculates the scaling ratio between the current metric value and the desired metric value, and then uses that ratio to derive the desired number of pods from the current number of pods.
- Current number of pods = Number of pods in the Ready state in all clusters
When calculating the desired number of pods, the HPA controller chooses the largest recommendation from the last five minutes. This prevents a new autoscaling operation from starting before the workload has finished responding to earlier ones.
- Desired number of pods = Current number of pods x (Current metric value/Desired metric value)
For example, if the current CPU usage is 100% and the desired CPU usage is 50%, the desired number of pods is twice the current number of pods.
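A minimal sketch of this ratio rule, extended to several metrics as described earlier (the function and values are hypothetical; the largest per-metric recommendation wins):

```python
import math

def desired_pods(current_pods, metrics):
    """metrics: list of (current_value, desired_value) pairs."""
    return max(math.ceil(current_pods * cur / des) for cur, des in metrics)

# CPU usage at 100% against a 50% target doubles the pods: 4 -> 8.
print(desired_pods(4, [(100, 50)]))            # 8
# A second metric recommending only ceil(4 x 60/80) = 3 does not win; CPU does.
print(desired_pods(4, [(100, 50), (60, 80)]))  # 8
```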
How Do I Ensure the Stability of Workload Scaling?
To ensure the stability of workload scaling, the HPA controller is designed to provide the following functions:
- Stabilization window
When the HPA controller detects that the metric data has reached the desired value (that is, the scaling condition is met), it continuously checks the metric data within the stabilization window. Scaling is performed only if the metric data keeps meeting the condition throughout the window. By default, the stabilization window is 0 seconds for a scale-out and 300 seconds for a scale-in, and both values can be changed. In practice, to avoid service jitter, scale-outs should be fast and scale-ins slow.
- Tolerance
Tolerance = abs(Current metric value/Desired metric value - 1)
abs indicates the absolute value. If the metric value change stays within the tolerance range, no scaling operation is triggered. The default tolerance is 0.1 and cannot be changed.
For example, with the default settings, a scale-out is triggered as soon as the metric value exceeds 1.1 times the desired value (the scale-out stabilization window is 0 seconds), and a scale-in is triggered when the metric value stays below 0.9 times the desired value for more than 300 seconds.
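The tolerance check can be sketched as follows; the function name and values are hypothetical:

```python
# No scaling is triggered while the metric stays within 10% of the desired
# value (the default tolerance of 0.1, which cannot be changed).

TOLERANCE = 0.1

def should_scale(current_value, desired_value):
    return abs(current_value / desired_value - 1) > TOLERANCE

print(should_scale(108, 100))  # False: within the tolerance band
print(should_scale(115, 100))  # True: 1.15x the desired value
```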