Overview
Why Workload Scaling?
Application traffic changes constantly, so the resources a container workload needs change with it. If resources are reserved for a workload based on its peak-hour requirements, a large amount of resources is wasted during off-peak hours. If a fixed resource threshold is set instead, applications may become abnormal once resource usage exceeds that threshold. In Kubernetes, a Horizontal Pod Autoscaler (HPA) can automatically scale pods in or out for workloads in a single cluster in response to metric changes, but the HPA does not apply to multi-cluster scenarios.
UCS provides automatic workload scaling for multi-cluster scenarios. Workloads can be scaled based on metric changes or on a schedule, which improves scaling flexibility and stability.
Advantages
UCS workload scaling has the following advantages:
- Multi-cluster: You can configure the same scaling policy for multiple clusters in the federation.
- High availability: Pods in your workload can be quickly scaled out at peak hours to ensure workload availability, or scaled in at off-peak hours to save resources.
- Multi-function: Pods in your workload can be scaled in or out based on metric changes or at regular intervals in complex scenarios.
- Multi-scenario: You can configure scaling policies for online services, large-scale computing, and deep learning training or inference on GPUs or shared GPUs.
Working Principles
UCS workload scaling is implemented by FederatedHPA and CronFederatedHPA, as shown in Figure 1.
- FederatedHPA can automatically scale pods in or out for workloads in response to system metrics or custom metrics. Scaling is triggered when an observed metric deviates from its configured target value, bringing the workload back toward that target (see the example manifests after this list).
- CronFederatedHPA can automatically scale pods in or out for workloads on a schedule. Scaling is triggered whenever a configured time arrives.
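For orientation, the sketch below shows what the two objects might look like when written as Karmada-style manifests (UCS cluster federation exposes the Karmada API; see Using the Karmada API in the API Reference). The autoscaling.karmada.io/v1alpha1 API version, field names, and the nginx Deployment are assumptions drawn from the open-source Karmada API rather than from UCS console output, so treat this as illustrative only.
```yaml
# FederatedHPA: metric-based scaling (sketch; Karmada autoscaling.karmada.io/v1alpha1 API assumed)
apiVersion: autoscaling.karmada.io/v1alpha1
kind: FederatedHPA
metadata:
  name: nginx-fhpa                 # hypothetical name
  namespace: default
spec:
  scaleTargetRef:                  # the federated workload to scale
    apiVersion: apps/v1
    kind: Deployment
    name: nginx                    # hypothetical Deployment
  minReplicas: 1
  maxReplicas: 10
  metrics:                         # same schema as the Kubernetes HPA v2 metrics field
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80     # keep average CPU utilization near 80%
---
# CronFederatedHPA: schedule-based scaling (sketch)
apiVersion: autoscaling.karmada.io/v1alpha1
kind: CronFederatedHPA
metadata:
  name: nginx-cronfhpa             # hypothetical name
  namespace: default
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx
  rules:
  - name: scale-up-morning         # hypothetical rule
    schedule: "0 8 * * *"          # cron expression: every day at 08:00
    targetReplicas: 10             # replica count to apply when the rule fires
```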
Constraints
- UCS scaling policies apply only to Deployments. For details about the comparisons among different types of workloads, see Workloads.
- UCS scaling policies only scale pods in or out for workloads. To schedule these pods to specific clusters, you need to configure scheduling policies (see the sketch below).
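As a rough illustration of the second constraint, the following sketch shows a Karmada-style propagation (scheduling) policy that pins a Deployment's pods to specific member clusters. The policy.karmada.io/v1alpha1 API, the nginx Deployment, and the member1/member2 cluster names are assumptions for illustration; in UCS, configure the equivalent scheduling policy through the console or the federation API.
```yaml
# Scheduling (propagation) policy sketch, assuming the Karmada policy.karmada.io/v1alpha1 API
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation          # hypothetical name
  namespace: default
spec:
  resourceSelectors:               # the federated resources this policy schedules
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx                    # hypothetical Deployment
  placement:
    clusterAffinity:
      clusterNames:                # member clusters that may receive the pods
      - member1                    # hypothetical member cluster names
      - member2
```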