Updated on 2025-08-15 GMT+08:00

Horizontal Scaling

A HorizontalPodAutoscaler (HPA) is a resource object that dynamically scales the number of pods in a Deployment based on configured metrics, so that the services running on the Deployment can adapt to metric changes. The HPA automatically increases or decreases the number of pods according to the configured policy to meet your requirements.

Constraints

System components that help run the pods occupy some of the underlying resources, such as vCPUs and memory. As a result, the pods' resource usage may never reach the expected limits, so the threshold specified in the HPA policy may never be crossed. To avoid this, refer to Reserved System Overhead.

Auto Scaling

  • If spec.metrics.resource.target.type is set to Utilization, you must specify resource requests when creating the workload.
  • When spec.metrics.resource.target.type is set to Utilization, the resource usage is calculated as follows: Resource usage = Used resources/Pod flavor (the resources available to the pod). You can determine the actual flavor of a pod by referring to Pod Flavor.
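The formula in the second bullet can be illustrated with a quick calculation. The flavor and usage figures below are hypothetical examples, not actual CCI pod specifications:

```python
# Illustrative check of the usage formula: Resource usage = Used resources / Pod flavor.
# The flavor values below are hypothetical, not actual CCI flavors.

def resource_utilization(used: float, flavor: float) -> float:
    """Return resource usage as a percentage of the pod flavor."""
    return used / flavor * 100

# A pod with a 2-vCPU, 4-GiB flavor currently using 1 vCPU and 3 GiB of memory:
cpu_usage = resource_utilization(1, 2)   # 50.0 (%)
mem_usage = resource_utilization(3, 4)   # 75.0 (%)
print(f"CPU: {cpu_usage:.0f}%, memory: {mem_usage:.0f}%")
```

With a 50% utilization target, the memory usage in this example (75%) would trigger a scale-out, while the CPU usage (50%) would not.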

A properly configured auto scaling policy specifies a metric, a threshold, and a scaling step. It eliminates the need to manually adjust resources in response to service changes and traffic bursts, reducing both manual effort and resource costs.

CCI 2.0 supports metric-based auto scaling. Workloads are scaled out or in based on vCPU or memory usage. You can specify a vCPU or memory usage threshold: if the usage rises above or falls below the threshold, pods are automatically added or removed.

  • Configure a metric-based auto scaling policy.
    1. Log in to the CCI 2.0 console.
    2. In the navigation pane, choose Workloads. On the Deployments tab, locate the target Deployment and click its name.
    3. On the Auto Scaling tab, click Create from YAML to configure an auto scaling policy.

      The following describes the auto scaling policy file format:

      • Resource description in the hpa.yaml file
        kind: HorizontalPodAutoscaler
        apiVersion: cci/v2
        metadata:
          name: nginx              # Name of the HorizontalPodAutoscaler
          namespace: test          # Namespace of the HorizontalPodAutoscaler
        spec:
          scaleTargetRef:         # Reference the resource to be automatically scaled.
            kind: Deployment      # Type of the target resource, for example, Deployment
            name: nginx           # Name of the target resource
            apiVersion: cci/v2    # Version of the target resource
          minReplicas: 1          # Minimum number of replicas for HPA scaling
          maxReplicas: 5          # Maximum number of replicas for HPA scaling
          metrics:
            - type: Resource                 # Resource metrics are used.
              resource:
                name: memory                 # Resource name, for example, cpu or memory
                target:
                  type: Utilization          # Metric type. The value can be Utilization (percentage) or AverageValue (absolute value).
                  averageUtilization: 50     # Target usage. For example, when the memory usage exceeds 50%, a scale-out is triggered.
          behavior:
            scaleUp:
              stabilizationWindowSeconds: 30  # Scale-out stabilization window, in seconds
              policies:
              - type: Pods           # Scale by a fixed number of pods.
                value: 1
                periodSeconds: 30    # At most one pod is added every 30 seconds.
            scaleDown:
              stabilizationWindowSeconds: 30  # Scale-in stabilization window, in seconds
              policies:
              - type: Percent        # Scale by a percentage of the current pods.
                value: 50
                periodSeconds: 30    # At most 50% of the pods are removed every 30 seconds.
      • Resource description in the hpa.json file
        The fields are the same as in the YAML example above. Note that JSON does not allow comments, so the field descriptions are omitted here.
        {
          "kind": "HorizontalPodAutoscaler",
          "apiVersion": "cci/v2",
          "metadata": {
            "name": "nginx",
            "namespace": "test"
          },
          "spec": {
            "scaleTargetRef": {
              "kind": "Deployment",
              "name": "nginx",
              "apiVersion": "cci/v2"
            },
            "minReplicas": 1,
            "maxReplicas": 5,
            "metrics": [
              {
                "type": "Resource",
                "resource": {
                  "name": "memory",
                  "target": {
                    "type": "Utilization",
                    "averageUtilization": 50
                  }
                }
              }
            ],
            "behavior": {
              "scaleUp": {
                "stabilizationWindowSeconds": 30,
                "policies": [
                  {
                    "type": "Pods",
                    "value": 1,
                    "periodSeconds": 30
                  }
                ]
              },
              "scaleDown": {
                "stabilizationWindowSeconds": 30,
                "policies": [
                  {
                    "type": "Percent",
                    "value": 50,
                    "periodSeconds": 30
                  }
                ]
              }
            }
          }
        }
    4. Click OK.
      You can view the auto scaling policy on the Auto Scaling tab.
      Figure 1 Auto scaling policy

      When the trigger condition is met, the auto scaling policy will be executed.
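The scaling decision the example policy describes can be sketched with the standard Kubernetes HPA formula: desired replicas = ceil(current replicas x current utilization / target utilization), clamped to the minReplicas/maxReplicas range. This is a simplified sketch; CCI's actual behavior also applies the stabilization windows and scaling policies configured above, which are not modeled here.

```python
import math

# Sketch of an HPA scaling decision using the standard Kubernetes formula:
#   desired = ceil(current * currentUtilization / target), clamped to [min, max].
# Stabilization windows and scale policies are intentionally not modeled.

def desired_replicas(current: int, current_util: float, target_util: float,
                     min_replicas: int, max_replicas: int) -> int:
    desired = math.ceil(current * current_util / target_util)
    return max(min_replicas, min(max_replicas, desired))

# With the example policy (target 50% memory usage, 1-5 replicas):
print(desired_replicas(2, 80, 50, 1, 5))   # 4: usage above target, scale out
print(desired_replicas(4, 20, 50, 1, 5))   # 2: usage below target, scale in
```

At 80% usage against a 50% target, two replicas are insufficient and the HPA scales out to four; at 20% usage, four replicas are more than needed and it scales in to two.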