Scheduling Pods to CCI
Overview
This section describes how to schedule workloads in a CCE cluster to CCI. Three methods are available: adding a label to the workload, specifying a profile on the console, and specifying a profile in YAML.
Constraints
- ScheduleProfile can manage a workload and schedule it to CCI only if the profile matches the workload's native labels. For example, labels added to a workload through ExtensionProfile cannot be matched by ScheduleProfile, so such a workload cannot be scheduled by ScheduleProfile.
- You can use labels to control whether pods are scheduled to CCI. Add the label before the workload is created. If you add the label to an existing workload, its running pods are not updated. In that case, select the workload and choose More > Redeploy to apply the change.
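For an existing workload, the console's More > Redeploy step has a rough CLI counterpart. The commands below are a sketch that assumes a Deployment named test in the default namespace and a cluster where the bursting add-on is installed:

```shell
# Add the scheduling label to an existing Deployment (assumed name: test).
kubectl label deployment test virtual-kubelet.io/burst-to-cci=auto -n default
# Restart the rollout so the running pods are recreated and pick up the label
# (the CLI counterpart of More > Redeploy on the console).
kubectl rollout restart deployment test -n default
```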
Scheduling Policies
| Scheduling Policy | Application Scenario |
| --- | --- |
| Forcible scheduling (enforce) | Workloads are forcibly scheduled to CCI. |
| Automatic scheduling (auto) | Workloads are scheduled to CCI based on the scoring results provided by the cluster scheduler. |
| Local priority scheduling (localPrefer) | Workloads are preferentially scheduled to the CCE cluster. If cluster resources are insufficient, workloads are elastically scheduled to CCI. |
| Scheduling disabled (off) | Workloads are not scheduled to CCI. |
Method 1: Using a Label
- Create the workload to be scheduled to CCI on the CCE cluster console. When creating the workload, select any scheduling policy except Scheduling disabled (off).
When creating a workload on the CCE console, you can select either bursting-node or virtual-kubelet. If you use CCI 1.0, select virtual-kubelet. If you use CCI 2.0, select bursting-node. Currently, CCI 2.0 is available only to whitelisted users. To use it, submit a service ticket.
- Edit the workload YAML on a CCE cluster node to schedule the workload to CCI. Install the bursting add-on, log in to a CCE cluster node, and add the virtual-kubelet.io/burst-to-cci label to the workload's YAML file:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: default
  labels:
    virtual-kubelet.io/burst-to-cci: 'auto'   # Schedules the workload to CCI.
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - image: 'nginx:perl'
          name: container-0
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              cpu: 250m
              memory: 512Mi
          volumeMounts: []
      imagePullSecrets:
        - name: default-secret
```
Table 1 Key parameters

| Parameter | Type | Description |
| --- | --- | --- |
| virtual-kubelet.io/burst-to-cci | String | Policy for scheduling CCE cluster workloads to CCI. The options are enforce (workloads are forcibly scheduled to CCI), auto (workloads are scheduled to CCI based on the scoring results provided by the cluster scheduler), localPrefer (workloads are preferentially scheduled to the CCE cluster and are elastically scheduled to CCI when cluster resources are insufficient), and off (workloads are not scheduled to CCI). |
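The label value can be any of the policies in Table 1. As a variation on the example above, the sketch below prefers the CCE cluster and bursts to CCI only when its resources run short; only the metadata changes, and the rest of the Deployment stays the same:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: default
  labels:
    virtual-kubelet.io/burst-to-cci: 'localPrefer'  # Prefer CCE nodes; burst to CCI on resource shortage.
```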
Method 2: Specifying a Profile on the Console
Prerequisites
- The CCE Cloud Bursting Engine for CCI add-on has been installed and the add-on version is 1.3.19 or later.
- Only CCE standard and CCE Turbo clusters that use the VPC network model are supported.
Procedure
- Log in to the CCE console and click the cluster name to go to the cluster console.
- In the navigation pane, choose Policies > CCI Scaling Policies.
- Click Create CCI Scaling Policy and configure the parameters.
Table 2 Parameters for creating a CCI scaling policy

| Parameter | Description |
| --- | --- |
| Policy Name | Enter a policy name. |
| Namespace | Select the namespace where the scheduling policy applies. You can select an existing namespace or create one. For details, see Creating a Namespace. |
| Workload | Enter a key and value or reference a workload label. |
| Scheduling Policy | Select a scheduling policy. Local priority scheduling: pods are preferentially scheduled to nodes in the current CCE cluster and are scheduled to CCI when node resources are insufficient. Force scheduling: all pods are scheduled to CCI. |
| Scale to | Local: set a pod limit for the current CCE cluster. CCI: set the maximum number of pods that can run on CCI. |
| Maximum Pods | Enter the maximum number of pods that can run in the CCE cluster or on CCI. |
| CCE Scale-in Priority | Value range: -100 to 100. A larger value indicates a higher priority. |
| CCI Scale-in Priority | Value range: -100 to 100. A larger value indicates a higher priority. |
| CCI Resource Pool | CCI 2.0 (bursting-node): the next-generation serverless resource pool. CCI 1.0 (virtual-kubelet): the existing serverless resource pool, which will soon be unavailable. |
- Click OK.
- In the navigation pane, choose Workloads and click Create Workload. In Advanced Settings > Labels and Annotations, add a pod label whose key and value are the same as those configured for Workload in step 3. For other parameters, see Creating a Workload.
- Click Create Workload.
Method 3: Specifying a Profile in YAML
- Log in to a CCE cluster node and create a profile YAML file.

```shell
vi profile.yaml
```
- Configure maxNum and scaleDownPriority for local to limit the maximum number of pods in a CCE cluster. The following is an example:
```yaml
apiVersion: scheduling.cci.io/v1
kind: ScheduleProfile
metadata:
  name: test-cci-profile
  namespace: default
spec:
  objectLabels:
    matchLabels:
      app: nginx
  strategy: localPrefer
  location:
    local:
      maxNum: 20          # maxNum can be configured either for local or cci.
      scaleDownPriority: 10
    cci: {}
```
- Configure maxNum and scaleDownPriority for cci to limit the maximum number of pods in CCI. The following is an example:
```yaml
apiVersion: scheduling.cci.io/v1
kind: ScheduleProfile
metadata:
  name: test-cci-profile
  namespace: default
spec:
  objectLabels:
    matchLabels:
      app: nginx
  strategy: localPrefer
  location:
    local: {}
    cci:
      maxNum: 20          # maxNum can be configured either for local or cci.
      scaleDownPriority: 10
```
Table 3 Key parameters

| Parameter | Type | Description |
| --- | --- | --- |
| strategy | String | Policy for scheduling CCE cluster workloads to CCI. The options are enforce (workloads are forcibly scheduled to CCI), auto (workloads are scheduled to CCI based on the scoring results provided by the cluster scheduler), and localPrefer (workloads are preferentially scheduled to the CCE cluster and are elastically scheduled to CCI when cluster resources are insufficient). |
| maxNum | int | Maximum number of pods. The value ranges from 0 to int32. |
| scaleDownPriority | int | Scale-in priority. A larger value indicates a higher priority. The value ranges from -100 to 100. |
- In the location field, configure local for CCE and cci for CCI to control the maximum number of pods and the scale-in priority.
- maxNum can be configured for either local or cci, but not for both.
- The scale-in priority is optional. If it is not specified, the default value is nil.
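For comparison, a profile that forces every matched pod to CCI needs no pod limits at all. The sketch below uses a hypothetical profile name (force-to-cci) and the enforce strategy from Table 3:

```yaml
apiVersion: scheduling.cci.io/v1
kind: ScheduleProfile
metadata:
  name: force-to-cci      # hypothetical name
  namespace: default
spec:
  objectLabels:
    matchLabels:
      app: nginx
  strategy: enforce       # all matched pods are scheduled to CCI
  location:
    local: {}
    cci: {}
```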
- Create the profile in the CCE cluster.

```shell
kubectl apply -f profile.yaml
```
- Create a Deployment whose pods carry the app: nginx label, so that they are matched by the profile's selector and associated with the profile.

```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: container-1
          image: nginx:latest
          imagePullPolicy: IfNotPresent
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              cpu: 250m
              memory: 512Mi
      imagePullSecrets:
        - name: default-secret
```
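After the Deployment is created, you can check the profile and where the pods actually landed. The commands below are a sketch using standard kubectl commands; the node names depend on your cluster:

```shell
# Confirm the profile exists.
kubectl get scheduleprofile test-cci-profile -n default
# List the pods with their nodes; pods that burst to CCI run on the
# bursting (virtual) node rather than on a regular CCE node.
kubectl get pod -n default -l app=nginx -o wide
```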
Table 4 Special scenarios

| Scenario | How to Schedule |
| --- | --- |
| Both a label and a profile are used to schedule a workload to CCI. | The label takes priority over the profile. For example, if the label's scheduling policy is off and the profile's is enforce, the workload is not scheduled to CCI. |
| Multiple profiles are specified for a pod. | A pod can have only one profile. If multiple profiles match a pod, the profile that matches the largest number of labels is used. If several profiles match an equal number of labels, the profile whose name comes first in alphabetical order is used. |
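The matching rule can be illustrated with two hypothetical profiles. For a pod labeled both app: nginx and env: prod, profile-a matches two of the pod's labels while profile-b matches only one, so profile-a is used:

```yaml
apiVersion: scheduling.cci.io/v1
kind: ScheduleProfile
metadata:
  name: profile-a         # matches two of the pod's labels -> selected
  namespace: default
spec:
  objectLabels:
    matchLabels:
      app: nginx
      env: prod
  strategy: localPrefer
  location:
    local:
      maxNum: 20
    cci: {}
---
apiVersion: scheduling.cci.io/v1
kind: ScheduleProfile
metadata:
  name: profile-b         # matches only app: nginx -> not selected
  namespace: default
spec:
  objectLabels:
    matchLabels:
      app: nginx
  strategy: enforce
  location:
    local: {}
    cci: {}
```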