Scheduling Pods to CCI
Overview
This section describes how to schedule workloads running in a CCE cluster to CCI when needed.
There are two methods of managing pods in a CCE cluster so that workloads can be scheduled to CCI: adding a label to the workload (Method 1) or specifying a ScheduleProfile (Method 2).
Constraints
- A ScheduleProfile can manage a workload and schedule it to CCI only if the ScheduleProfile matches the workload's native labels. For example, labels added to a workload through ExtensionProfile cannot be matched by a ScheduleProfile, so such a workload cannot be scheduled by the ScheduleProfile.
- If resources fail to be scheduled to CCI, the bursting node is locked, and no workloads can be scheduled to CCI for 30 minutes. You can use kubectl or the CCE cluster console to check the status of the bursting node. If the node is locked, you can manually unlock it.
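As a minimal sketch (assuming kubectl access to the cluster; the bursting node name and the exact lock indicator depend on your environment and add-on version), you can inspect the bursting node as follows:

# List the nodes in the cluster; the bursting (virtual) node is listed alongside regular CCE nodes.
kubectl get nodes

# Inspect the bursting node's conditions and taints to see whether it is currently locked.
# Replace the placeholder below with the actual node name shown by the previous command.
kubectl describe node <bursting-node-name>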
Scheduling Policies
| Scheduling Policy | Application Scenario |
|---|---|
| Forcible scheduling (enforce) | Workloads are forcibly scheduled to CCI. |
| Local priority scheduling (localPrefer) | Workloads are preferentially scheduled to the CCE cluster. If cluster resources are insufficient, workloads are elastically scheduled to CCI. |
| Disable scheduling (off) | Workloads are not scheduled to CCI. |
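The policy names in parentheses are also used as values of the virtual-kubelet.io/burst-to-cci label described in Method 1. As a minimal sketch (assuming the label accepts these policy names directly), the following metadata fragment selects local priority scheduling:

metadata:
  labels:
    virtual-kubelet.io/burst-to-cci: 'localPrefer'   # Prefer the CCE cluster; burst to CCI only when cluster resources are insufficient.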
Method 1: Using a Label
- Create the workload to be scheduled to CCI on the CCE cluster console. When creating the workload, select any scheduling policy except Disable scheduling.
When creating a workload on the CCE console, you can select either bursting-node or virtual-kubelet. If you use CCI 1.0, select virtual-kubelet. If you use CCI 2.0, select bursting-node. Currently, CCI 2.0 is available only to whitelisted users. To use the service, submit a service ticket.
- Edit the YAML file on a CCE cluster node to schedule the workload to CCI. Install the bursting add-on, log in to a CCE cluster node, and add the virtual-kubelet.io/burst-to-cci label to the workload's YAML file, as shown in the following example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: default
  labels:
    virtual-kubelet.io/burst-to-cci: 'auto'    # Schedules the workload to CCI.
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - image: 'nginx:perl'
          name: container-0
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              cpu: 250m
              memory: 512Mi
          volumeMounts: []
      imagePullSecrets:
        - name: default-secret
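After the label is added and the workload is created or updated, you can check where its pods are running. This is a minimal sketch; the name of the bursting node shown in the NODE column depends on your environment.

# Check which node each pod of the workload runs on.
# Pods that were scheduled to CCI show the bursting (virtual) node in the NODE column.
kubectl get pods -l app=test -o wide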
Method 2: Specifying a Profile
- Log in to a CCE cluster node and create a profile using a YAML file.
vi profile.yaml
- Configure maxNum and scaleDownPriority for local to limit the maximum number of pods in a CCE cluster. The following is an example:
apiVersion: scheduling.cci.io/v1
kind: ScheduleProfile
metadata:
  name: test-cci-profile
  namespace: default
spec:
  objectLabels:
    matchLabels:
      app: nginx
  strategy: localPrefer
  location:
    local:
      maxNum: 20    # maxNum can be configured either for local or cci.
      scaleDownPriority: 10
    cci: {}
- Configure maxNum and scaleDownPriority for cci to limit the maximum number of pods in CCI. The following is an example:
apiVersion: scheduling.cci.io/v1
kind: ScheduleProfile
metadata:
  name: test-cci-profile
  namespace: default
spec:
  objectLabels:
    matchLabels:
      app: nginx
  strategy: localPrefer
  location:
    local: {}
    cci:
      maxNum: 20    # maxNum can be configured either for local or cci.
      scaleDownPriority: 10
- strategy: scheduling policy. The value can be auto, enforce, or localPrefer. For details, see Scheduling Policies.
- location: contains maxNum and scaleDownPriority. maxNum indicates the maximum number of pods on the on-premises infrastructure (local) or the cloud (cci), and its value ranges from 0 to 32. scaleDownPriority indicates the pod scale-in priority, and its value ranges from -100 to 100.
- maxNum can be configured either for local or cci.
- The scale-in priority is optional. If it is not specified, the default value is nil.
- Create a profile for the CCE cluster.
kubectl apply -f profile.yaml
- Create a Deployment whose selector selects the pods labeled with app: nginx so that the pods are associated with the profile. After both objects are created, you can verify them using the commands shown after this procedure.
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: container-1
          image: nginx:latest
          imagePullPolicy: IfNotPresent
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              cpu: 250m
              memory: 512Mi
      imagePullSecrets:
        - name: default-secret
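The following commands are a minimal verification sketch. The file name nginx-deployment.yaml is an assumption for the Deployment above, and the name of the bursting node depends on your environment.

# Create the Deployment (assuming it was saved as nginx-deployment.yaml).
kubectl apply -f nginx-deployment.yaml

# Confirm that the ScheduleProfile defined in profile.yaml was created.
kubectl get -f profile.yaml

# Check where the pods are running. Pods that were scheduled to CCI show the
# bursting (virtual) node in the NODE column.
kubectl get pods -l app=nginx -o wide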
Table 1 Special scenarios

| Scenario | How to Schedule |
|---|---|
| Both a label and a profile are used to schedule the workload to CCI. | The scheduling priority of the label is higher than that of the profile. For example, if the scheduling policy of the label is off and that of the profile is enforce, the workload will not be scheduled to CCI. |
| Multiple profiles are specified for a pod. | A pod can be associated with only one profile. If multiple profiles match a pod, the profile that matches the largest number of labels is used. If several profiles match an equal number of labels, the profile whose name comes first in alphabetical order is used. |