Scheduling Pods to CCI
Overview
This section describes how you can schedule workloads to CCI when needed.
There are two ways to manage pods in a CCE cluster so that workloads can be scheduled to CCI: using a label or specifying a profile.
Constraints
- You can use ScheduleProfile to manage a workload and schedule it to CCI only when the workload's native labels are matched by the ScheduleProfile. For example, labels added to a workload through ExtensionProfile cannot be matched by ScheduleProfile, so such a workload cannot be scheduled by ScheduleProfile.
- If resources fail to be scheduled to CCI, the bursting node is locked for half an hour, during which workloads cannot be scheduled to CCI. You can use kubectl to check the status of the bursting node on the CCE cluster console. If the node is locked, you can manually unlock it (see the command sketch below).
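The lock status can be inspected with standard kubectl commands. The following is a minimal sketch, assuming the bursting node is visible as a regular Node object in the cluster; the node name is a placeholder, and the exact condition or taint that indicates locking depends on the bursting add-on version.

```bash
# List the nodes in the CCE cluster and locate the bursting (virtual) node.
kubectl get nodes -o wide

# Inspect the node's conditions, taints, and events for hints that it is currently locked.
kubectl describe node <bursting-node-name>
```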
Scheduling Policies

| Scheduling Policy | Application Scenario |
|---|---|
| Forcible scheduling (enforce) | Workloads are forcibly scheduled to CCI. |
| Local priority scheduling (localPrefer) | Workloads are preferentially scheduled to the CCE cluster. If cluster resources are insufficient, workloads are elastically scheduled to CCI. |
| Disable scheduling (off) | Workloads are not scheduled to CCI. |
Method 1: Using a Label
- Create the workload to be scheduled to CCI on the CCE cluster console. When creating the workload, select any scheduling policy except Disable scheduling.
NOTE:
When creating a workload on the CCE console, you can select either bursting-node or virtual-kubelet. If you use CCI 1.0, select virtual-kubelet. If you use CCI 2.0, select bursting-node. Currently, CCI 2.0 is available only to whitelisted users. To use the service, submit a service ticket.
- Edit the YAML file on a CCE cluster node to schedule the workload to CCI. Install the bursting add-on, log in to the CCE cluster node, and add the virtual-kubelet.io/burst-to-cci label to the workload's YAML file, as shown below.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
  namespace: default
  labels:
    virtual-kubelet.io/burst-to-cci: 'auto'  # Schedules the workload to CCI.
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - image: 'nginx:perl'
          name: container-0
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              cpu: 250m
              memory: 512Mi
          volumeMounts: []
      imagePullSecrets:
        - name: default-secret
```
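After the label is added, you can apply the manifest and check where the pods land. A minimal verification sketch, assuming the YAML above is saved as test-deployment.yaml (a hypothetical file name):

```bash
# Apply the Deployment that carries the virtual-kubelet.io/burst-to-cci label.
kubectl apply -f test-deployment.yaml

# The NODE column shows whether each pod was placed on the bursting node or on a regular CCE node.
kubectl get pods -n default -l app=test -o wide
```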
Method 2: Specifying a Profile
- Log in to a CCE cluster node and create a profile using a YAML file.
vi profile.yaml
- Configure maxNum and scaleDownPriority for local to limit the maximum number of pods in a CCE cluster. The following is an example:
```yaml
apiVersion: scheduling.cci.io/v1
kind: ScheduleProfile
metadata:
  name: test-cci-profile
  namespace: default
spec:
  objectLabels:
    matchLabels:
      app: nginx
  strategy: localPrefer
  location:
    local:
      maxNum: 20  # maxNum can be configured either for local or cci.
      scaleDownPriority: 10
    cci: {}
```
- Configure maxNum and scaleDownPriority for cci to limit the maximum number of pods in CCI. The following is an example:
```yaml
apiVersion: scheduling.cci.io/v1
kind: ScheduleProfile
metadata:
  name: test-cci-profile
  namespace: default
spec:
  objectLabels:
    matchLabels:
      app: nginx
  strategy: localPrefer
  location:
    local: {}
    cci:
      maxNum: 20  # maxNum can be configured either for local or cci.
      scaleDownPriority: 10
```
NOTE:
- strategy: the scheduling policy. The value can be auto, enforce, or localPrefer. For details, see Scheduling Policies.
- location: contains maxNum and scaleDownPriority. maxNum indicates the maximum number of pods on the on-premises infrastructure (local) or the cloud (cci), and its value ranges from 0 to 32. scaleDownPriority indicates the pod scale-in priority, and its value ranges from -100 to 100.
- maxNum can be configured either for local or for cci.
- The scale-in priority is optional. If it is not specified, the default value is nil.
- Create a profile for the CCE cluster.
kubectl apply -f profile.yaml
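You can then confirm that the profile exists and inspect its status. A sketch, assuming the bursting add-on registers the ScheduleProfile CRD under the plural resource name scheduleprofiles (an assumption; adjust the resource name if your add-on registers it differently):

```bash
# Query the profile created above (metadata.name: test-cci-profile); the plural resource name is assumed.
kubectl get scheduleprofiles.scheduling.cci.io test-cci-profile -n default -o yaml
```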
- Create a Deployment and use the selector to select pods labeled with app: nginx so that the pods are associated with the profile.
```yaml
kind: Deployment
apiVersion: apps/v1
metadata:
  name: nginx
spec:
  replicas: 10
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: container-1
          image: nginx:latest
          imagePullPolicy: IfNotPresent
          resources:
            requests:
              cpu: 250m
              memory: 512Mi
            limits:
              cpu: 250m
              memory: 512Mi
      imagePullSecrets:
        - name: default-secret
```
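Once the Deployment is applied, the localPrefer strategy and the maxNum limit you configured determine how the 10 replicas are split between the CCE cluster and CCI. A minimal sketch for observing the result, assuming the manifest above is saved as nginx-deployment.yaml (a hypothetical file name):

```bash
# Create the Deployment whose pods are matched by the profile's objectLabels (app: nginx).
kubectl apply -f nginx-deployment.yaml

# The NODE column shows which replicas stayed in the CCE cluster and which were burst to CCI.
kubectl get pods -l app=nginx -o wide
```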
Table 1 Special scenarios

| Scenario | How to Schedule |
|---|---|
| Both a label and a profile are used to schedule the workload to CCI. | The scheduling priority of the label is higher than that of the profile. For example, if the scheduling policy of the label is off and the scheduling policy of the profile is enforce, the workload is not scheduled to CCI. |
| Multiple profiles are specified for a pod. | A pod can have only one profile. If a pod matches multiple profiles, the profile that matches the largest number of labels is used. If multiple profiles match an equal number of labels, the profile whose name comes first in alphabetical order is used. |