- What's New
- Function Overview
- Service Overview
- Billing
- Getting Started
- User Guide
- Best Practices
- Developer Guide
- Overview
- Using Native kubectl (Recommended)
- Namespace and Network
- Pod
- Label
- Deployment
- EIPPool
- EIP
- Pod Resource Monitoring Metric
- Collecting Pod Logs
- Managing Network Access Through Service and Ingress
- Using PersistentVolumeClaim to Apply for Persistent Storage
- ConfigMap and Secret
- Creating a Workload Using Job and Cron Job
- YAML Syntax
- API Reference
- Before You Start
- Calling APIs
- Getting Started
- Proprietary APIs
- Kubernetes APIs
- ConfigMap
- Pod
- StorageClass
- Service
- Deployment
- Querying All Deployments
- Deleting All Deployments in a Namespace
- Querying Deployments in a Namespace
- Creating a Deployment
- Deleting a Deployment
- Querying a Deployment
- Updating a Deployment
- Replacing a Deployment
- Querying the Scaling Operation of a Specified Deployment
- Updating the Scaling Operation of a Specified Deployment
- Replacing the Scaling Operation of a Specified Deployment
- Querying the Status of a Deployment
- Ingress
- OpenAPIv2
- VolcanoJob
- Namespace
- ClusterRole
- Secret
- Endpoint
- ResourceQuota
- CronJob
- API groups
- Querying API Versions
- Querying All APIs of v1
- Querying an APIGroupList
- Querying APIGroup (/apis/apps)
- Querying APIs of apps/v1
- Querying an APIGroup (/apis/batch)
- Querying an APIGroup (/apis/batch.volcano.sh)
- Querying All APIs of batch.volcano.sh/v1alpha1
- Querying All APIs of batch/v1
- Querying All APIs of batch/v1beta1
- Querying an APIGroup (/apis/crd.yangtse.cni)
- Querying All APIs of crd.yangtse.cni/v1
- Querying an APIGroup (/apis/extensions)
- Querying All APIs of extensions/v1beta1
- Querying an APIGroup (/apis/metrics.k8s.io)
- Querying All APIs of metrics.k8s.io/v1beta1
- Querying an APIGroup (/apis/networking.cci.io)
- Querying All APIs of networking.cci.io/v1beta1
- Querying an APIGroup (/apis/rbac.authorization.k8s.io)
- Querying All APIs of rbac.authorization.k8s.io/v1
- Event
- PersistentVolumeClaim
- RoleBinding
- StatefulSet
- Job
- ReplicaSet
- Data Structure
- Permissions Policies and Supported Actions
- Appendix
- Out-of-Date APIs
- Change History
- FAQs
- Product Consulting
- Basic Concept FAQs
- What Is CCI?
- What Are the Differences Between Cloud Container Instance and Cloud Container Engine?
- What Is an Environment Variable?
- What Is a Service?
- What Is Mcore?
- What Are the Relationships Between Images, Containers, and Workloads?
- What Are Kata Containers?
- Can kubectl Be Used to Manage Container Instances?
- What Are Core-Hours in CCI Resource Packages?
- Workload Abnormalities
- Container Workload FAQs
- Why Does Service Performance Not Meet Expectations?
- How Do I Set the Quantity of Instances (Pods)?
- How Do I Check My Resource Quotas?
- How Do I Set Probes for a Workload?
- How Do I Configure an Auto Scaling Policy?
- What Do I Do If the Workload Created from the sample Image Fails to Run?
- How Do I View Pods After I Call the API to Delete a Deployment?
- Why Is an Error Reported When a GPU-Related Operation Is Performed in a Container Accessed Using exec?
- Can I Start a Container in Privileged Mode When Running the systemctl Command in a Container in a CCI Cluster?
- Why Does the Intel oneAPI Toolkit Fail to Run VASP Tasks Occasionally?
- Why Are Pods Evicted?
- Why Is the Workload Web-Terminal Not Displayed on the Console?
- Why Are Fees Continuously Deducted After I Delete a Workload?
- Image Repository FAQs
- Can I Export Public Images?
- How Do I Create a Container Image?
- How Do I Upload Images?
- Does CCI Provide Base Container Images for Download?
- Does CCI Administrator Have the Permission to Upload Image Packages?
- What Permissions Are Required for Uploading Image Packages for CCI?
- What Do I Do If Authentication Is Required During Image Push?
- Network Management FAQs
- How Do I View the VPC CIDR Block?
- Does CCI Support Load Balancing?
- How Do I Configure the DNS Service on CCI?
- Does CCI Support InfiniBand (IB) Networks?
- How Do I Access a Container from a Public Network?
- How Do I Access a Public Network from a Container?
- What Do I Do If Access to a Workload from a Public Network Fails?
- What Do I Do If Error 504 Is Reported When I Access a Workload?
- What Do I Do If the Connection Times Out?
- Storage Management FAQs
- Log Collection
- Account
- SDK Reference
- Videos
- General Reference
Quick Start
Overview
The bursting add-on functions as a virtual kubelet to connect Kubernetes clusters to APIs of other platforms. This add-on is mainly used to extend Kubernetes APIs to serverless container services such as Huawei Cloud CCI.
With this add-on, you can schedule Deployments, StatefulSets, Jobs, and CronJobs running in CCE clusters to CCI during peak hours, which avoids the extra resource consumption that would otherwise be caused by scaling out the cluster.
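For example, bursting is controlled per workload through a scheduling policy set on the pod template. The following is a minimal sketch of a Deployment that requests forced scheduling to CCI. The virtual-kubelet.io/burst-to-cci label and its enforce value reflect one common way the policy is expressed, and the names, image, and resource sizes are placeholders; confirm the exact label and supported values in Scheduling Pods to CCI for your add-on version.
```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-burst                      # example workload name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-burst
  template:
    metadata:
      labels:
        app: nginx-burst
        # Bursting policy label (assumed here; see Scheduling Pods to CCI):
        # off | localPrefer | auto | enforce
        virtual-kubelet.io/burst-to-cci: "enforce"
    spec:
      containers:
      - name: nginx
        image: nginx:latest              # example image
        resources:
          requests:
            cpu: "500m"
            memory: "1Gi"
          limits:                        # CCI generally expects requests to equal limits
            cpu: "500m"
            memory: "1Gi"
        readinessProbe:                  # HTTP probe; TCP probes are not supported for
          httpGet:                       # pods scheduled to CCI (see the CAUTION below)
            path: /
            port: 80
```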
Constraints
- Only CCE standard and CCE Turbo clusters that use the VPC network mode are supported. Arm clusters are not supported. If the cluster contains Arm nodes, add-on instances will not be deployed on them.
- DaemonSets and pods that use the HostNetwork mode cannot be scheduled to CCI.
- The subnet where the cluster resides cannot overlap with 10.247.0.0/16; otherwise, it will conflict with the Service CIDR block of the CCI namespace.
- Currently, Volcano cannot be used to schedule pods with cloud storage volumes mounted to CCI.
Precautions
- Before using the add-on, go to the CCI console and grant CCI the permissions to use CCE.
- After the add-on is installed, a namespace named cce-burst-{Cluster ID} will be created in CCI and managed by the add-on. Do not use this namespace when manually creating pods in CCI.
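If kubectl is already configured for CCI (see Using Native kubectl in the Developer Guide), you can verify that the add-on has created this namespace. The command below is a minimal check; the namespace suffix is your cluster ID.
```
# List CCI namespaces and look for the one created by the bursting add-on.
kubectl get namespaces | grep cce-burst-
```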
Installing the Add-on
- Log in to the CCE console.
- Click the name of the target CCE cluster to go to the cluster console.
- In the navigation pane, choose Add-ons.
- Select the CCE Cloud Bursting Engine for CCI add-on and click Install.
- Configure the add-on parameters.
Table 1 Add-on parameters

Parameter: Version
Description: Add-on version. There is a mapping between add-on versions and CCE cluster versions. For more details, see "Change History" in CCE Cloud Bursting Engine for CCI.

Parameter: Specifications
Description: Number of pods required for running the add-on.
- If you select Preset, you can select Single or HA.
- If you select Custom, you can modify the number of replicas, vCPUs, and memory of each add-on component as required.
NOTE:
- The bursting add-on 1.5.2 or later uses more node resources. Reserve sufficient schedulable pod capacity before upgrading the add-on.
- Single (only one pod for the add-on): There must be a node with at least seven schedulable pods. If Networking is enabled, eight schedulable pods are required.
- HA (two pods for the add-on): There must be two nodes, each with at least seven schedulable pods (14 in total). If Networking is enabled, eight schedulable pods are required on each node (16 in total).
- The resource usage of the add-on varies depending on the workloads scaled to CCI. The pods, Secrets, ConfigMaps, PVs, and PVCs requested by your services occupy VM resources. You are advised to evaluate the service usage and request VMs based on the following specifications:
  - 1,000 pods and 1,000 ConfigMaps (300 KB each): nodes with 2 vCPUs and 4 GiB of memory
  - 2,000 pods and 2,000 ConfigMaps: nodes with 4 vCPUs and 8 GiB of memory
  - 4,000 pods and 4,000 ConfigMaps: nodes with 8 vCPUs and 16 GiB of memory

Parameter: Networking
Description: If this option is enabled, pods in the CCE cluster can communicate with pods in CCI through Services (see the example Service after this table). The proxy component is automatically deployed when the add-on is installed. For details, see Networking.
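When Networking is enabled, access between CCE and CCI pods still goes through ordinary Kubernetes Services. As a minimal sketch (the Service name, port, and selector are placeholders), a ClusterIP Service in the CCE cluster selects the workload's pods by label, regardless of whether a given pod runs on a cluster node or has been scheduled to CCI:
```
apiVersion: v1
kind: Service
metadata:
  name: nginx-burst                # example Service name
spec:
  type: ClusterIP
  selector:
    app: nginx-burst               # matches the pod label of the bursted workload
  ports:
  - port: 80                       # port exposed by the Service
    targetPort: 80                 # container port of the pods
```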
Creating a Workload
- Log in to the CCE console.
- Click the name of the target CCE cluster to go to the cluster console.
- In the navigation pane, choose Workloads.
- Click Create Workload. For details, see Creating a Workload.
- Specify basic information. Set Burst to CCI to Force scheduling. For more information about scheduling policies, see Scheduling Pods to CCI.
CAUTION:
When a workload in a CCE cluster is scheduled to CCI, TCP probes cannot be used for health checks.
- Configure the container parameters.
- Click Create Workload.
- On the Workloads page, click the name of the created workload to go to the workload details page.
- View the node where the workload is running. If the workload is running on a CCI node, it has been scheduled to CCI.
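You can also check this from the CCE cluster with kubectl; the namespace below is a placeholder. Pods that were scheduled to CCI are reported as running on the virtual node created by the bursting add-on rather than on a regular worker node (the virtual node's exact name depends on the add-on version).
```
# Show which node each pod landed on.
kubectl get pods -n <namespace> -o wide
# List the cluster's nodes, including the virtual bursting node.
kubectl get nodes
```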
Uninstalling the Add-on
- Log in to the CCE console.
- Click the name of the target CCE cluster to go to the cluster console.
- In the navigation pane, choose Add-ons.
- Select the CCE Cloud Bursting Engine for CCI add-on and click Uninstall.
Table 2 Special scenarios for uninstalling the add-on

Scenario: The CCE cluster from which the bursting add-on needs to be uninstalled has no nodes.
Symptom: The bursting add-on fails to be uninstalled.
Description: When the bursting add-on is uninstalled from a cluster, a job for clearing resources is started in the cluster. To ensure that this job can be started, the cluster must contain at least one schedulable node.

Scenario: The CCE cluster is deleted, but the bursting add-on has not been uninstalled.
Symptom: Residual resources remain in the add-on's namespace on CCI. If these resources are not free, they will continue to generate expenditures.
Description: Because the cluster was deleted, the resource clearing job was not executed. You can manually clear the namespace and the residual resources (see the cleanup example after this table).
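The following is a minimal cleanup sketch, assuming kubectl is configured against CCI (see Using Native kubectl in the Developer Guide) and that the residual namespace follows the cce-burst-{Cluster ID} naming described above; confirm the namespace name on the CCI console before deleting anything.
```
# Inspect what is left in the namespace that the add-on created for the deleted cluster.
kubectl get all -n cce-burst-<cluster-id>
# Delete the namespace together with its residual namespaced resources.
kubectl delete namespace cce-burst-<cluster-id>
```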
For more information about the bursting add-on, see CCE Cloud Bursting Engine for CCI.