Performing Graceful Rolling Upgrade for CCI Applications
Scenario
When you deploy a workload in CCI to run an application, the application is exposed through a LoadBalancer Service or an ingress and is connected to a dedicated ELB load balancer so that access traffic reaches the containers directly. When a rolling upgrade or auto scaling is performed on the application, the pods may briefly fail to work with ELB and 5xx errors may occur. This section describes how to configure container probes and the minimum readiness time to achieve graceful upgrades and scaling.
Procedure
The following uses an Nginx Deployment as an example.
- On the CCI console, choose Workloads > Deployments in the navigation pane, and click Create from Image in the upper right corner.
Figure 1 Creating a Deployment
- In the Container Settings area, click Use Image to select an image.
- Click Advanced Settings for the image, choose Health Check > Application Readiness Probe, and configure the probe (an equivalent YAML sketch follows the note below).
Figure 2 Configuring the application readiness probe
NOTE:
The probe checks whether your container is ready. If the container is not ready, requests will not be forwarded to the container.
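If you manage the workload as YAML instead of on the console, the probe corresponds to a readinessProbe on the container. The following is a minimal sketch for the Nginx example; the path, port, and timing values are assumptions and should match your application:
```yaml
# Fragment of the pod template's container list (path, port, and timings are assumed values).
containers:
  - name: nginx
    image: nginx:latest
    readinessProbe:
      httpGet:                 # probe the HTTP endpoint served by the container
        path: /
        port: 80
      initialDelaySeconds: 5   # wait before the first check
      periodSeconds: 5         # interval between checks
      failureThreshold: 3      # mark the container not ready after 3 consecutive failures
```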
- Expand Lifecycle and configure Pre-Stop Processing for the container (an equivalent YAML sketch follows the note below).
Figure 3 Configuring lifecycle parameters
NOTE:
This configuration ensures that the container can continue to serve external requests while it is shutting down.
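In YAML, this step corresponds to a preStop lifecycle hook on the container. A common pattern, sketched below, is to sleep before the process is terminated so that the ELB backend member can be removed and in-flight requests can drain; the 30-second value is an assumption and should be tuned to your load balancer configuration:
```yaml
# Fragment of the pod template's container list (the 30 s sleep is an assumed value).
containers:
  - name: nginx
    image: nginx:latest
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "sleep 30"]   # delay termination so traffic can drain
```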
- Click Next: Configure Access Settings and configure settings as shown in Figure 4.
- Click Next and complete the Deployment creation.
- Configure the minimum readiness time.
A pod is considered available only when the minimum readiness time is exceeded without any of its containers crashing.
In the upper right corner of the Deployments page, click Create YAML and configure the minimum readiness time as shown below.
Figure 5 Configuring the minimum readiness time
NOTE:
- The recommended value of minReadySeconds is the expected startup time of the service container plus the time it takes for the ELB backend member to take effect after the ELB service delivers it.
- The value of minReadySeconds must be smaller than the sleep duration configured in Pre-Stop Processing, so that the new container is ready before the old container stops and exits.
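For reference, the following sketch shows where minReadySeconds sits in the Deployment spec; the 20-second value is an assumption and must stay smaller than the pre-stop sleep configured earlier:
```yaml
# Deployment-level sketch (20 s is an assumed value).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  minReadySeconds: 20   # must stay smaller than the pre-stop sleep duration
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          # the readiness probe and pre-stop hook from the earlier steps go here
```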
- Test the application upgrade and auto scaling.
Prepare a client outside the cluster, and configure the detection script detection_script.sh with the following content (100.85.125.90:7552 indicates the public network address for accessing the Service):
#!/bin/bash
# Repeatedly request the Service and stop as soon as a response is not 200 OK.
for (( ; ; ))
do
    curl -I 100.85.125.90:7552 | grep "200 OK"
    if [ $? -ne 0 ]; then
        echo "response error!"
        exit 1
    fi
done
- Run the detection script (bash detection_script.sh) and trigger a rolling upgrade of the application on the CCI console, for example, by changing the container specifications.
Figure 6 Modifying container specifications
If access to the application is not interrupted and all responses return 200 OK, the graceful upgrade is successful.