
Using Services to Implement Simple Grayscale Release and Blue-Green Deployment

Updated on 2025-01-08 GMT+08:00

To implement grayscale release for a CCE cluster, you typically need to deploy additional open-source tools, such as Nginx Ingress, in the cluster, or deploy your services to a service mesh. These solutions can be complex to set up. If your grayscale release requirements are simple and you do not want to introduce extra add-ons or complex configurations, you can follow this section to implement simple grayscale release and blue-green deployment based on native Kubernetes features.

Principles

Users usually deploy services with Kubernetes objects such as Deployments and StatefulSets. Each workload manages a group of pods. The following figure uses a Deployment as an example.

Generally, a Service is created for each workload. The Service uses a selector to match backend pods so that other Services, or clients outside the cluster, can access the pods backing the Service. To expose pods for external access, set the Service type to LoadBalancer, so that an ELB load balancer functions as the traffic entry.

  • Grayscale release principles

    Take a Deployment as an example. In most cases, a Service is created for each Deployment, but Kubernetes does not require Services and Deployments to correspond one-to-one. A Service uses a selector to match backend pods, so if pods of different Deployments match the same selector, one Service corresponds to multiple Deployment versions. You can then adjust the replica counts of the different versions to change their traffic weights and achieve grayscale release. The following figure shows the process:

  • Blue-green deployment principles

    Take a Deployment as an example. Two Deployments of different versions have been deployed in the cluster, and their pods carry labels with the same key but different values to distinguish the versions. A Service uses a selector to select the pods of one version. By changing the value of the version label in the Service selector, you change which pods back the Service, switching all traffic from one version to the other at once. The following figure shows the process:
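The replica-weighting idea behind grayscale release can be sketched offline. Assuming the Service distributes requests evenly across all matched pods (the version values below mirror the examples in this section), a 4:1 replica split yields a 4:1 traffic split:

```shell
# Simulated backend pods matched by the selector app=nginx: 4 replicas of v1, 1 of v2.
pods=(v1 v1 v1 v1 v2)
# 100 evenly balanced requests land on the pods in turn; count hits per version.
for i in {1..100}; do
  echo "${pods[i % ${#pods[@]}]}"
done | sort | uniq -c
```

The counts come out 80 for v1 and 20 for v2, matching the 4:1 replica ratio.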

Prerequisites

Two versions of the Nginx image, v1 and v2, have been uploaded to SWR. Their welcome pages display Nginx-v1 and Nginx-v2, respectively.

Resource Creation

You can use YAML to deploy Deployments and Services in either of the following ways:

  • On the Create Deployment page, click Create YAML on the right and edit the YAML file in the window.
  • Save the sample YAML in this section as a file and create the resources from it with kubectl, for example, by running the kubectl create -f xxx.yaml command.

Step 1: Deploy Services of Two Versions

Two versions of Nginx services are deployed in the cluster to provide external access through ELB.

  1. Create a Deployment of the first version. The following uses nginx-v1 as an example. Example YAML:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-v1
    spec:
      replicas: 2               # Number of replicas of the Deployment, that is, the number of pods
      selector:                 # Label selector
        matchLabels:
          app: nginx
          version: v1
      template:
        metadata:
          labels:               # Pod label
            app: nginx
            version: v1
        spec:
          containers:
          - image: {your_repository}/nginx:v1 # The image used by the container is nginx:v1.
            name: container-0
            resources:
              limits:
                cpu: 100m
                memory: 200Mi
              requests:
                cpu: 100m
                memory: 200Mi
          imagePullSecrets:
          - name: default-secret

  2. Create a Deployment of the second version. The following uses nginx-v2 as an example. Example YAML:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-v2
    spec:
      replicas: 2               # Number of replicas of the Deployment, that is, the number of pods
      selector:                 # Label selector
        matchLabels:
          app: nginx
          version: v2
      template:
        metadata:
          labels:               # Pod label
            app: nginx
            version: v2
        spec:
          containers:
          - image: {your_repository}/nginx:v2   # The image used by the container is nginx:v2.
            name: container-0
            resources:
              limits:
                cpu: 100m
                memory: 200Mi
              requests:
                cpu: 100m
                memory: 200Mi
          imagePullSecrets:
          - name: default-secret

    You can log in to the CCE console to view the deployment status.
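The same check can also be done with kubectl (a sketch; the names and labels match the example YAML above, and a configured cluster connection is assumed):

```shell
kubectl get deployments nginx-v1 nginx-v2   # both Deployments should show READY 2/2
kubectl get pods -l app=nginx -L version    # -L adds the version label as a column
```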

Step 2: Implement Grayscale Release

  1. Create a LoadBalancer Service for the Deployments. Do not specify a version in the selector, so that the Service selects the pods of both Deployment versions. Example YAML:

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kubernetes.io/elb.id: 586c97da-a47c-467c-a615-bd25a20de39c    # ID of the ELB load balancer. Replace it with the actual value.
      name: nginx
    spec: 
      ports:
      - name: service0
        port: 80
        protocol: TCP
        targetPort: 80
      selector:             # The selector does not contain version information.
        app: nginx
      type: LoadBalancer    # Service type (LoadBalancer)

  2. Run the following command to test the access:

    for i in {1..10}; do curl <EXTERNAL_IP>; done;

    <EXTERNAL_IP> indicates the IP address of the ELB load balancer.

    The command output is as follows (half of the responses are from the v1 Deployment and the other half from v2):

    Nginx-v2
    Nginx-v1
    Nginx-v1
    Nginx-v1
    Nginx-v2
    Nginx-v1
    Nginx-v2
    Nginx-v1
    Nginx-v2
    Nginx-v2

  3. Use the console or kubectl to adjust the number of replicas of each Deployment: set v1 to 4 replicas and v2 to 1.

    kubectl scale deployment/nginx-v1 --replicas=4

    kubectl scale deployment/nginx-v2 --replicas=1

  4. Run the following command to test the access again:

    for i in {1..10}; do curl <EXTERNAL_IP>; done;

    <EXTERNAL_IP> indicates the IP address of the ELB load balancer.

    In the command output, only two of the 10 responses are from the v2 version. The response ratio between v1 and v2 matches the replica ratio of the two versions, that is, 4:1. Grayscale release is implemented by controlling the replica counts of the different versions.

    Nginx-v1
    Nginx-v1
    Nginx-v1
    Nginx-v1
    Nginx-v2
    Nginx-v1
    Nginx-v2
    Nginx-v1
    Nginx-v1
    Nginx-v1
    NOTE:

    If the ratio of v1 to v2 responses is not exactly 4:1, send more requests, for example, 20. In theory, the more requests you send, the closer the observed v1:v2 response ratio gets to 4:1.
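Rather than reading the response lines by eye, you can count responses per version by piping the curl loop through sort and uniq. A sketch, simulated here with printf so the pipeline can be tried without a cluster:

```shell
# Simulated responses (40 x v1, 10 x v2). Against a live ELB, replace the brace group with:
#   for i in {1..50}; do curl -s <EXTERNAL_IP>; done
{ printf 'Nginx-v1\n%.0s' {1..40}; printf 'Nginx-v2\n%.0s' {1..10}; } | sort | uniq -c
```

The output shows 40 Nginx-v1 and 10 Nginx-v2, a 4:1 split.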

Step 3: Implement Blue-Green Deployment

  1. Create a LoadBalancer Service for the deployed Deployments and specify v1 in the selector. Example YAML:

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kubernetes.io/elb.id: 586c97da-a47c-467c-a615-bd25a20de39c    # ID of the ELB load balancer. Replace it with the actual value.
      name: nginx
    spec:
      ports:
      - name: service0
        port: 80
        protocol: TCP
        targetPort: 80
      selector:             # Set the version to v1 in the selector.
        app: nginx
        version: v1
      type: LoadBalancer    # Service type (LoadBalancer)

  2. Run the following command to test the access:

    for i in {1..10}; do curl <EXTERNAL_IP>; done;

    <EXTERNAL_IP> indicates the IP address of the ELB load balancer.

    The command output is as follows (all responses are from the v1 version):

    Nginx-v1
    Nginx-v1
    Nginx-v1
    Nginx-v1
    Nginx-v1
    Nginx-v1
    Nginx-v1
    Nginx-v1
    Nginx-v1
    Nginx-v1

  3. Use the console or kubectl to modify the selector of the Service so that the v2 version is selected.

    kubectl patch service nginx -p '{"spec":{"selector":{"version":"v2"}}}'

  4. Run the following command to test the access again:

    for i in {1..10}; do curl <EXTERNAL_IP>; done;

    <EXTERNAL_IP> indicates the IP address of the ELB load balancer.

    The returned results show that all responses are from the v2 version. The blue-green deployment is successfully implemented.

    Nginx-v2
    Nginx-v2
    Nginx-v2
    Nginx-v2
    Nginx-v2
    Nginx-v2
    Nginx-v2
    Nginx-v2
    Nginx-v2
    Nginx-v2
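If the v2 version misbehaves, blue-green deployment makes rollback a single selector change (a sketch; the Service name matches the example above, and a configured cluster connection is assumed):

```shell
kubectl patch service nginx -p '{"spec":{"selector":{"version":"v1"}}}'   # switch traffic back to v1
kubectl get service nginx -o jsonpath='{.spec.selector}{"\n"}'            # confirm the active selector
```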
