
Implementation Procedure

Preparing a CCE Workload

  1. Buy a cluster.

    1. Log in to the CCE console, and buy a CCE cluster (VPC network model) or Turbo cluster on the Clusters page. Select CCE Cluster and set Network Model to VPC network. For details, see Buying a CCE Cluster.
    2. After the cluster is created, record the container CIDR block.
    3. Add this CIDR block in the Routes area of a dedicated gateway.
      1. Log in to the APIG console, and choose Gateways in the navigation pane.
      2. Click the gateway name to go to the details page.
      3. Add the container CIDR block in the Routes area.

  2. Create a workload.

    1. On the Clusters page of the CCE console, click the cluster name to go to the details page.
    2. In the navigation pane, choose Workloads.
    3. Click Create Workload. Set Workload Type to Deployment. For details, see the CCE User Guide.

      In the Advanced Settings > Labels and Annotations area, set pod labels that will later be used to switch workload and service versions. In this example, set app=deployment-demo and version=v1. If you create a workload by importing a YAML file, add the pod labels in that file. For details about pod labels, see Pod Labels and Annotations.

      Add pod labels in a YAML file:

      spec:
        replicas: 2
        selector:
          matchLabels:
            app: deployment-demo
            version: v1
        template:
          metadata:
            creationTimestamp: null
            labels:
              app: deployment-demo
              version: v1
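
      For reference, the fragment above can sit inside a complete Deployment manifest similar to the following minimal sketch. The container name, image, and port are placeholder assumptions, not values from this guide; the container port should match the backend service port used later (80 in this example).

      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: deployment-demo
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: deployment-demo
            version: v1
        template:
          metadata:
            labels:
              app: deployment-demo
              version: v1
          spec:
            containers:
              - name: container-1        # placeholder container name
                image: nginx:latest      # placeholder image; replace with your service image
                ports:
                  - containerPort: 80    # should match the backend service port (80 in this example)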

Method 1: Opening a CCE Workload by Creating a Load Balance Channel

  1. Create a load balance channel.

    1. Go to the APIG console, and choose Gateways in the navigation pane.
    2. Choose API Management > API Policies.
    3. On the Load Balance Channels tab, click Create Load Balance Channel.
      1. Set the basic information.
        Table 1 Basic information parameters

        Name: Enter a name that conforms to specific rules to facilitate search. In this example, enter VPC_demo.
        Port: Container port of a workload for opening services. Set this parameter to 80, the default HTTP port.
        Routing Algorithm: Select WRR (weighted round robin). Requests are forwarded to the selected cloud servers based on their weights.
        Type: Select Microservice.

      2. Configure microservice information.
        Table 2 Microservice configuration

        Microservice Type: Cloud Container Engine (CCE) is always selected.
        Cluster: Select the purchased cluster.
        Namespace: Select a namespace in the cluster. In this example, select default.
        Workload Type: Select Deployment. This parameter must be the same as the type of the created workload.
        Service Label Key: Select the pod label key app of the created workload.
        Service Label Value: Select the corresponding label value deployment-demo.

      3. Configure a server group.
        Table 3 Server group configuration

        Server Group Name: Enter server_group_v1.
        Weight: Enter 1.
        Backend Service Port: Enter 80. This must be the same as the container port in the workload.
        Description: Enter "Server group with version v1".
        Tag: Select the pod label version=v1 of the created workload.

      4. Configure health check.
        Table 4 Health check configuration

        Protocol: Default: TCP.
        Check Port: Backend server port in the channel.
        Healthy Threshold: Default: 2. This is the number of consecutive successful checks required for a cloud server to be considered healthy.
        Unhealthy Threshold: Default: 5. This is the number of consecutive failed checks required for a cloud server to be considered unhealthy.
        Timeout (s): Default: 5. This is the timeout used to determine whether a health check has failed.
        Interval (s): Default: 10. This is the interval between consecutive checks.

      5. Click Finish.

        In the load balance channel list, click a channel name to view details.

  2. Open an API.

    1. Create an API group.
      1. Choose API Management > API Groups.
      2. Click Create API Group, and choose Create Directly.
      3. Configure group information and click OK.
    2. Create an API and bind the preceding load balance channel to it.
      1. Click the group name to go to the details page. On the APIs tab, click Create.
      2. Configure frontend information and click Next.
        Table 5 Frontend configuration

        API Name: Enter a name that conforms to specific rules to facilitate search.
        Group: Select the preceding API group.
        URL:
          Method: Request method of the API. Set this parameter to ANY.
          Protocol: Request protocol of the API. Set this parameter to HTTPS.
          Subdomain Name: The system automatically allocates a subdomain name to each API group for internal testing. The subdomain name can be accessed 1000 times a day.
          Path: Path for requesting the API.
        Gateway Response: Select a response to be displayed if the gateway fails to process an API request. Default: default.
        Matching: Select Prefix match.
        Authentication Mode: Select None. None is not recommended for actual services because all users will be granted access to the API.

      3. Configure backend information and click Next.
        Table 6 Parameters for defining an HTTP/HTTPS backend service

        Load Balance Channel: Determine whether the backend service will be accessed using a load balance channel. For this example, select Configure.
        URL:
          Method: Request method of the API. Set this parameter to ANY.
          Protocol: Set this parameter to HTTP.
          Load Balance Channel: Select the created channel.
          Path: Path of the backend service.

      4. Define the response and click Finish.
    3. Debug the API.

      On the APIs tab, click Debug to open the debugging page, and then click the red Debug button. If status code 200 is returned in the response, the debugging is successful; go to the next step. Otherwise, rectify the fault indicated by the error message.

    4. Publish the API.

      On the APIs tab, click Publish, retain the default environment RELEASE, and click OK. If the exclamation mark in the upper left corner of the Publish button disappears, the publishing is successful; go to the next step. Otherwise, rectify the fault indicated by the error message.

  3. Call the API.

    1. Bind independent domain names to the group of this API.

      On the group details page, click the Group Information tab. The debugging domain name is only used for development and testing and can be accessed 1000 times a day. Bind independent domain names to expose APIs in the group.

      Click Bind Independent Domain Name to bind registered public domain names. For details, see Binding a Domain Name.

    2. Copy the URL of the API.

      On the APIs tab, copy the API URL. Open a browser and enter the URL. When the defined success response is displayed, the invocation is successful.

      Figure 1 Copying the URL

      Now, the CCE workload is opened by creating a load balance channel.

Method 2: Opening a CCE Workload by Importing It

  1. Import a CCE workload.

    1. Go to the APIG console, and choose Gateways in the navigation pane.
    2. Choose API Management > API Groups.
    3. Choose Create API Group > Import CCE Workload.
      1. Enter information about the CCE workload to import.
        Table 7 Workload information

        Group: Default: New group.
        Cluster: Select the purchased cluster.
        Namespace: Select a namespace in the cluster. In this example, select default.
        Workload Type: Select Deployment. This parameter must be the same as the type of the created workload.
        Service Label Key: Select the pod label key app of the created workload.
        Service Label Value: Select the corresponding label value deployment-demo.
        Tag: Another pod label version=v1 of the workload is automatically selected.

      2. Configure API information.
        Table 8 API information

        Protocol: API request protocol. HTTPS is selected by default.
        Request Path: API request path for prefix match. Default: /. In this example, retain the default value.
        Port: Enter 80. This must be the same as the container port in the workload.
        Authentication Mode: Default: None.
        CORS: Disabled by default.
        Timeout (ms): Backend timeout. Default: 5000.

    4. Click OK. The CCE workload is imported, with an API group, API, and load balance channel generated.

  2. View the generated API and load balance channel.

    1. View the generated API.
      1. Click the API group name, and then view the API name, request method, and publishing status on the APIs tab.
      2. Click the Backend Configuration tab and view the bound load balance channel.
    2. View the generated load balance channel.
      1. Choose API Management > API Policies.
      2. On the Load Balance Channels tab, click the channel name to view details.
    3. Check that this load balance channel is the one bound to the API, and then go to the next step. If it is not, repeat 1.

  3. Open the API.

    Since importing a CCE workload already creates an API group and API, you only need to publish the API in an environment.
    1. Debug the API.

      On the APIs tab, click Debug to open the debugging page, and then click the red Debug button. If status code 200 is returned in the response, the debugging is successful; go to the next step.

    2. Publish the API.

      On the APIs tab, click Publish, retain the default environment RELEASE, and click OK. If the exclamation mark in the upper left corner of the Publish button disappears, the publishing is successful; go to the next step.

  4. Call the API.

    1. Bind independent domain names to the group of this API.

      On the group details page, click the Group Information tab. The debugging domain name is only used for development and testing and can be accessed 1000 times a day. Bind independent domain names to expose APIs in the group.

      Click Bind Independent Domain Name to bind registered public domain names. For details, see Binding a Domain Name.

    2. Copy the URL of the API.

      On the APIs tab, copy the API URL. Open a browser and enter the URL. When the defined success response is displayed, the invocation is successful.

      Figure 2 Copying the URL

      Now, the CCE workload has been opened by importing it.

(Optional) Configuring Workload Labels for Grayscale Release

Grayscale release is a service release policy that gradually switches traffic from an early version to a later version by adjusting the traffic distribution weight, so that the service can be verified during release and upgrade. If the later version meets expectations, increase its traffic percentage and decrease that of the early version. Repeat this process until the later version carries 100% of the traffic and the early version carries none. At that point, traffic has been fully switched to the later version.

Figure 3 Grayscale release principle

For CCE workloads, grayscale release is implemented using pod label selectors. This lets you quickly roll out and verify new features and switch which servers process the traffic. For details, see Using Services to Implement Simple Grayscale Release and Blue-Green Deployment.

The following describes how to smoothly switch traffic from V1 to V2 through grayscale release.

  1. Create another workload and set a pod label whose app value is the same as that of the preceding workload. For details, see the preceding workload creation procedure.

    On the workload creation page, go to the Advanced Settings > Labels and Annotations area, and set app=deployment-demo and version=v2. If you create a workload by importing a YAML file, add pod labels in this file.
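
    If you import a YAML file, the pod label section mirrors the version v1 manifest. A minimal sketch follows, assuming the new workload is named deployment-demo2 (the name used later in this example); adjust the rest of the manifest to your service:

      metadata:
        name: deployment-demo2
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: deployment-demo
            version: v2
        template:
          metadata:
            labels:
              app: deployment-demo
              version: v2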

  2. For the server group with pod label version=v1, adjust the traffic weight.

    1. On the APIG console, choose Gateways in the navigation pane.
    2. Choose API Management > API Policies.
    3. On the Load Balance Channels tab, click the name of the created channel.
    4. In the Backend Server Address area, click Modify.
    5. Change the weight to 100, and click OK.

      Weight is the percentage of traffic to be forwarded. All traffic will be forwarded to the pod IP addresses in server group server_group_v1.

  3. Create a server group with pod label version=v2, then set the traffic weight.

    1. In the Backend Server Address area, click Create Server Group.
      Table 9 Server group configuration

      Server Group Name: Enter server_group_v2.
      Weight: Enter 1.
      Backend Service Port: Enter 80.
      Tag: Select pod label version=v2.

    2. Click OK.

  4. Refresh the backend server addresses.

    Refresh the page for the backend server addresses. The load balance channel automatically monitors the pod IP addresses of the workload and dynamically adds the addresses as backend server addresses. As shown in the following figure, tags app=deployment-demo and version=v2 automatically match the pod IP addresses (backend server addresses) of the workload.

    Figure 4 Pod IP addresses automatically matched

    100/101 of the traffic (the server group weight divided by the total weight) is distributed to server_group_v1, and the remaining 1/101 to server_group_v2 of the later version.

    Figure 5 Click Modify in the upper right of the page.
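
    In general, the share of traffic a server group receives is its weight divided by the sum of all server group weights in the channel. With the weights configured above, the split is roughly:

      \frac{w_{v1}}{w_{v1} + w_{v2}} = \frac{100}{100 + 1} \approx 99\%, \qquad
      \frac{w_{v2}}{w_{v1} + w_{v2}} = \frac{1}{100 + 1} \approx 1\%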

  5. Check that the new features released to V2 through grayscale release are running stably.

    If the new version meets expectations, go to 6. Otherwise, the release of the new features fails.

  6. Adjust the weights of server groups for different versions.

    Gradually decrease the weight of server_group_v1 and increase that of server_group_v2. Repeat 5 and 6 until the weight of server_group_v1 becomes 0 and that of server_group_v2 reaches 100.

    As shown in the preceding figure, all requests are now forwarded to server_group_v2. The new features have been switched from workload deployment-demo (version=v1) to deployment-demo2 (version=v2) through grayscale release. (You can adjust the traffic weights to meet service requirements.)

  7. Delete the backend server group server_group_v1 of version=v1.

    Now all traffic has been switched to the backend server group of version=v2. You can delete the server group of version=v1.

    1. Go to the load balance channel details page on the APIG console, and delete all IP addresses of the version=v1 server group in the Backend Server Address area.
    2. Click Delete on the right of this area to delete the server group of version=v1.

      The backend server group server_group_v2 of version=v2 is kept.