
Deploying a Deployment (Nginx)

You can use images to quickly create a single-pod workload that can be accessed from public networks. This section describes how to use CCE to quickly deploy an Nginx application and manage its lifecycle.

Prerequisites

You have created a CCE cluster that contains a node with 4 vCPUs and 8 GiB of memory, and an EIP has been bound to the node.

A cluster is a logical group of cloud servers that run workloads. Each cloud server is a node in the cluster.

For details on how to create a cluster, see Creating a Kubernetes Cluster.

Nginx Overview

Nginx is a lightweight web server. On CCE, you can quickly set up an Nginx web server.

This section uses the Nginx application as an example to describe how to create a workload. The creation takes about 5 minutes.

After Nginx is created, you can access the Nginx web page.

Figure 1 Accessing the Nginx web page

Creating Nginx on the CCE Console

The following is the procedure for creating a containerized workload from a container image.

  1. Log in to the CCE console.
  2. Click the name of the target cluster to access the cluster console.
  3. In the navigation pane, choose Workloads. Then, click Create Workload.
  4. Configure the following parameters and keep the default values for other parameters:

    Basic Info

    • Workload Type: Select Deployment.
    • Workload Name: Set it to nginx.
    • Namespace: Select default.
    • Pods: Set the number of pods to 1.

    Container Settings

    In the Container Information area, click Basic Info and click Select Image. In the dialog box displayed, click the Open Source Images tab, search for nginx, and select the nginx image.

    Figure 2 Selecting the nginx image

    Service Settings

    Click the plus sign (+) to create a Service for accessing the workload from an external network. This example shows how to create a LoadBalancer Service. Configure the following parameters in the window that slides out from the right:

    • Service Name: Enter nginx. This is the name of the Service exposed to external networks.
    • Service Type: Select LoadBalancer.
    • Service Affinity: Retain the default value.
    • Load Balancer: If a load balancer is available, select an existing load balancer. If not, select Auto create to create one.
    • Ports:
      • Protocol: Select TCP.
      • Service Port: In this example, set this parameter to 8080. The load balancer uses this port to create a listener and provide an entry for external traffic.
      • Container Port: the port on which the application listens. In this example, set this parameter to 80, because the Nginx image listens on port 80. If another application is used, the container port must be the same as the listening port provided by the application.
    Figure 3 Creating a Service

  5. Click Create Workload.

    Wait until the workload is created.

    The created workload will be displayed on the Deployments tab.

    Figure 4 Workload created successfully

  6. Obtain the external access address of Nginx.

    Click the workload name nginx to go to its details page. On the Access Mode tab, view the IP addresses of Nginx. The public IP address is the external access address.
    Figure 5 Obtaining the external access address

  7. In the address box of a browser, enter <External access address>:<Service port> to access the application. The Service port is the one configured in Ports (8080 in this example). You can also verify access from a terminal, as shown in the example below.

    Figure 6 Accessing Nginx
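
    If you prefer the command line, the following commands offer a quick check that the workload is reachable. The address below is a placeholder; replace it with the external access address obtained in the previous step, and 8080 is the Service port configured in Ports.

    # Replace <external-access-address> with the public IP address obtained above.
    curl -I http://<external-access-address>:8080

    # A response such as "HTTP/1.1 200 OK" with a "Server: nginx" header indicates that
    # Nginx is serving requests through the load balancer.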

Creating Nginx Using kubectl

This section describes how to use kubectl to create a Deployment and expose the Deployment to the Internet through a LoadBalancer Service.

  1. Use kubectl to connect to the cluster. For details, see Connecting to a Cluster Using kubectl.
  2. Create a description file named nginx-deployment.yaml. nginx-deployment.yaml is an example file name. You can rename it as required.

    vi nginx-deployment.yaml

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 1                # Number of pods
      selector:
        matchLabels:
          app: nginx             # Must match the labels in the pod template below
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx:alpine  # Container image
            name: nginx          # Container name
          imagePullSecrets:      # Secret for pulling the image; default-secret is preset in CCE namespaces
          - name: default-secret

  3. Create a Deployment.

    kubectl create -f nginx-deployment.yaml

    If the following information is displayed, the Deployment is being created.

    deployment.apps/nginx created

    Check the Deployment.

    kubectl get deployment

    If the following information is displayed, the Deployment is running.

    NAME           READY     UP-TO-DATE   AVAILABLE   AGE 
    nginx          1/1       1            1           4m5s

    Parameters:

    • NAME: specifies the name of a workload.
    • READY: indicates the number of ready pods out of the expected number of pods for the workload.
    • UP-TO-DATE: indicates the number of replicas that have been updated.
    • AVAILABLE: indicates the number of available pods.
    • AGE: indicates how long the Deployment has been running.

  4. Create a description file named nginx-elb-svc.yaml. Set the value of selector to the value of matchLabels in nginx-deployment.yaml (app: nginx in this example) to associate the Service with the backend pods.

    For details about the parameters in the following example, see Using kubectl to Create a Service (Automatically Creating a Load Balancer).
    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kubernetes.io/elb.class: union     # Shared load balancer
        # Automatically creates a public network load balancer with 5 Mbit/s of bandwidth
        kubernetes.io/elb.autocreate:
            '{
                "type": "public",
                "bandwidth_name": "cce-bandwidth",
                "bandwidth_chargemode": "bandwidth",
                "bandwidth_size": 5,
                "bandwidth_sharetype": "PER",
                "eip_type": "5_bgp"
            }'
      labels:
        app: nginx
      name: nginx
    spec:
      ports:
      - name: service0
        port: 80           # Port used by the load balancer listener
        protocol: TCP
        targetPort: 80     # Port on which the container listens
      selector:
        app: nginx         # Must match the pod labels in nginx-deployment.yaml
      type: LoadBalancer

  5. Create a Service.

    kubectl create -f nginx-elb-svc.yaml

    If information similar to the following is displayed, the Service has been created.

    service/nginx created

    Check the Service.

    kubectl get svc

    If information similar to the following is displayed, the access type has been configured, and the workload is accessible.

    NAME         TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
    kubernetes   ClusterIP      10.247.0.1       <none>        443/TCP        3d
    nginx        LoadBalancer   10.247.130.196   **.**.**.**   80:31540/TCP   51s

  6. Enter the URL (for example, **.**.**.**:80) in the address box of a browser. **.**.**.** indicates the IP address of the load balancer, and 80 indicates the access port displayed on the CCE console. You can also verify access from the command line, as shown in the example below.

    Nginx is accessible.

    Figure 7 Accessing Nginx through the LoadBalancer Service
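
    As a supplementary check, you can confirm from the command line that the Service selector matches the pods created by the Deployment and that the load balancer forwards traffic. The address below is a placeholder; replace it with the EXTERNAL-IP value shown by kubectl get svc.

    # List the endpoints of the Service. A non-empty list means the selector (app: nginx)
    # matches the pods created by the Deployment.
    kubectl get endpoints nginx

    # Replace <EXTERNAL-IP> with the EXTERNAL-IP value of the nginx Service.
    curl -I http://<EXTERNAL-IP>:80

    # An "HTTP/1.1 200 OK" response with a "Server: nginx" header confirms that the
    # LoadBalancer Service is routing traffic to the Nginx pod.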