Updated on 2024-09-29 GMT+08:00

Deploying an NGINX Deployment in a CCE Cluster

Deployments are a type of workload in Kubernetes. They are ideal for applications that do not require data consistency and durability, such as web and application servers. The pods in a Deployment are independent of one another and run identically, so if one pod fails, requests are redirected to the remaining healthy pods and the failed pod is replaced with a new one, keeping services uninterrupted. You can also easily adjust the number of pods to match real-time service requirements, for example by adding pods during peak hours to handle increased traffic.
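
Because the pods in a Deployment are interchangeable, scaling is a single declarative change. As a minimal sketch (assuming the Deployment is named nginx, as in the example created later in this section):

    kubectl scale deployment/nginx --replicas=3

This asks Kubernetes to run three identical pods; the Deployment controller adds or removes pods until the actual count matches.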

This section uses the lightweight web server NGINX as an example to describe how to deploy a Deployment in a CCE cluster.

Procedure

• Preparations
  Register a Huawei account and top up the account.

• Step 1: Enable CCE for the First Time and Perform Authorization
  Obtain the required permissions for your account when you use the CCE service in the current region for the first time.

• Step 2: Create a Cluster
  Create a CCE cluster to provide Kubernetes services.

• Step 3: Create a Node Pool and Nodes in the Cluster
  Create nodes in the cluster to run your containerized applications.

• Step 4: Create a Workload and Access It
  Create a workload in the cluster to run your containers and create a Service for the workload to enable Internet access.

• Follow-up Operations: Releasing Resources
  To avoid additional charges, delete the cluster resources promptly if you no longer require them after practice.

Preparations

Before creating a cluster, register a Huawei account and top up the account.

Step 1: Enable CCE for the First Time and Perform Authorization

CCE works closely with multiple cloud services to support computing, storage, networking, and monitoring functions. When you log in to the CCE console for the first time, CCE automatically requests permissions to access those cloud services in the region where you run your applications. If you have been authorized in the current region, skip this step.

  1. Log in to the CCE console using your HUAWEI ID.
  2. In the upper left corner of the displayed page, click the region name and select a region.
  3. When you log in to the CCE console in a region for the first time, wait for the Authorization Statement dialog box to appear, carefully read the statement, and click OK.

    After you agree to delegate the permissions, CCE creates an agency named cce_admin_trust in IAM to perform operations on other cloud resources and grants it the Tenant Administrator permissions. Tenant Administrator has the permissions on all cloud services except IAM. The permissions are used to call the cloud services on which CCE depends. The delegation takes effect only in the current region. You can go to the IAM console, choose Agencies, and click cce_admin_trust to view the delegation records of each region. For details, see Account Delegation.

    CCE may fail to run as expected if the Tenant Administrator permissions are not assigned. Therefore, do not delete or modify the cce_admin_trust agency when using CCE.

Step 2: Create a Cluster

  1. Log in to the CCE console.

    • If you have no clusters, click Buy Cluster on the wizard page.
    • If you have CCE clusters, choose Clusters in the navigation pane and click Buy Cluster in the upper right corner.

  2. Configure basic cluster parameters.

    Only mandatory parameters are described in this example. You can keep the default values for most other parameters. For details about the parameter configurations, see Buying a CCE Standard/Turbo Cluster.

    • Type
      Example: CCE Standard Cluster
      Description: CCE provides highly reliable, secure, business-class container services for diverse needs. Select CCE Standard Cluster or CCE Turbo Cluster as required.
      • CCE standard clusters provide highly reliable, secure, business-class containers.
      • CCE Turbo clusters use high-performance cloud native networks and provide cloud native hybrid scheduling. Such clusters have improved resource utilization and can be used in more scenarios.
      For details about cluster types, see Comparison Between Cluster Types.

    • Billing Mode
      Example: Pay-per-use
      Description: Select a billing mode for the cluster.
      • Yearly/Monthly: a prepaid billing mode in which resources are billed by service duration. This cost-effective mode is ideal when the duration of resource usage is predictable. If you choose this mode, set the desired duration and decide whether to enable automatic renewal (monthly subscriptions renew every month, and yearly subscriptions renew every year).
      • Pay-per-use: a postpaid billing mode in which resources are billed by actual usage duration. You can provision or delete resources at any time.
      For details, see Billing Modes.

    • Cluster Name
      Example: cce-test
      Description: The name of the cluster to be created.

    • Enterprise Project
      Example: default
      Description: Enterprise projects facilitate project-level management and grouping of cloud resources and users. For more details, see Enterprise Management. This parameter is displayed only for enterprise users who have enabled Enterprise Project Management.

    • Cluster Version
      Example: The recommended version, for example, v1.29
      Description: CCE offers multiple Kubernetes versions. Select the latest commercial release for improved stability, reliability, and new functionality.

    • Cluster Scale
      Example: Nodes: 50
      Description: The maximum number of worker nodes that the cluster can manage. Configure it as required. After the cluster is created, this scale can only be increased, not decreased.

    • Master Nodes
      Example: 3 Masters
      Description: The number of master nodes. Master nodes are automatically hosted by CCE and run Kubernetes cluster management components such as kube-apiserver, kube-controller-manager, and kube-scheduler. This parameter cannot be changed after the cluster is created.
      • 3 Masters: Three master nodes will be created for high cluster availability.
      • Single: Only one master node will be created in your cluster.

  3. Configure network parameters.

    • VPC
      Example: vpc-cce
      Description: The VPC for the cluster. If no VPC is available, click Create VPC to create one and then click the refresh icon. For details about how to create a VPC, see Creating a VPC and Subnet.

    • Node Subnet
      Example: subnet-cce
      Description: The subnet for the nodes. Nodes in the cluster are assigned IP addresses from this subnet.

    • Network Model
      Example: VPC network
      Description: Select VPC network or Tunnel network. The VPC network model is selected by default. For details about the differences between container network models, see Container Network.

    • Container CIDR Block
      Example: 10.0.0.0/16
      Description: The CIDR block used by containers, which determines the maximum number of pods that can run in the cluster.

    • Service CIDR Block
      Example: 10.247.0.0/16
      Description: The ClusterIP CIDR block for the cluster, which determines the maximum number of Services that can be created. It cannot be changed after being configured.
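
    As a rough sizing example based on the values above: a /16 container CIDR provides 2^(32-16) = 65,536 container IP addresses. In the VPC network model, each node reserves a fixed-size slice of this block for its pods. If, for example, each node reserves 128 addresses (this per-node allocation is configurable), the block covers up to 65,536 / 128 = 512 nodes' worth of pod addresses.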

  4. Click Next: Select Add-on. On the page displayed, select the add-ons to be installed during cluster creation.

    This example only includes the mandatory add-ons that are automatically installed.

  5. Click Next: Add-on Configuration. There is no need to set up the add-ons that are installed by default.
  6. Click Next: Confirm configuration, confirm the resources on the page displayed, and click Submit.

    Wait until the cluster is created. It takes about 5 to 10 minutes to create a cluster.

    The created cluster will be displayed on the Clusters page, and there are zero nodes in it.

    Figure 1 Cluster created

Step 3: Create a Node Pool and Nodes in the Cluster

  1. Log in to the CCE console and click the cluster name to access the cluster console.
  2. In the navigation pane, choose Nodes. On the Node Pools tab, click Create Node Pool in the upper right corner.
  3. Configure the node pool parameters.

    Only mandatory parameters are described in this example. You can keep the default values for most other parameters. For details about the configuration parameters, see Creating a Node Pool.

    • Node Type
      Example: Elastic Cloud Server (VM)
      Description: Select a node type based on service requirements. The available node flavors are then automatically displayed in the Specifications area for you to choose from.

    • Specifications
      Example: 4 vCPUs | 8 GiB
      Description: Select a node flavor that best fits your service needs. For optimal performance of the cluster components, use a node with at least 4 vCPUs and 8 GiB of memory.

    • Container Engine
      Example: containerd
      Description: Select a container engine based on service requirements. For details about the differences between container engines, see Container Engines.

    • OS
      Example: Huawei Cloud EulerOS 2.0
      Description: Select an OS for the node.

    • Login Mode
      Example: A custom password
      Description: Select a login mode for the node.
      • Password: Enter and confirm a password for logging in to the node. The default username is root. Keep the password secure; if you forget it, the system cannot retrieve it.
      • Key Pair: Select a key pair for logging in to the node and select the check box to acknowledge that you have obtained the key file and that you cannot log in to the node without it. A key pair is used for identity authentication when you remotely log in to a node. If no key pair is available, click Create Key Pair to create one. For details, see Creating a Key Pair on the Management Console.

  4. Configure the parameters in Storage Settings and Network Settings. In this example, keep the default values. You only need to select I have confirmed that the security group rules have been correctly configured for nodes to communicate with each other and click Next: Confirm.

  5. Check the node specifications, read the instructions on the page, and click Submit.
  6. The created node pool contains zero nodes by default. Locate the row containing the target node pool and click Scaling.

  7. Set the number of nodes to be added to 2, which means two more nodes will be created in the node pool.

  8. Wait until the nodes are created. It takes about 5 to 10 minutes to complete the node creation.

Step 4: Create a Workload and Access It

You can deploy a workload using the console or kubectl. This section uses an NGINX image as an example.

  1. In the navigation pane, choose Workloads. Then, click Create Workload in the upper right corner.
  2. Configure the basic information about the workload.

    In this example, configure the following parameters and keep the default values for other parameters. (For details about the configuration parameters, see Creating a Deployment.)

    • Workload Type
      Example: Deployment
      Description: In a Kubernetes cluster, a workload is an application that is running in the cluster. Various built-in workload types are available, each designed for different functions and application scenarios. For details about workload types, see Workloads.

    • Workload Name
      Example: nginx
      Description: Enter a workload name.

    • Namespace
      Example: default
      Description: In a Kubernetes cluster, a namespace is a conceptual grouping of resources or objects, and resources in one namespace are isolated from those in other namespaces. After a cluster is created, a namespace named default is generated by default, and you can use it directly.

    • Pods
      Example: 1
      Description: Enter the number of pods.

  3. Configure container parameters.

    Configure the following parameters and keep the default values for other parameters.

    • Image Name
      Example: The nginx image of the latest version
      Description: Click Select Image. In the displayed dialog box, click the Open Source Images tab and select a public image.

    • CPU Quota
      Example: Request: 0.25 cores; Limit: 0.25 cores
      Description:
      • Request: the amount of CPU pre-allocated to the container. The default value is 0.25 cores.
      • Limit: the maximum amount of CPU the container can use. The default value is the same as the request. If the limit is greater than the request, the container can temporarily use more CPU than requested in burst scenarios, up to the limit.
      For details, see Configuring Container Specifications.

    • Memory Quota
      Example: Request: 512 MiB; Limit: 512 MiB
      Description:
      • Request: the amount of memory pre-allocated to the container. The default value is 512 MiB.
      • Limit: the maximum amount of memory the container can use. The default value is the same as the request. If the limit is greater than the request, the container can temporarily use more memory than requested in burst scenarios, up to the limit.
      For details, see Configuring Container Specifications.
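
    In a Kubernetes pod spec, these console settings correspond to the resources section of the container definition. A minimal sketch of the equivalent YAML (0.25 cores is written as 250m in Kubernetes notation):

      resources:
        requests:
          cpu: 250m        # CPU pre-allocated to the container
          memory: 512Mi    # memory pre-allocated to the container
        limits:
          cpu: 250m        # maximum CPU the container can use
          memory: 512Mi    # maximum memory the container can use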

  4. Configure access settings.

    In the Service Settings area, click the plus sign (+) and create a Service for accessing the workload from external networks. This example shows how to create a LoadBalancer Service. You can configure the following parameters in the window that slides out from the right.

    • Service Name
      Example: nginx
      Description: Enter a Service name.

    • Service Type
      Example: LoadBalancer
      Description: Select a Service type, which determines how the Service is accessed. For details about the differences between Service types, see Service.

    • Load Balancer
      Example: Dedicated; AZ: at least one AZ, for example, AZ1; EIP: Auto create
      Description: Select Use existing if a load balancer is available. If no load balancer is available, select Auto create to create one and bind an EIP to it. Keep the default values for the other parameters. For details about the parameters, see Creating a LoadBalancer Service.

    • Ports
      Example: Protocol: TCP; Container Port: 80; Service Port: 8080
      Description:
      • Protocol: the protocol used by the load balancer listener.
      • Container Port: the port on which the containerized application listens. It must match the listening port that the application provides for external systems. For the nginx image, set it to 80.
      • Service Port: a custom port. The load balancer uses this port to create a listener and provide an entry for external traffic.
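
    With these settings, external traffic takes the following path: client → EIP:8080 (load balancer listener) → Service port 8080 → container port 80 (nginx).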

  5. Click Create Workload.

    Wait until the workload is created. The created workload will be displayed on the Deployments tab.

    Figure 2 Workload created

  6. Obtain the external access address of Nginx.

    Click the name of the nginx workload to go to its details page. On the page displayed, click the Access Mode tab and view the IP address of nginx. The public IP address is the external access address.
    Figure 3 Obtaining the external access address

  7. In the address box of a browser, enter {External access address}:{Service port} to access the workload. The value of {Service port} is the same as the Service port specified in step 4, which is 8080 in this example.

    Figure 4 Accessing nginx

If you use kubectl to access the cluster, prepare an ECS that is bound to an EIP and is in the same VPC as the cluster.

  1. Log in to the target ECS. For details, see Logging In to a Linux ECS.
  2. Install kubectl on the ECS.

    You can check whether kubectl has been installed by running kubectl version. If kubectl has been installed, you can skip this step.

    The Linux environment is used as an example to describe how to install and configure kubectl. For more installation methods, see kubectl.

    1. Download kubectl.
      cd /home
      curl -LO https://dl.k8s.io/release/v1.29.0/bin/linux/amd64/kubectl

      v1.29.0 specifies the kubectl version. Replace it as required.

    2. Install kubectl.
      chmod +x kubectl
      mv -f kubectl /usr/local/bin
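
      You can verify the installation with a client-only check, which prints the version of the installed kubectl binary:

      kubectl version --client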

  3. Configure a credential for kubectl to access the Kubernetes cluster.

    1. Log in to the CCE console and click the cluster name to access the cluster console. Choose Overview in the navigation pane.
    2. On the cluster overview page, locate the Connection Info area. Click Configure next to kubectl and view the kubectl connection information.
    3. In the window that slides out from the right, locate the Download the kubeconfig file area, select Intranet access for Current data, and download the configuration file.
    4. Log in to the VM where the kubectl client has been installed and copy the configuration file (for example, kubeconfig.yaml) downloaded in the previous step to the /home directory.
    5. Move the kubeconfig file to $HOME/.kube/config, the default path where kubectl looks for its configuration.
      cd /home
      mkdir -p $HOME/.kube
      mv -f kubeconfig.yaml $HOME/.kube/config
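
      Because the kubeconfig file contains cluster access credentials, it is good practice (though optional) to restrict its permissions:

      chmod 600 $HOME/.kube/config
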
    6. Run the kubectl command to see whether the cluster can be accessed.

      For example, to view the cluster information, run the following command:

      kubectl cluster-info

      Information similar to the following is displayed:

      Kubernetes master is running at https://*.*.*.*:5443
      CoreDNS is running at https://*.*.*.*:5443/api/v1/namespaces/kube-system/services/coredns:dns/proxy
      To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

  4. Create a YAML file named nginx-deployment.yaml. The file name is an example and can be changed as required.

    vi nginx-deployment.yaml

    The file content is as follows:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx                 # Deployment name
    spec:
      replicas: 1                 # Number of pods to run
      selector:                   # The Deployment manages pods that carry this label
        matchLabels:
          app: nginx
      template:                   # Pod template
        metadata:
          labels:
            app: nginx            # Label attached to each pod
        spec:
          containers:
          - image: nginx:alpine   # Container image and tag
            name: nginx           # Container name
          imagePullSecrets:       # Secret for pulling the image (created in CCE clusters by default)
          - name: default-secret

  5. Run the following command to deploy the workload:

    kubectl create -f nginx-deployment.yaml

    If information similar to the following is displayed, the workload is being created:

    deployment "nginx" created

  6. Run the following command to check the workload status:

    kubectl get deployment

    If information similar to the following is displayed, the workload has been created:

    NAME           READY     UP-TO-DATE   AVAILABLE   AGE 
    nginx          1/1       1            1           4m5s

    The parameters in the command output are described as follows:

    • NAME: specifies the name of a workload.
    • READY: indicates the number of available pods/expected pods for the workload.
    • UP-TO-DATE: specifies the number of pods that have been updated for the workload.
    • AVAILABLE: specifies the number of pods available for the workload.
    • AGE: specifies how long the workload has run.
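
    To inspect the pod created behind the Deployment, you can additionally list pods filtered by the app: nginx label:

    kubectl get pods -l app=nginx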

  7. Create a YAML file named nginx-elb-svc.yaml. In it, set the Service's selector to the same value as matchLabels in nginx-deployment.yaml (app: nginx in this example) so that the Service is associated with the backend pods.

    vi nginx-elb-svc.yaml

    For details about the parameters in the following example, see Using kubectl to Create a Service (Automatically Creating a Load Balancer).

    apiVersion: v1
    kind: Service
    metadata:
      annotations:
        kubernetes.io/elb.class: union    # Load balancer type (union indicates a shared load balancer)
        kubernetes.io/elb.autocreate:     # Automatically create a load balancer and bind an EIP to it
            '{
                "type": "public",
                "bandwidth_name": "cce-bandwidth",
                "bandwidth_chargemode": "bandwidth",
                "bandwidth_size": 5,
                "bandwidth_sharetype": "PER",
                "eip_type": "5_bgp"
            }'
      labels:
        app: nginx
      name: nginx
    spec:
      ports:
      - name: service0
        port: 8080          # Service port, which is also the listener port of the load balancer
        protocol: TCP
        targetPort: 80      # Container port on which nginx listens
      selector:
        app: nginx          # Must match the pod labels defined in the Deployment
      type: LoadBalancer

  8. Run the following command to create the Service:

    kubectl create -f nginx-elb-svc.yaml

    If information similar to the following is displayed, the Service has been created:

    service/nginx created

  9. Run the following command to check the Service:

    kubectl get svc

    If information similar to the following is displayed, the access type has been configured, and the workload is accessible:

    NAME         TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)          AGE
    kubernetes   ClusterIP      10.247.0.1       <none>          443/TCP          3d
    nginx        LoadBalancer   10.247.130.196   **.**.**.**     8080:31540/TCP   51s
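
    To confirm that the Service has selected the nginx pod, you can optionally list its endpoints; the pod IP and container port 80 should appear in the output:

    kubectl get endpoints nginx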

  10. Enter the URL (for example, **.**.**.**:8080) in the address box of a browser. **.**.**.** specifies the EIP of the load balancer, and 8080 indicates the access port.
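
    Alternatively, you can verify access from the command line, for example with curl (replace the placeholder with the actual EIP of the load balancer):

    curl http://**.**.**.**:8080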

    Figure 5 Accessing nginx using the LoadBalancer Service

Follow-up Operations: Releasing Resources

To avoid additional charges, make sure to release resources promptly if you no longer require the cluster. For details, see Deleting a Cluster.