Updated on 2024-01-26 GMT+08:00

Creating a Deployment

Scenario

Deployments are workloads (for example, Nginx) that do not store data or state. You can create Deployments on the CCE console or by running kubectl commands.

Prerequisites

  • Before creating a workload, you must have an available cluster. For details on how to create a cluster, see Creating a Cluster.
  • To enable public access to a workload, ensure that an EIP or load balancer has been bound to at least one node in the cluster.

    If a pod has multiple containers, ensure that the ports used by the containers do not conflict with each other. Otherwise, creating the Deployment will fail.
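Because all containers in a pod share one network namespace, their listening ports must not collide. A minimal sketch of a valid multi-container pod spec (container names and the sidecar image are illustrative):

```yaml
# Hypothetical two-container pod template: each container listens on its own port.
spec:
  containers:
  - name: web              # illustrative name
    image: nginx
    ports:
    - containerPort: 80
  - name: sidecar          # illustrative name
    image: envoyproxy/envoy
    ports:
    - containerPort: 9901  # must differ from 80
```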

Using the CCE Console

  1. Log in to the CCE console.
  2. Click the cluster name to go to the cluster console, choose Workloads in the navigation pane, and click Create Workload in the upper right corner.
  3. Set basic information about the workload.

    Basic Info
    • Workload Type: Select Deployment. For details about workload types, see Overview.
    • Workload Name: Enter the name of the workload. Enter 1 to 63 characters starting with a lowercase letter and ending with a lowercase letter or digit. Only lowercase letters, digits, and hyphens (-) are allowed.
    • Namespace: Select the namespace of the workload. The default value is default. You can also click Create Namespace to create one. For details, see Creating a Namespace.
    • Pods: Enter the number of pods of the workload.
    • Time Zone Synchronization: Specify whether to enable time zone synchronization. After it is enabled, the container uses the same time zone as the node. Time zone synchronization relies on a local time zone file mounted to the container. Do not modify or delete this file. For details, see Configuring Time Zone Synchronization.
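Under the hood, time zone synchronization amounts to mounting the node's time zone file into the container. CCE configures this automatically when the option is enabled; a rough YAML equivalent (the volume name is illustrative) looks like this:

```yaml
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: localtime            # illustrative volume name
      mountPath: /etc/localtime
      readOnly: true
  volumes:
  - name: localtime
    hostPath:
      path: /etc/localtime       # the node's time zone file
```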
    Container Settings
    • Container Information
      A pod can contain multiple containers. Click Add Container on the right to add more containers to the pod.
      • Basic Info: Configure basic information about the container.

        • Container Name: Name of the container.
        • Pull Policy: Image update or pull policy. If Always is selected, the image is pulled from the image repository each time. Otherwise, the image that already exists on the node is used preferentially, and the image is pulled from the repository only if it is not present on the node.
        • Image Name: Click Select Image and select the image used by the container. To use a third-party image, see Using Third-Party Images.
        • Image Tag: Select the image tag to be deployed.
        • CPU Quota:
          • Request: minimum number of CPU cores required by a container. The default value is 0.25 cores.
          • Limit: maximum number of CPU cores available to a container. Always specify Limit. Otherwise, a container may consume resources without restriction and the workload may behave unexpectedly.

          If neither Request nor Limit is specified, the quota is not limited. For more information and suggestions about Request and Limit, see Setting Container Specifications.
        • Memory Quota:
          • Request: minimum amount of memory required by a container. The default value is 512 MiB.
          • Limit: maximum amount of memory available to a container. If memory usage exceeds the limit, the container is terminated.

          If neither Request nor Limit is specified, the quota is not limited. For more information and suggestions about Request and Limit, see Setting Container Specifications.
        • (Optional) GPU Quota: Configurable only when the cluster contains GPU nodes and the gpu-device-plugin add-on is installed.
          • All: The GPU is not used.
          • Dedicated: GPU resources are used exclusively by the container.
          • Shared: percentage of GPU resources used by the container. For example, if this parameter is set to 10%, the container uses 10% of the GPU's resources.

          For details about how to use GPUs in the cluster, see Default GPU Scheduling in Kubernetes.
        • (Optional) Privileged Container: If this option is enabled, the container runs with extended privileges. For example, a privileged container can manipulate network devices on the host and modify kernel parameters.
        • (Optional) Init Container: Specifies whether to use the container as an init container. Init containers do not support health checks. An init container is a special container that runs to completion before the app containers in a pod are started. A pod can contain one or more init containers, and the app containers start only after all init containers have finished running. For details, see Init Container.

      • (Optional) Lifecycle: Configure operations to be performed in a specific phase of the container lifecycle, such as Startup Command, Post-Start, and Pre-Stop. For details, see Setting Container Lifecycle Parameters.
      • (Optional) Health Check: Configure the liveness, readiness, and startup probes as required. For details, see Setting Health Check for a Container.
      • (Optional) Environment Variables: Set variables for the container running environment using key-value pairs. These variables transfer external information to containers running in pods and can be flexibly modified after application deployment. For details, see Setting an Environment Variable.
      • (Optional) Data Storage: Mount local storage or cloud storage to the container. The application scenarios and mounting modes vary with the storage type. For details, see Storage.

        If the workload contains more than one pod, EVS volumes cannot be mounted.

      • (Optional) Security Context: Set container permissions, such as the user ID used to run the container, to protect the system and other containers from being affected.
      • (Optional) Logging: Container stdout logs are reported to AOM by default and require no manual configuration. You can manually configure the log collection path. For details, see Using ICAgent to Collect Container Logs.

        To disable the standard output of the current workload, add the annotation kubernetes.AOM.log.stdout: [] in Labels and Annotations. For details about how to use this annotation, see Table 1.

    • Image Access Credential: Select the credential used for accessing the image repository. The default value is default-secret. You can use default-secret to access images in SWR. For details about default-secret, see default-secret.
    • (Optional) GPU: All is selected by default. If a specific GPU type is selected, the workload's pods are scheduled only to nodes with that GPU type.
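The console fields above map onto the container spec in YAML. A hedged sketch combining the CPU/memory quotas, an environment variable, and a liveness probe (the values are examples, not defaults):

```yaml
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent   # Pull Policy with Always deselected
    resources:
      requests:
        cpu: 250m                   # CPU Quota > Request (0.25 cores)
        memory: 512Mi               # Memory Quota > Request
      limits:
        cpu: 500m                   # CPU Quota > Limit
        memory: 1Gi                 # Memory Quota > Limit
    env:
    - name: LOG_LEVEL               # illustrative environment variable
      value: info
    livenessProbe:                  # Health Check > liveness probe
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 10
      periodSeconds: 10
```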

    (Optional) Service Settings

    A Service provides external access for pods. With a fixed IP address, a Service forwards access traffic to pods and automatically load balances across them.

    You can also create a Service after creating a workload. For details about Services of different types, see Overview.
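As a sketch, a ClusterIP Service that exposes the workload's pods on port 80 might look like this (the Service name and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx            # illustrative Service name
spec:
  type: ClusterIP
  selector:
    app: nginx           # must match the pod labels of the workload
  ports:
  - port: 80             # port the Service exposes
    targetPort: 80       # containerPort in the pod
```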

    (Optional) Advanced Settings
    • Upgrade: Specify the upgrade mode and upgrade parameters of the workload. Rolling upgrade and Replace upgrade are supported. For details, see Configuring the Workload Upgrade Policy.
    • Scheduling: Configure affinity and anti-affinity policies for flexible workload scheduling. Node affinity, pod affinity, and pod anti-affinity are supported. For details, see Scheduling Policy (Affinity/Anti-affinity).
    • Toleration: Tolerations allow (but do not require) pods to be scheduled to nodes with matching taints, and control how pods are evicted after the nodes they run on are tainted. For details, see Taints and Tolerations.
    • Labels and Annotations: Add labels or annotations for pods using key-value pairs. After entering the key and value, click Confirm. For details about how to use and configure labels and annotations, see Labels and Annotations.
    • DNS: Configure a separate DNS policy for the workload. For details, see DNS Configuration.
    • Network Configuration: Configure pod network settings, such as pod ingress and egress bandwidth limits.
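The advanced settings above correspond to fields in the Deployment spec. A hedged sketch (all values are illustrative):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # pods that may be unavailable during the upgrade
      maxSurge: 25%         # extra pods that may be created during the upgrade
  template:
    spec:
      tolerations:
      - key: node.kubernetes.io/unreachable   # built-in node taint
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 300   # eviction delay after the node is tainted
      dnsPolicy: ClusterFirst    # default DNS policy for pods
```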

  4. Click Create Workload in the lower right corner.

Using kubectl

The following procedure uses Nginx as an example to describe how to create a workload using kubectl.

  1. Use kubectl to connect to the cluster. For details, see Connecting to a Cluster Using kubectl.
  2. Create and edit the nginx-deployment.yaml file. nginx-deployment.yaml is an example file name. You can rename it as required.

    vi nginx-deployment.yaml

    The following is an example YAML file. For more information about Deployments, see Kubernetes documentation.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      strategy:
        type: RollingUpdate
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - image: nginx    # If you use an image in My Images, obtain the image path from SWR.
            imagePullPolicy: Always
            name: nginx
          imagePullSecrets:
          - name: default-secret

    For details about these parameters, see Table 1.

    Table 1 Deployment YAML parameters

    • apiVersion (mandatory): API version.

      NOTE: Set this parameter based on the cluster version.
      • For clusters of v1.17 or later, the apiVersion of Deployments is apps/v1.
      • For clusters of v1.15 or earlier, the apiVersion of Deployments is extensions/v1beta1.

    • kind (mandatory): Type of the object to create.
    • metadata (mandatory): Metadata of the resource object.
    • name (mandatory): Name of the Deployment.
    • spec (mandatory): Detailed description of the Deployment.
    • replicas (mandatory): Number of pods.
    • selector (mandatory): Label selector that determines which pods are managed by the Deployment.
    • strategy (optional): Upgrade mode. The value can be RollingUpdate or Recreate (replace upgrade). The default is RollingUpdate.
    • template (mandatory): Detailed description of the pods to be created.
    • metadata (mandatory): Metadata of the pod template.
    • labels (optional): metadata.labels, the pod labels.
    • spec.containers (mandatory): Container settings.
      • image (mandatory): Name of the container image.
      • imagePullPolicy (optional): Image pull policy. The options are Always (pull the image each time), Never (use only local images), and IfNotPresent (use a local image if available; pull the image otherwise). The default is Always when the image tag is latest or omitted, and IfNotPresent otherwise.
      • name (mandatory): Container name.
    • imagePullSecrets (optional): Name of the secret used for pulling images. This parameter is mandatory when a private image is used.
      • To pull an image from SoftWare Repository for Container (SWR), set this parameter to default-secret.
      • To pull an image from a third-party image repository, set this parameter to the name of the created secret.

  3. Create a Deployment.

    kubectl create -f nginx-deployment.yaml

    If the following information is displayed, the Deployment is being created.

    deployment "nginx" created

  4. Query the Deployment status.

    kubectl get deployment

    If the following information is displayed, the Deployment is running.

    NAME           READY     UP-TO-DATE   AVAILABLE   AGE 
    nginx          1/1       1            1           4m5s

    Parameter description

    • NAME: name of the Deployment.
    • READY: number of ready pods/number of desired pods for the workload.
    • UP-TO-DATE: number of replicas that have been updated to the latest desired state.
    • AVAILABLE: number of available pods.
    • AGE: how long the Deployment has been running.

  5. If the Deployment will be accessed through a ClusterIP or NodePort Service, add the corresponding Service. For details, see Network.
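For example, a NodePort Service for the nginx Deployment above could look like this (the nodePort value is illustrative and must fall within the cluster's NodePort range, 30000-32767 by default):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx          # matches the pod labels of the nginx Deployment
  ports:
  - port: 80            # port exposed inside the cluster
    targetPort: 80      # containerPort in the pod
    nodePort: 30080     # illustrative; omit to let Kubernetes pick one
```

Save the manifest (for example, as nginx-svc.yaml) and run kubectl create -f nginx-svc.yaml. The workload is then reachable at <node-IP>:30080 on any node that has an EIP bound.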