Updated on 2024-06-17 GMT+08:00

Deployments

The federation function of UCS allows you to manage Kubernetes clusters in different regions or clouds, deploy applications globally in a unified manner, and deploy workloads of different types, such as Deployments, StatefulSets, and DaemonSets, to clusters in a federation.

Deployments are a type of workload that does not store any data or status while running. Nginx is a typical example. You can create a Deployment using the console or kubectl.
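For reference, a minimal Deployment manifest of this kind (a stateless Nginx workload; the names and label values below are placeholders, not UCS defaults) could look like the following when created from YAML:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx              # workload name; must be unique in the namespace
  namespace: default       # namespace the workload belongs to
spec:
  replicas: 2              # number of pods (the console default is 2)
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: container-1
        image: nginx:latest
```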

Creating a Deployment

  1. Log in to the UCS console. In the navigation pane, choose Fleets.
  2. On the Fleets tab, click the name of the federation-enabled fleet to access its details page.
  3. In the navigation pane, choose Workloads. On the displayed page, click the Deployments tab. Then, click Create from Image.

    To use an existing YAML file to create a Deployment, click Create from YAML in the upper right corner.

  4. Configure basic information about the workload.

    • Type: Select Deployment.
    • Name: name of the workload, which must be unique.
    • Namespace: namespace that the workload belongs to. For details about how to create a namespace, see Creating a Namespace.
    • Description: description of the workload.
    • Pods: number of pods in each cluster of the multi-cluster workload. The default value is 2. Each workload pod consists of the same containers. On UCS, you can set an auto scaling policy to dynamically adjust the number of workload pods based on the workload resource usage.

  5. Configure the container settings for the workload.

    A pod can contain multiple containers. To add more containers to the pod, click Add Container on the right.

    Figure 1 Container settings

    • Basic Info
      Table 1 Basic information parameters

      Container Name: name of the container.

      Image Name: click Select Image and select the image used by the container.
      • My Images: images in the Huawei Cloud image repository of the current region. If no image is available, click Upload Image to upload one.
      • Open Source Images: official images in the open source image repository.
      • Shared Images: private images shared by another account. For details, see Sharing Private Images.

      Image Tag: tag of the image to be deployed.

      Pull Policy: image pull policy. If Always is selected, the image is pulled from the image repository on every start. Otherwise, the image that already exists on the node is preferentially used, and the image is pulled from the repository only if it is not present on the node.

      CPU Quota:
      • Request: minimum number of CPU cores required by a container. The default value is 0.25 cores.
      • Limit: maximum number of CPU cores available for a container. Do not leave Limit unspecified; otherwise, a container can consume CPU without bound and the workload may exhibit unexpected behavior.

      Memory Quota:
      • Request: minimum amount of memory required by a container. The default value is 512 MiB.
      • Limit: maximum amount of memory available for a container. When memory usage exceeds the specified limit, the container is terminated.

      For details about the Request and Limit of CPU or memory, see Setting Container Specifications.

      Init Container: whether to use the container as an init container. An init container is a special container that runs before the app containers in a pod. For details, see Init Containers.

    • Lifecycle: Lifecycle callback functions can be invoked in specific phases of the container. For example, if you want the container to perform a certain operation before stopping, configure the pre-stop function. Currently, the startup, post-start, and pre-stop callbacks are provided. For details, see Setting Container Lifecycle Parameters.
    • Health Check: Set health check parameters to periodically check the health status of the container during container running. For details, see Setting Health Check for a Container.
    • Environment Variable: Environment variables affect the way a running container behaves. Configurations set through environment variables do not change even after the pod restarts. For details, see Setting Environment Variables.
    • Data Storage: Store container data using Local Volumes and PersistentVolumeClaims (PVCs). You are advised to use PVCs to store workload pod data on a cloud volume. If you store pod data on a local volume and a fault occurs on the node, the data cannot be restored. For details about container storage, see Storage.
    • Security Context: Set container permissions to protect the system and other containers from being affected. Enter a user ID and the container will run with the user permissions you specify.
    • Image Access Credential: Select the credential for accessing the image repository. This credential is used only for accessing a private image repository. If the selected image is a public image, you do not need to select a secret. For details on how to create a secret, see Creating a Secret.
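    Taken together, the container settings above map to fields of the pod template in the Deployment manifest. The following sketch shows the main mappings (the image names, environment variable, and secret name are placeholders for illustration):

```yaml
spec:
  initContainers:                # runs to completion before the app containers
  - name: init-config
    image: busybox:1.36
    command: ["sh", "-c", "echo init done"]
  containers:
  - name: container-1
    image: nginx:alpine          # selected image and tag
    imagePullPolicy: Always      # pull from the repository on every start
    resources:
      requests:
        cpu: 250m                # 0.25 cores, the default Request
        memory: 512Mi            # the default memory Request
      limits:
        cpu: 500m
        memory: 1Gi              # container is terminated if this is exceeded
    env:
    - name: LOG_LEVEL            # example environment variable
      value: info
  imagePullSecrets:              # credential for a private image repository
  - name: default-secret
```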

  6. (Optional) Click the add icon in the Service Settings area to configure a Service for the workload.

    If your workload needs to be reachable by other workloads or from public networks, add a Service to define the workload access type. The access type determines the network attributes of the workload: workloads with different access types provide different network capabilities. For details, see Services and Ingresses.

    You can also create a Service after creating a workload. For details, see ClusterIP and NodePort.

    • Name: name of the Service to be added. It is customizable and must be unique.
    • Type
      • ClusterIP: The Service is only reachable from within the cluster.
      • NodePort: The Service can be accessed from any node in the cluster.
    • Affinity (for node access only)
      • Cluster-level: The IP addresses and access ports of all nodes in a cluster can be used to access the workloads associated with the Service. However, performance loss is introduced due to hops, and source IP addresses cannot be obtained.
      • Node-level: Only the IP address and access port of the node where the workload is located can be used to access the workload associated with the Service. Service access will not cause performance loss due to route redirection, and the source IP address of the client can be obtained.
    • Port
      • Protocol: Select TCP or UDP.
      • Service Port: Port mapped to the container port at the cluster-internal IP address. The application can be accessed at <cluster-internal IP address>:<access port>. The port number range is 1–65535.
      • Container Port: Port on which the workload listens, defined in the container image. For example, the Nginx application listens on port 80 (container port).
      • Node Port (for NodePort only): Port to which the container port will be mapped when the node private IP address is used for accessing the application. The port number range is 30000–32767. You are advised to select Auto.
        • Auto: The system automatically assigns a port number.
        • Custom: Specify a fixed node port. The port number range is 30000–32767. Ensure that the port is unique in a cluster.
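    In manifest form, these Service settings correspond roughly to the following (the Service name, selector, and port values are placeholders; the Affinity option maps to the standard `externalTrafficPolicy` field):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort                 # use ClusterIP for in-cluster access only
  selector:
    app: nginx                   # pods the Service routes traffic to
  ports:
  - protocol: TCP
    port: 8080                   # Service port (cluster-internal access port)
    targetPort: 80               # container port the workload listens on
    nodePort: 30080              # custom node port (30000-32767); omit for Auto
  externalTrafficPolicy: Local   # Node-level affinity; Cluster is the default
```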

  7. (Optional) Click Expand to set advanced settings for the workload.

    • Upgrade: upgrade mode of the Deployment, including Replace upgrade and Rolling upgrade. For details, see Configuring a Workload Upgrade Policy.
      • Rolling upgrade: An old pod is gradually replaced with a new pod. During the upgrade, service traffic is evenly distributed to the old and new pods to ensure service continuity.
      • Replace upgrade: Old pods are deleted before new pods are created. Services will be interrupted during a replace upgrade.
    • Scheduling: You can set affinity and anti-affinity to implement planned scheduling for pods. For details, see Configuring a Scheduling Policy (Affinity/Anti-affinity).
    • Labels and Annotations: You can click Confirm to add a label or annotation for the pod. The key of the new label or annotation cannot be the same as that of an existing one.
    • Toleration: When the node where the workload pods are located is unavailable for the specified amount of time, the pods will be rescheduled to other available nodes. By default, the toleration time window is 300s.
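    The upgrade and toleration settings above correspond to standard Deployment fields. A sketch of the defaults described in this section (the maxUnavailable/maxSurge percentages are common Kubernetes defaults, not UCS-specific values):

```yaml
spec:
  strategy:
    type: RollingUpdate            # or Recreate for a replace upgrade
    rollingUpdate:
      maxUnavailable: 25%          # pods that may be unavailable during upgrade
      maxSurge: 25%                # extra pods allowed above the desired count
  template:
    spec:
      tolerations:                 # evict pods from an unreachable node
      - key: node.kubernetes.io/unreachable
        operator: Exists
        effect: NoExecute
        tolerationSeconds: 300     # the default 300s toleration time window
```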

  8. Click Next: Scheduling and Differentiation. After selecting clusters to which the workload can be scheduled, configure the differentiated settings for the containers.

    • Scheduling Policy
      • Scheduling Mode
        • Weight: Manually set the weight of each cluster. The number of pods in each cluster is allocated based on the configured weight.
        • Auto balancing: The workload is automatically deployed in the selected clusters based on available resources.
      • Cluster: Select clusters to which the workload can be scheduled. The number of clusters depends on your service requirements.
        • If you use weighted scheduling, manually set the weight of each cluster. A cluster with a non-zero weight is automatically selected as a cluster to which the workload can be scheduled; a cluster with a weight of 0 will not receive the workload. Weights cannot be set for clusters in an abnormal state.
        • If you use auto balancing, click a cluster to select it as a cluster to which the workload can be scheduled.
    • Differentiated Settings

      When deploying a workload in multiple clusters, you can configure differentiated settings for these clusters. Click the icon in the upper right corner of a target cluster to configure its differentiated settings. The differentiated container settings take effect only for that cluster.

      For parameter description, see Container Settings.
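      UCS cluster federation is built on Karmada. Under that assumption, the weight-based scheduling mode described above corresponds roughly to a Karmada PropagationPolicy like the following (the policy name and cluster names are placeholders; a 2:1 weight splits the pods accordingly):

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
  - apiVersion: apps/v1
    kind: Deployment
    name: nginx                      # the federated workload to propagate
  placement:
    clusterAffinity:
      clusterNames:                  # clusters the workload can be scheduled to
      - cluster-a
      - cluster-b
    replicaScheduling:
      replicaSchedulingType: Divided
      replicaDivisionPreference: Weighted
      weightPreference:
        staticWeightList:            # pods divided 2:1 between the clusters
        - targetCluster:
            clusterNames: [cluster-a]
          weight: 2
        - targetCluster:
            clusterNames: [cluster-b]
          weight: 1
```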

  9. After completing the settings, click Create Workload. You can then click Back to Workload List to view the created workload.