Deployments

Updated on 2024-10-29 GMT+08:00

A Deployment is a service-oriented encapsulation of one or more pods. The pods in a Deployment play the same role, so requests are distributed across them, and all pods in a Deployment share the same volume.

As described in Pods, a pod is the smallest and simplest unit in the Kubernetes object model that you create or deploy. It is designed to be an ephemeral, one-off entity. A pod can be evicted when node resources are insufficient and it automatically disappears when a cluster node fails. Kubernetes provides controllers to manage pods. Controllers can create and manage pods, and provide replica management, rolling upgrade, and self-healing capabilities. The most commonly used controller is Deployment.

A Deployment can contain one or more pod replicas. Each pod replica has the same role. Therefore, the system automatically distributes requests to multiple pod replicas of a Deployment.

A Deployment integrates many functions, including rollout, rolling upgrade, replica creation, and restoration of online jobs. To some extent, Deployments enable unattended rollouts, which greatly reduces operation risks and improves rollout efficiency.
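Conceptually, the capabilities described above are declared in a standard Kubernetes Deployment manifest. The following is a minimal sketch; the image, names, namespace, and resource values are placeholder assumptions, not values taken from this document:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment        # workload name (example)
  namespace: default            # example namespace
spec:
  replicas: 2                   # number of pod replicas
  selector:
    matchLabels:
      app: nginx
  template:                     # pod template shared by all replicas
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: container-0
        image: nginx:latest     # example image
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
          limits:
            cpu: 500m
            memory: 1Gi
```

Every replica is created from the same pod template, which is why all pods in a Deployment play the same role.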

Figure 1 Deployment

Creating a Deployment

  1. Log in to the CCI console. In the navigation pane on the left, choose Workloads > Deployments. On the page displayed, click Create from Image.
  2. Specify basic information.

    • Workload Name

      Enter 1 to 63 characters starting and ending with a letter or digit. Only lowercase letters, digits, hyphens (-), and periods (.) are allowed. Do not enter consecutive periods or place a hyphen before or after a period. The workload name cannot be changed after creation. If you need to change the name, create another workload.

    • Namespace

      Select a namespace. If no namespaces are available, create one by following the procedure provided in Namespace.

    • Description

      Enter a description, which cannot exceed 250 characters.

    • Pods

      Specify the number of pods. A workload can have one or more pods. Each pod consists of one or more containers with the same specifications. Configure multiple pods for a workload if you want higher reliability. If one pod is faulty, the workload can still run properly.

    • Pod Specifications

      For details about the pod specifications, see "Constraints on Pod Specifications" in the Notes and Constraints.

    • Container Settings
      A pod generally contains only one container. A pod can also contain multiple containers created from different images. If your application needs to run on multiple containers in a pod, click Add Container and then select an image.
      NOTICE:

      If different containers in a pod listen on the same port, a port conflict will occur and the pod may fail to start. For example, if an Nginx container that uses port 80 has been added to a pod, a port conflict will occur when another HTTP container in the same pod tries to listen on port 80.

      • My Images: images you have uploaded to SWR
        NOTE:
        • If you are an IAM user, you must obtain permissions before you can use the private images of your account. For details on how to set permissions, see (Optional) Uploading Images.
        • Currently, CCI does not support third-party image repositories.
        • A single layer of the decompressed image must not exceed 20 GB.
      • Open Source Images: displays public images in the image center.
      • Shared Images: images shared by others through SWR

      Select the image version and set the container name, vCPU, and memory. You can also enable the collection of standard output (stdout) logs. If you enable log collection, you will be billed for the log storage space you use.

      NOTE:

      AOM provides each account with 500-MB log storage space for free each month. You will be billed for any extra space you use on a pay-per-use basis. For details, see Product Pricing Details.

      You can also configure the following advanced settings for containers:

      • Storage: You can mount persistent volumes to containers. Currently, Elastic Volume Service (EVS) and SFS Turbo volumes are supported. Click the EVS Volumes or SFS Turbo Volumes tab, and set the volume name, capacity, container path, and disk type. After the workload is created, you can manage the storage volumes. For details, see EVS Volumes or SFS Turbo Volumes.
      • Log Collection: Application logs will be collected in the path you set. Configure log policies to prevent log files from growing too large. Click Add Log Storage, enter a container path for storing logs, and set the upper limit of the log file size. After the workload is created, you can view logs on the AOM console. For details, see Log Management.
      • Environment Variables: You can manually set environment variables or add variable references. Environment variables add flexibility to workload configuration. Variables set during container creation take effect at container startup, so you can change a workload's configuration without rebuilding the container image.

        To manually set variables, enter the variable name and value.

        To reference variables, set the variable name, reference type, and referenced value for each variable. The following variables can be referenced: PodIP (pod IP address), PodName (pod name), and Secret. For details about how to create a secret reference, see Secrets.

      • Health Check: Container health can be checked regularly when the container is running. For details about how to configure health checks, see Setting Health Check Parameters.
      • Lifecycle: Lifecycle scripts specify actions that applications take when a lifecycle event occurs. For details about how to configure the scripts, see Container Lifecycle.
      • Startup Commands: You can set the commands to be executed immediately after the container is started. Startup commands correspond to the ENTRYPOINT startup instructions of the container engine. For details, see Setting Container Startup Commands.
      • Configuration Management: You can mount ConfigMaps and secrets to a container. For details about how to create ConfigMaps and secrets, see ConfigMaps and Secrets.
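The container settings above correspond to fields of the pod template in a Deployment manifest. The sketch below shows how a startup command, manually set and referenced environment variables (PodIP and a secret), a volume mount, and a health check might be declared; all names and values are placeholder assumptions:

```yaml
containers:
- name: container-0
  image: nginx:latest
  command: ["/bin/sh", "-c", "nginx -g 'daemon off;'"]   # startup command (ENTRYPOINT)
  env:
  - name: LOG_LEVEL                 # manually set variable
    value: "info"
  - name: POD_IP                    # referenced variable: PodIP
    valueFrom:
      fieldRef:
        fieldPath: status.podIP
  - name: DB_PASSWORD               # referenced variable: Secret
    valueFrom:
      secretKeyRef:
        name: my-secret             # example secret name
        key: password
  volumeMounts:
  - name: data-volume               # an EVS or SFS Turbo volume
    mountPath: /data                # container path
  livenessProbe:                    # health check
    httpGet:
      path: /healthz
      port: 80
    initialDelaySeconds: 10
```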

  3. Click Next: Configure Access Settings to configure access information.

    Three options are available:

    • Do not use: No entry is provided for other workloads to access the current workload. This mode is ideal when you use custom service discovery or do not need an access entry.
    • Intranet access: Configure an internal domain name or private IP address for the current workload so that other workloads can access it over the internal network. Two internal access modes are available: Service and ELB. For details, see Private Network Access.
    • Internet access: Configure an entry to allow other workloads to access the current workload from the Internet. HTTP, HTTPS, TCP, and UDP are supported. For details, see Public Network Access.
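For the intranet access mode based on a Service, the workload is typically exposed through a ClusterIP Service that selects the Deployment's pods by label. A minimal sketch, with placeholder names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default
spec:
  selector:
    app: nginx        # matches the pod labels of the Deployment
  ports:
  - protocol: TCP
    port: 8080        # port exposed to other workloads
    targetPort: 80    # port the container listens on
```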

  4. Click Next: Configure Advanced Settings and configure advanced settings.

    • Upgrade Policy: Rolling upgrade and In-place upgrade are available.
      • Rolling upgrade: Gradually replaces an old pod with a new pod. During the upgrade, service traffic is evenly distributed to the old and new pods to ensure service continuity.

        Maximum Number of Unavailable Pods: Maximum number of unavailable pods allowed in a rolling upgrade. If the number is equal to the total number of pods, services may be interrupted. Minimum number of alive pods = Total pods – Maximum number of unavailable pods

      • In-place upgrade: Deletes an old pod and then creates a new one. Services will be interrupted during the upgrade.
    • Client DNS Configuration: You can replace and append domain name resolution configurations. For parameter details, see Client DNS Configuration.
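In a Deployment manifest, these upgrade policies map onto the `strategy` field. A sketch of a rolling upgrade that caps the number of unavailable pods (values are examples):

```yaml
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most 1 of the 4 pods may be unavailable,
                          # so at least 3 pods stay alive during the upgrade
  # An in-place upgrade (old pod deleted before the new one is created)
  # roughly corresponds to "type: Recreate"; services are interrupted.
```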

  5. Click Next: Confirm. After you confirm the configuration, click Submit. Then click Back to Deployment List.

    In the workload list, if the workload status is Running, the workload is created successfully. You can click the workload name to view workload details and press F5 to view the real-time workload status.

    If you want to access the workload, click the Access Settings tab to obtain the access address.

Deleting a Pod

You can manually delete pods. Because pods are controlled by a controller, a new pod will be created immediately after you delete a pod. Manual pod deletion is useful when an upgrade fails halfway or when service processes need to be restarted.

Delete a pod, as shown in Figure 2.

Figure 2 Deleting a pod

A new pod is created immediately after you delete the pod, as shown in Figure 3.

Figure 3 Result of deleting a pod

Creating a Deployment Using kubectl

For details, see Deployment.

Troubleshooting a Failure to Pull the Image

If there is an event indicating that the image fails to be pulled on the workload details page, locate the fault by following the procedure provided in What Do I Do If an Event Indicating That the Image Failed to Be Pulled Occurs?

Troubleshooting a Failure to Restart the Container

If there is an event indicating that the container fails to be restarted on the workload details page, locate the fault by following the procedure provided in What Do I Do If an Event Indicating That the Container Failed to Be Restarted Occurs?
