Deployments
A Deployment is a service-oriented encapsulation of pods. A Deployment may manage one or more pods. These pods have the same role, and requests are routed across the pods. All pods in a Deployment are created from the same pod template and therefore share the same configuration, including volume definitions.
As described in Pods, a pod is the smallest and simplest unit in the Kubernetes object model that you create or deploy. It is designed to be an ephemeral, one-off entity. A pod can be evicted when node resources are insufficient and it automatically disappears when a cluster node fails. Kubernetes provides controllers to manage pods. Controllers can create and manage pods, and provide replica management, rolling upgrade, and self-healing capabilities. The most commonly used controller is Deployment.
A Deployment can contain one or more pod replicas. Each pod replica has the same role. Therefore, the system automatically distributes requests to multiple pod replicas of a Deployment.
A Deployment integrates a range of functions, including rollout, rolling upgrade, replica creation, and restoration of online services. To some extent, Deployments enable unattended rollouts, which greatly reduces operation risks and improves rollout efficiency.
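As a sketch, a Deployment with two pod replicas of the same role can be described by a manifest like the following (the name, image, and resource values are illustrative, not CCI defaults):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: default          # replace with your CCI namespace
spec:
  replicas: 2                 # number of pod replicas with the same role
  selector:
    matchLabels:
      app: nginx
  template:                   # every replica is created from this template
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: container-0
        image: nginx:alpine   # illustrative image from your image repository
        resources:
          requests:
            cpu: 500m
            memory: 1Gi
          limits:
            cpu: 500m
            memory: 1Gi
```

The Deployment controller keeps the actual number of running replicas equal to `spec.replicas`, recreating pods that fail or are evicted.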
Creating a Deployment
- Log in to the CCI console. In the navigation pane on the left, choose Workloads > Deployments. On the page displayed, click Create from Image.
- Specify basic information.
- Workload Name
Enter 1 to 63 characters starting and ending with a letter or digit. Only lowercase letters, digits, hyphens (-), and periods (.) are allowed. Do not enter consecutive periods or place a hyphen before or after a period. The workload name cannot be changed after creation. If you need to change the name, create another workload.
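The naming rule above can be expressed as a small validation function. This is a sketch of the documented constraints (assumed to mirror the console's validation, not taken from CCI source code):

```python
import re

# Workload-name rule as described above: 1-63 characters; lowercase
# letters, digits, hyphens, and periods only; must start and end with a
# letter or digit; no consecutive periods; no hyphen directly before or
# after a period.
_LABEL = r"[a-z0-9](?:[a-z0-9-]*[a-z0-9])?"   # one period-free segment
_NAME_RE = re.compile(rf"{_LABEL}(?:\.{_LABEL})*")

def is_valid_workload_name(name: str) -> bool:
    return 1 <= len(name) <= 63 and _NAME_RE.fullmatch(name) is not None

print(is_valid_workload_name("nginx-app"))   # True
print(is_valid_workload_name("web.v2"))      # True
print(is_valid_workload_name("Nginx"))       # False: uppercase letter
print(is_valid_workload_name("web..v2"))     # False: consecutive periods
print(is_valid_workload_name("web-.v2"))     # False: hyphen before a period
```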
- Namespace
Select a namespace. If no namespaces are available, create one by following the procedure provided in Namespace.
- Description
Enter a description, which cannot exceed 250 characters.
- Pods
Specify the number of pods. A workload can have one or more pods. Each pod consists of one or more containers with the same specifications. Configure multiple pods for a workload if you want higher reliability. If one pod is faulty, the workload can still run properly.
- Pod Specifications
For details about the pod specifications, see "Constraints on Pod Specifications" in the Notes and Constraints.
- Container Settings
A pod generally contains only one container. A pod can also contain multiple containers created from different images. If your application needs to run on multiple containers in a pod, click Add Container and then select an image.
If different containers in a pod listen on the same port, a port conflict will occur and the pod may fail to start. For example, if an Nginx container that uses port 80 has been added to a pod, a port conflict will occur when another HTTP container in the pod tries to listen on port 80.
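For illustration, a pod template with two containers must assign each container a distinct port (the sidecar name and image below are hypothetical):

```yaml
spec:
  containers:
  - name: nginx                     # serves HTTP on port 80
    image: nginx:alpine
    ports:
    - containerPort: 80
  - name: sidecar                   # second container in the same pod
    image: my-http-sidecar:latest   # hypothetical image
    ports:
    - containerPort: 8080           # listening on 80 here would conflict with nginx
```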
- My Images: images you have uploaded to SWR
- If you are an IAM user, you must obtain permissions before you can use the private images of your account. For details on how to set permissions, see (Optional) Uploading Images.
- Currently, CCI does not support third-party image repositories.
- A single layer of the decompressed image must not exceed 20 GB.
- Open Source Images: displays public images in the image center.
- Shared Images: images shared by others through SWR
Select the image version and set the container name, vCPU, and memory. You can also enable the collection of standard output files. If you enable file collection, you will be billed for the log storage space you use.
AOM provides each account with 500-MB log storage space for free each month. You will be billed for any extra space you use on a pay-per-use basis. For details, see Product Pricing Details.
You can also configure the following advanced settings for containers:
- Storage: You can mount persistent volumes to containers. Currently, Elastic Volume Service (EVS) and SFS Turbo volumes are supported. Click the EVS Volumes or SFS Turbo Volumes tab, and set the volume name, capacity, container path, and disk type. After the workload is created, you can manage the storage volumes. For details, see EVS Volumes or SFS Turbo Volumes.
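In manifest form, mounting a persistent volume follows the standard Kubernetes pattern sketched below (the claim name and container path are illustrative; the console generates the actual claim for the EVS or SFS Turbo volume you configure):

```yaml
spec:
  containers:
  - name: container-0
    image: nginx:alpine
    volumeMounts:
    - name: data-volume
      mountPath: /data          # container path set in the console
  volumes:
  - name: data-volume
    persistentVolumeClaim:
      claimName: my-evs-pvc     # hypothetical claim backed by an EVS volume
```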
- Log Collection: Application logs will be collected in the path you set. You need to configure policies to prevent logs from being over-sized. Click Add Log Storage, enter a container path for storing logs, and set the upper limit of log file size. After the workload is created, you can view logs on the AOM console. For details, see Log Management.
- Environment Variables: You can manually set environment variables or add variable references. Environment variables add flexibility to workload configuration. Variables that you assign values to during container creation take effect at container startup, which saves you the trouble of rebuilding the container image.
To manually set variables, enter the variable name and value.
To reference variables, set the variable name, reference type, and referenced value for each variable. The following variables can be referenced: PodIP (pod IP address), PodName (pod name), and Secret. For details about how to create a secret reference, see Secrets.
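The two variable styles map onto the standard Kubernetes `env` fields, sketched below (the variable names, secret name, and key are hypothetical):

```yaml
env:
- name: APP_MODE                # manually set: name and value
  value: "production"
- name: POD_IP                  # reference: PodIP (pod IP address)
  valueFrom:
    fieldRef:
      fieldPath: status.podIP
- name: POD_NAME                # reference: PodName (pod name)
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
- name: DB_PASSWORD             # reference: a key stored in a secret
  valueFrom:
    secretKeyRef:
      name: my-secret           # hypothetical secret name
      key: password
```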
- Health Check: Container health can be checked regularly when the container is running. For details about how to configure health checks, see Setting Health Check Parameters.
- Lifecycle: Lifecycle scripts specify actions that applications take when a lifecycle event occurs. For details about how to configure the scripts, see Container Lifecycle.
- Startup Commands: You can set the commands to be executed immediately after the container is started. Startup commands correspond to the ENTRYPOINT startup instructions of the container engine. For details, see Setting Container Startup Commands.
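As a sketch, a startup command in a manifest overrides the image's ENTRYPOINT via the `command` and `args` fields (the command shown is illustrative):

```yaml
containers:
- name: container-0
  image: nginx:alpine
  command: ["/bin/sh", "-c"]    # replaces the image ENTRYPOINT
  args: ["nginx -g 'daemon off;'"]
```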
- Configuration Management: You can mount ConfigMaps and secrets to a container. For details about how to create ConfigMaps and secrets, see ConfigMaps and Secrets.
- Click Next: Configure Access Settings to configure access information.
Three options are available:
- Do not use: No entry is provided for other workloads to access the current workload. This mode is ideal for scenarios where custom service discovery is used or where no access entry is required.
- Intranet access: Configure an internal domain name or private IP address for the current workload so that other workloads can access it over an internal network. Two internal network access modes are available: Service and ELB. For details, see Private Network Access.
- Internet access: Configure an entry to allow other workloads to access the current workload from the Internet. HTTP, HTTPS, TCP, and UDP are supported. For details, see Public Network Access.
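For the Service-based intranet mode, the access entry corresponds to a standard Kubernetes Service that selects the workload's pods by label, sketched below (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  namespace: default      # same namespace as the workload
spec:
  selector:
    app: nginx            # matches the pod labels of the workload
  ports:
  - port: 80              # port other workloads connect to
    targetPort: 80        # port the container listens on
```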
- Click Next: Configure Advanced Settings and configure advanced settings.
- Upgrade Policy: Rolling upgrade and In-place upgrade are available.
- Rolling upgrade: Gradually replaces an old pod with a new pod. During the upgrade, service traffic is evenly distributed to the old and new pods to ensure service continuity.
Maximum Number of Unavailable Pods: Maximum number of unavailable pods allowed in a rolling upgrade. If the number is equal to the total number of pods, services may be interrupted. Minimum number of alive pods = Total pods – Maximum number of unavailable pods
- In-place upgrade: Deletes an old pod and then creates a new one. Services will be interrupted during the upgrade.
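In manifest form, the rolling upgrade policy and the maximum number of unavailable pods map onto the Deployment's `strategy` field, sketched below with illustrative values:

```yaml
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at least 4 - 1 = 3 pods stay alive during the upgrade
```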
- Client DNS Configuration: You can replace and append domain name resolution configurations. For parameter details, see Client DNS Configuration.
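Replacing or appending domain name resolution settings corresponds to the pod's `dnsPolicy` and `dnsConfig` fields, sketched below (the server address and search domain are hypothetical):

```yaml
spec:
  dnsPolicy: "None"               # replace the default resolution settings
  dnsConfig:
    nameservers:
    - 10.0.0.10                   # hypothetical DNS server address
    searches:
    - example.svc.cluster.local   # hypothetical search domain to append
    options:
    - name: ndots
      value: "2"
```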
- Click Next: Confirm. After you confirm the configuration, click Submit. Then click Back to Deployment List.
In the workload list, if the workload status is Running, the workload is created successfully. You can click the workload name to view workload details and press F5 to view the real-time workload status.
If you want to access the workload, click the Access Settings tab to obtain the access address.
Deleting a Pod
You can manually delete pods. Because pods are controlled by a controller, a new pod will be created immediately after you delete a pod. Manual pod deletion is useful when an upgrade fails halfway or when service processes need to be restarted.
Delete a pod, as shown in Figure 2.
A new pod is created immediately after you delete the pod, as shown in Figure 3.
Creating a Deployment Using kubectl
For details, see Deployment.
Troubleshooting a Failure to Pull the Image
If there is an event indicating that the image fails to be pulled on the workload details page, locate the fault by following the procedure provided in What Do I Do If an Event Indicating That the Image Failed to Be Pulled Occurs?
Troubleshooting a Failure to Restart the Container
If there is an event indicating that the container fails to be restarted on the workload details page, locate the fault by following the procedure provided in What Do I Do If an Event Indicating That the Container Failed to Be Restarted Occurs?