Pod
What Is a Pod?
A pod is the smallest and simplest unit in the Kubernetes object model that you create or deploy. A pod encapsulates one or more containers, storage resources, a unique network IP address, and options that govern how the container(s) should run.
Pods can be used in either of the following ways:
- One container runs in one pod. This is the most common usage of pods in Kubernetes. You can view the pod as a single encapsulated container, but Kubernetes directly manages pods instead of containers.
- Multiple containers that need to be coupled and share resources run in a pod. In this scenario, an application contains a main container and several sidecar containers, as shown in Figure 1. For example, the main container is a web server that provides file services from a fixed directory, and the sidecar container periodically downloads files to the directory.
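As an illustration of the second pattern, the following is a hedged sketch of such a pod (the sidecar image and download command are placeholders, not part of the original example, and resource settings are omitted for brevity). The two containers are coupled through a shared emptyDir volume:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web                      # Main container: serves files from the shared directory
    image: nginx:alpine
    volumeMounts:
    - name: content
      mountPath: /usr/share/nginx/html
  - name: downloader               # Sidecar: periodically refreshes the shared directory
    image: busybox:latest
    command: ["sh", "-c", "while true; do wget -O /data/index.html http://example.com; sleep 300; done"]
    volumeMounts:
    - name: content
      mountPath: /data
  volumes:
  - name: content                  # Shared volume that couples the two containers
    emptyDir: {}
```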
In Kubernetes, pods are rarely created directly. Instead, controllers such as Deployments and Jobs are used to manage pods. Controllers can create and manage multiple pods, and they provide replica management, rolling upgrade, and self-healing capabilities. A controller generally uses a pod template to create the pods it manages.
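To show how a controller embeds a pod template, the following is a minimal Deployment sketch (the Deployment name and label are illustrative, not taken from this document); the template section has the same structure as a pod spec:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3                      # The controller keeps three pod replicas running
  selector:
    matchLabels:
      app: nginx
  template:                        # Pod template from which each replica is created
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: container-0
        image: nginx:alpine
```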
Container Specifications
Currently, three types of pods are provided: general-computing (used in general-computing namespaces), RDMA-accelerated, and GPU-accelerated (used in GPU-accelerated namespaces). You can use GPUs in CCI only if the namespace is of the GPU-accelerated type.
- Specifications of NVIDIA Tesla V100 32GB are as follows:
- NVIDIA Tesla V100 32GB x 1, 4 CPU cores, 32 GB memory
- NVIDIA Tesla V100 32GB x 2, 8 CPU cores, 64 GB memory
- NVIDIA Tesla V100 32GB x 4, 16 CPU cores, 128 GB memory
- NVIDIA Tesla V100 32GB x 8, 32 CPU cores, 256 GB memory
- Specifications of NVIDIA Tesla V100 16GB are as follows:
- NVIDIA Tesla V100 16GB x 1, 4 CPU cores, 32 GB memory
- NVIDIA Tesla V100 16GB x 2, 8 CPU cores, 64 GB memory
- NVIDIA Tesla V100 16GB x 4, 16 CPU cores, 128 GB memory
- NVIDIA Tesla V100 16GB x 8, 32 CPU cores, 256 GB memory
- The total number of CPU cores in a pod can be a value in the range of 0.25 to 32, or exactly 48 or 64. The number of CPU cores in each container must be an integer multiple of 0.25.
- The total memory size (in GB) of a pod is an integer from 1 to 512.
- The ratio of CPU cores to memory size in a pod ranges from 1:2 to 1:8.
- A pod can have a maximum of five containers. The minimum configuration of a container is 0.25 cores and 0.2 GB. The maximum configuration of a container is the same as that of a pod.
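To make the limits above concrete, the following is an illustrative Python sketch; the helper function is hypothetical and not part of CCI or Kubernetes. It checks a pod-level CPU/memory combination against the rules just listed:

```python
def is_valid_pod_spec(cpu_cores, memory_gb):
    """Return True if the pod-level CPU/memory combination satisfies the CCI limits."""
    # CPU: 0.25 to 32 cores in 0.25-core steps, or exactly 48 or 64 cores.
    cpu_ok = (0.25 <= cpu_cores <= 32 and cpu_cores * 4 == int(cpu_cores * 4)) \
             or cpu_cores in (48, 64)
    # Memory: an integer from 1 to 512 GB.
    mem_ok = isinstance(memory_gb, int) and 1 <= memory_gb <= 512
    # CPU-to-memory ratio must be between 1:2 and 1:8.
    ratio_ok = 2 * cpu_cores <= memory_gb <= 8 * cpu_cores
    return cpu_ok and mem_ok and ratio_ok
```

For example, `is_valid_pod_spec(0.5, 1)` is allowed (ratio 1:2), while `is_valid_pod_spec(0.5, 8)` is not (ratio 1:16 exceeds 1:8).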
Creating a Pod
Kubernetes resources can be described using YAML or JSON files. For more details about the YAML format, see YAML Syntax. The following example describes a pod named nginx. This pod contains a container named container-0 and uses the nginx:alpine image, 0.5-core CPU, and 1024 MB memory.
apiVersion: v1                     # Kubernetes API version
kind: Pod                          # Kubernetes resource type
metadata:
  name: nginx                      # Pod name
spec:                              # Pod specification
  containers:
  - image: nginx:alpine            # Image used is nginx:alpine
    name: container-0              # Container name
    resources:                     # Resources requested by the container. In CCI, the values of limits and requests must be the same.
      limits:
        cpu: 500m
        memory: 1024Mi
      requests:
        cpu: 500m
        memory: 1024Mi
  imagePullSecrets:                # Secret used to pull the image, which must be imagepull-secret.
  - name: imagepull-secret
As the comments above show, the YAML description file mainly includes:
- metadata: Information such as name, label, and namespace
- spec: Pod specification such as image and volume used
If you query a Kubernetes resource, you will also see a status field. This field indicates the status of the resource and does not need to be set when the resource is created. This example is a minimum set; other parameters will be described later.
For the parameter description of Kubernetes resources, see API Reference.
After the pod is defined, you can create it using kubectl. Assuming the YAML file is named nginx.yaml, run the following command to create the pod. The -f option indicates that the resource is created from a file.
$ kubectl create -f nginx.yaml -n $namespace_name
pod/nginx created
Using GPUs
You can use GPUs in CCI only if the namespace is of the GPU-accelerated type. To apply for GPU resources, you only need to specify GPU-related fields during container definition.
Drivers 410.104 and 418.126 are compatible with NVIDIA GPUs. The CUDA toolkit used in your application must meet the requirements listed in Table 1. For details about the compatibility between CUDA toolkits and drivers, see CUDA Compatibility at https://www.nvidia.com.
| NVIDIA GPU Driver Version | CUDA Toolkit Version |
|---|---|
| 410.104 | CUDA 10.0 (10.0.130) or earlier |
| 418.126 | CUDA 10.1 (10.1.105) or earlier |
You need to add the cri.cci.io/gpu-driver field to the metadata.annotations section of the pod to specify the GPU driver version to be used. The field can be set to one of the following values:
- gpu-410.104
- gpu-418.126
The following example shows how to create a pod with specifications of NVIDIA V100 16 GB x 1, 4 CPU cores, and 32 GB memory.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-test
  annotations:
    cri.cci.io/gpu-driver: gpu-418.126        # Specify the GPU driver version.
spec:
  containers:
  - image: tensorflow:latest
    name: container-0
    resources:
      limits:
        cpu: 4000m
        memory: 32Gi
        nvidia.com/gpu-tesla-v100-16GB: 1     # Apply for GPU resources. The value can be 1, 2, 4, or 8, indicating the number of GPU cards.
      requests:
        cpu: 4000m
        memory: 32Gi
        nvidia.com/gpu-tesla-v100-16GB: 1
  imagePullSecrets:
  - name: imagepull-secret
GPU-accelerated pods support the following GPU specifications:
- nvidia.com/gpu-tesla-v100-16GB: NVIDIA Tesla V100 16 GB GPU
- nvidia.com/gpu-tesla-v100-32GB: NVIDIA Tesla V100 32 GB GPU
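For example, to request the NVIDIA Tesla V100 32GB x 2 specification (8 CPU cores, 64 GB memory), the resources section of the container would follow the same pattern as the example above (a sketch, with only the resources fragment shown):

```yaml
    resources:
      limits:
        cpu: 8000m
        memory: 64Gi
        nvidia.com/gpu-tesla-v100-32GB: 2     # Two 32 GB V100 cards
      requests:
        cpu: 8000m
        memory: 64Gi
        nvidia.com/gpu-tesla-v100-32GB: 2
```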
Container Images
A container image is a special file system, which provides the programs, libraries, resources, and configuration files required for running containers. A container image also contains configuration parameters, for example, for anonymous volumes, environment variables, and users. An image does not contain any dynamic data. Its content remains unchanged after being built.
HUAWEI CLOUD SoftWare Repository for Container (SWR) has synchronized some common images from the container registry so that you can use the images named in the format of Image name:Tag (for example, nginx:alpine) on the internal network. You can query the synchronized images on the SWR console.
Viewing Pod Information
After the pod is created, you can run the kubectl get pods command to query the pod information, as shown below.
$ kubectl get pods -n $namespace_name
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          40s
The preceding output shows that the nginx pod is in the Running state, indicating that the pod is running. READY is 1/1, indicating that the pod contains one container and that container is Ready.
You can run the kubectl get command to query the configuration information about a pod. In the following command, -o yaml indicates that the pod is returned in YAML format. -o json indicates that the pod is returned in JSON format.
$ kubectl get pod nginx -o yaml -n $namespace_name
You can also run the kubectl describe command to view the pod details.
$ kubectl describe pod nginx -n $namespace_name
Deleting a Pod
When a pod is deleted, Kubernetes stops all containers in the pod. It first sends the SIGTERM signal to each container's process and waits for a grace period (30 seconds by default) for the container to stop. If a container has not stopped within that period, Kubernetes sends the SIGKILL signal to forcibly terminate the process.
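The 30-second default grace period can be adjusted per pod through the standard Kubernetes field spec.terminationGracePeriodSeconds, as in the sketch below (verify that your CCI environment honors this field):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  terminationGracePeriodSeconds: 60   # Wait up to 60s after SIGTERM before sending SIGKILL
  containers:
  - name: container-0
    image: nginx:alpine
```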
You can stop and delete a pod in multiple ways. For example, you can delete a pod by name, as shown below.
$ kubectl delete po nginx -n $namespace_name
pod "nginx" deleted
Delete multiple pods at one time.
$ kubectl delete po pod1 pod2 -n $namespace_name
Delete all pods.
$ kubectl delete po --all -n $namespace_name
pod "nginx" deleted
Delete pods by labels. For details about labels, see the next section.
$ kubectl delete po -l app=nginx -n $namespace_name
pod "nginx" deleted