Pod
What Is a Pod?
A pod is the smallest and simplest unit in the Kubernetes object model that you create or deploy. A pod encapsulates one or more containers, storage resources, a unique network IP address, and options that govern how the container(s) should run.
Pods can be used in either of the following ways:
- One container runs in one pod. This is the most common usage of pods in Kubernetes. You can view the pod as a single encapsulated container, but Kubernetes directly manages pods instead of containers.
- Multiple containers that need to be tightly coupled and share resources run in one pod. In this scenario, an application consists of a main container and several sidecar containers. For example, the main container is a web server that serves files from a fixed directory, and a sidecar container periodically downloads files into that directory. A minimal example of such a pod follows this list.
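Below is a minimal sketch of such a multi-container pod. The image names, the download URL, and the shared directory path are illustrative only, and the CCI-specific settings (resources and imagePullSecrets, shown in the creation example later in this section) are omitted for brevity.
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar            # Illustrative pod name
spec:
  volumes:
  - name: shared-data               # emptyDir volume shared by both containers
    emptyDir: {}
  containers:
  - name: web                       # Main container: web server serving files from the shared directory
    image: nginx:alpine
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - name: downloader                # Sidecar: periodically downloads files into the shared directory
    image: busybox
    command: ["sh", "-c", "while true; do wget -q -O /data/index.html http://example.com; sleep 300; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data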
In Kubernetes, pods are rarely created directly. Instead, controllers such as Deployments and Jobs are used to manage pods. Controllers can create and manage multiple pods, and they provide replica management, rolling upgrade, and self-healing capabilities. A controller generally uses a pod template to create the corresponding pods.
Container Specifications
Currently, three types of pods are provided: general-computing (used in general-computing namespaces), RDMA-accelerated, and GPU-accelerated (used in GPU-accelerated namespaces). For details about the specifications, see "Pod Specifications" in Notes and Constraints.
You can use GPUs in CCI only if the namespace is of the GPU-accelerated type.
Creating a Pod
Kubernetes resources can be described using YAML or JSON files. For more details about the YAML format, see YAML Syntax. The following example describes a pod named nginx. This pod contains a container named container-0 and uses the nginx:alpine image, 0.5 vCPUs, and 1024 MiB memory.
apiVersion: v1                      # Kubernetes API version
kind: Pod                           # Kubernetes resource type
metadata:
  name: nginx                       # Pod name
spec:                               # Pod specification
  containers:
  - image: nginx:alpine             # Image used: nginx:alpine
    name: container-0               # Container name
    resources:                      # Resources requested for the container. In CCI, the values of limits and requests must be the same.
      limits:
        cpu: 500m                   # 0.5 vCPUs
        memory: 1024Mi
      requests:
        cpu: 500m                   # 0.5 vCPUs
        memory: 1024Mi
  imagePullSecrets:                 # Secret used to pull the image, which must be imagepull-secret
  - name: imagepull-secret
As indicated by the comments, the YAML description file mainly includes the following:
- metadata: Information such as name, label, and namespace
- spec: Pod specification such as image and volume used
If you query a Kubernetes resource, you can see a status field. This field describes the current status of the resource and does not need to be set when the resource is created. The preceding example is a minimal set of parameters; other parameter definitions will be described later.
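For example, a running pod typically reports a status block similar to the following. This is an illustrative excerpt; the actual fields and values, such as the IP address, depend on the pod.
status:
  phase: Running                    # Current lifecycle phase of the pod
  podIP: 192.168.0.10               # IP address assigned to the pod (illustrative value)
  conditions:
  - type: Ready                     # The pod is ready to serve requests
    status: "True"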
For the parameter description of Kubernetes resources, see API Reference.
After the pod is defined, you can create it using kubectl. If the YAML file is named nginx.yaml, run the following command to create the pod. The -f option indicates that the resource is created from a file.
$ kubectl create -f nginx.yaml -n $namespace_name
pod/nginx created
The kernel version of the OS that the containers run on has been upgraded from 4.18 to 5.10.
GPU-accelerated pods support the following GPU specifications (a usage sketch follows this list):
- nvidia.com/gpu-tesla-v100-16GB: NVIDIA Tesla V100 16GB
- nvidia.com/gpu-tesla-v100-32GB: NVIDIA Tesla V100 32GB
- nvidia.com/gpu-tesla-t4: NVIDIA Tesla T4 GPU
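As a sketch of how such a resource might be requested (an assumption to verify against the API Reference; the CPU, memory, and GPU values are illustrative), the GPU resource name is added to the container's resources, with limits equal to requests as CCI requires.
spec:
  containers:
  - name: container-0
    image: nginx:alpine
    resources:
      limits:
        cpu: 4000m
        memory: 32Gi
        nvidia.com/gpu-tesla-v100-16GB: 1   # One V100 16GB GPU (illustrative)
      requests:
        cpu: 4000m
        memory: 32Gi
        nvidia.com/gpu-tesla-v100-16GB: 1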
Container Images
A container image is a special file system, which provides the programs, libraries, resources, and configuration files required for running containers. A container image also contains configuration parameters, for example, for anonymous volumes, environment variables, and users. An image does not contain any dynamic data. Its content remains unchanged after being built.
SoftWare Repository for Container (SWR) has synchronized some common images from the container registry so that you can use the images named in the format of Image name:Tag (for example, nginx:alpine) on the internal network. You can query the synchronized images on the SWR console.
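If you push your own image to SWR, reference it by its full repository address instead. The address format below is only an illustration (copy the actual address from the SWR console); the rest of the spec follows the creation example above.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-private
spec:
  containers:
  - name: container-0
    image: swr.<region>.<domain>/<organization>/nginx:v1   # Full image address copied from the SWR console (illustrative format)
    resources:
      limits:
        cpu: 500m
        memory: 1024Mi
      requests:
        cpu: 500m
        memory: 1024Mi
  imagePullSecrets:
  - name: imagepull-secret           # Secret used to pull the private image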
Viewing Pod Information
After the pod is created, you can run the kubectl get pods command to query the pod information, as shown below.
$ kubectl get pods -n $namespace_name
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          40s
The output shows that the nginx pod is in the Running state, which means the pod is running properly. READY is 1/1, which means the pod contains one container and that container is in the Ready state.
You can run the kubectl get command to query the configuration information about a pod. In the following command, -o yaml indicates that the pod is returned in YAML format. -o json indicates that the pod is returned in JSON format.
$ kubectl get pod nginx -o yaml -n $namespace_name
You can also run the kubectl describe command to view the pod details.
$ kubectl describe pod nginx -n $namespace_name
Deleting a Pod
When a pod is deleted, Kubernetes stops all containers in the pod. It sends a SIGTERM signal to the process in each container and waits for a grace period (30 seconds by default) for the container to stop. If a container does not stop within this period, Kubernetes sends a SIGKILL signal to forcibly terminate the process.
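If your application needs a longer (or shorter) grace period to shut down cleanly, the standard Kubernetes field terminationGracePeriodSeconds can be set in the pod spec. Whether CCI applies this field exactly as upstream Kubernetes does is an assumption to verify, and the 60-second value below is illustrative.
spec:
  terminationGracePeriodSeconds: 60  # Illustrative: wait up to 60s after SIGTERM before sending SIGKILL
  containers:
  - name: container-0
    image: nginx:alpine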
You can stop and delete a pod in multiple ways. For example, you can delete a pod by name, as shown below.
$ kubectl delete po nginx -n $namespace_name
pod "nginx" deleted
Delete multiple pods at one time.
$ kubectl delete po pod1 pod2 -n $namespace_name
Delete all pods.
$ kubectl delete po --all -n $namespace_name
pod "nginx" deleted
Delete pods by labels. For details about labels, see the next section.
$ kubectl delete po -l app=nginx -n $namespace_name
pod "nginx" deleted