Pod: the Smallest Scheduling Unit in Kubernetes
Pod
A pod is the smallest and simplest unit in the Kubernetes object model that you create or deploy. A pod encapsulates one or more containers, storage volumes, a unique network IP address, and options that govern how the containers should run.
Pods can be used in either of the following ways:
- A container is running in a pod. This is the most common usage of pods in Kubernetes. You can view the pod as a single encapsulated container, but Kubernetes directly manages pods instead of containers.
- Multiple containers that need to be coupled and share resources run in a pod. In this scenario, an application contains a main container and one or more sidecar containers, as shown in the example after this list. For example, the main container is a web server that serves files from a fixed directory, and a sidecar container periodically downloads files to that directory.
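The following manifest is a minimal sketch of such a multi-container pod; the pod name, image names, command, volume name, and mount paths are for illustration only. The two containers share an emptyDir volume so that files written by the sidecar are visible to the web server.

apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar            # Example pod name
spec:
  containers:
  - image: nginx:alpine             # Main container: web server serving files from the shared directory
    name: main-container
    volumeMounts:
    - name: shared-data
      mountPath: /usr/share/nginx/html
  - image: alpine                   # Sidecar container: periodically refreshes files in the shared directory
    name: sidecar-container
    command: ["/bin/sh", "-c", "while true; do wget -q -O /data/index.html http://example.com; sleep 60; done"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
  volumes:
  - name: shared-data               # emptyDir volume shared by both containers
    emptyDir: {}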
In Kubernetes, pods are rarely created directly. Instead, controllers such as Deployments and Jobs are used to manage pods. Controllers can create and manage multiple pods, and they provide replica management, rolling upgrade, and self-healing capabilities. A controller generally uses a pod template to create the pods it manages, as shown in the example below.
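The following Deployment manifest is a minimal sketch of this pattern; the Deployment name, label, and replica count are for illustration only. The spec.template field is the pod template from which the controller creates and re-creates its pods.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment            # Example Deployment name
spec:
  replicas: 2                       # The controller keeps two pod replicas running
  selector:
    matchLabels:
      app: nginx
  template:                         # Pod template used to create and re-create pods
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:alpine
        name: container-0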
Creating a Pod
Kubernetes resources can be described using YAML or JSON files. The following example describes a pod named nginx. This pod contains a container named container-0 and uses the nginx:alpine image, 100m CPU, and 200 MiB memory.
apiVersion: v1                      # Kubernetes API version
kind: Pod                           # Kubernetes resource type
metadata:
  name: nginx                       # Pod name
spec:                               # Pod specifications
  containers:
  - image: nginx:alpine             # The image used is nginx:alpine.
    name: container-0               # Container name
    resources:                      # Resources required for a container
      limits:
        cpu: 100m
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 200Mi
  imagePullSecrets:                 # Secret used to pull the image, which must be default-secret on CCE
  - name: default-secret
As the annotations show, the YAML description file includes:
- metadata: information such as name, label, and namespace
- spec: pod specification such as image and volume used
If you query a Kubernetes resource, you can see the status field. This field describes the status of the resource and does not need to be set when the resource is created. This example is a minimum set. Other parameter definitions will be described later.
After the pod is defined, you can create it using kubectl. Assuming the preceding YAML is saved in a file named nginx.yaml, run the following command to create the pod. The -f option indicates that the resource is created from a file.
$ kubectl create -f nginx.yaml
pod/nginx created
After the pod is created, you can run the kubectl get pods command to query the pod information, as shown below.
$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          40s
The preceding output shows that the nginx pod is in the Running state, which means the pod is running. READY is 1/1, indicating that the pod contains one container and that container is in the Ready state.
You can run the kubectl get command to query the configuration of a pod. In the following command, -o yaml indicates that the pod is returned in YAML format; use -o json to return it in JSON format instead.
$ kubectl get pod nginx -o yaml
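If you only need a specific field, kubectl also supports JSONPath output. The following sketch prints only the pod phase; the path assumes the standard pod status structure.

$ kubectl get pod nginx -o jsonpath='{.status.phase}'
Running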
You can also run the kubectl describe command to view the pod details.
$ kubectl describe pod nginx
When a pod is deleted, Kubernetes stops all containers in the pod. It sends the SIGTERM signal to the main process of each container and waits for a grace period (30 seconds by default) for the container to stop. If a container does not stop within the period, Kubernetes sends the SIGKILL signal to forcibly terminate the process.
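If the default 30-second grace period does not suit your application, it can be adjusted. The following is a minimal sketch that sets terminationGracePeriodSeconds in the pod spec (the 60-second value is only an example); kubectl delete also accepts a --grace-period option for a one-off override.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  terminationGracePeriodSeconds: 60   # Wait up to 60 seconds after SIGTERM before sending SIGKILL
  containers:
  - image: nginx:alpine
    name: container-0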
You can stop and delete a pod in several ways. For example, you can delete a pod by name, as shown below.
$ kubectl delete po nginx
pod "nginx" deleted
Delete multiple pods at one time.
$ kubectl delete po pod1 pod2
Delete all pods.
$ kubectl delete po --all
pod "nginx" deleted
Delete pods by labels. For details about labels, see Label for Managing Pods.
$ kubectl delete po -l app=nginx
pod "nginx" deleted
Environment Variables
An environment variable is a variable set in the container's running environment.
Environment variables add flexibility to workload configuration. The values you assign to environment variables when creating a container take effect while the container runs, which saves you the trouble of rebuilding the container image to change a configuration.
The following shows how to use an environment variable. You only need to configure the spec.containers.env field.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx:alpine
    name: container-0
    resources:
      limits:
        cpu: 100m
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 200Mi
    env:                            # Environment variable
    - name: env_key
      value: env_value
  imagePullSecrets:
  - name: default-secret
Run the following command to check the environment variables in the container. The value of the env_key environment variable is env_value.
$ kubectl exec -it nginx -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=nginx
TERM=xterm
env_key=env_value
Environment variables can also reference a ConfigMap or a secret. For details, see Referencing a ConfigMap as an Environment Variable and Referencing a Secret as an Environment Variable.
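As a brief sketch, an environment variable can take its value from a ConfigMap key through valueFrom.configMapKeyRef. The ConfigMap name and key below are placeholders for illustration and must exist in the same namespace as the pod.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx:alpine
    name: container-0
    env:
    - name: env_key
      valueFrom:
        configMapKeyRef:
          name: my-configmap        # Placeholder ConfigMap name
          key: config_key           # Placeholder key in that ConfigMap
  imagePullSecrets:
  - name: default-secret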
Setting Container Startup Commands
Starting a container means starting its main process, and some preparations are often needed before the main process starts. For example, you may need to configure or initialize a MySQL database before running the MySQL server. You can set ENTRYPOINT or CMD in the Dockerfile when creating an image. As shown in the following example, the ENTRYPOINT ["top", "-b"] instruction is set in the Dockerfile, and this command is executed during container startup.
FROM ubuntu
ENTRYPOINT ["top", "-b"]
When calling the Kubernetes API, you only need to configure the containers.command field of the pod, which overrides the ENTRYPOINT defined in the image. This field is of the list type: the first element is the command to be executed, and the subsequent elements are the command arguments.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx:alpine
    name: container-0
    resources:
      limits:
        cpu: 100m
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 200Mi
    command:                        # Startup command
    - top
    - "-b"
  imagePullSecrets:
  - name: default-secret
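The startup command and its arguments can also be split across the command and args fields, which override the image's ENTRYPOINT and CMD respectively. The following fragment is a sketch of the equivalent container-level configuration:

    command:                        # Overrides ENTRYPOINT in the image
    - top
    args:                           # Overrides CMD in the image and is passed to command as arguments
    - "-b"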
Container Lifecycle
Kubernetes provides container lifecycle hooks. The hooks enable containers to run code triggered by events during their management lifecycle. For example, if you want a container to perform a certain operation before it is stopped, you can register a hook. The following lifecycle hooks are provided:
- postStart: triggered immediately after the container is started
- preStop: triggered immediately before the container is stopped
You only need to set the lifecycle.postStart or lifecycle.preStop parameter of the pod, as shown in the following:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx:alpine
    name: container-0
    resources:
      limits:
        cpu: 100m
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 200Mi
    lifecycle:
      postStart:                    # Post-start processing
        exec:
          command:
          - "/postStart.sh"
      preStop:                      # Pre-stop processing
        exec:
          command:
          - "/preStop.sh"
  imagePullSecrets:
  - name: default-secret
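Besides exec, a hook can also issue an HTTP request through httpGet. The following fragment is a hypothetical sketch; the path and port are examples only and are not part of the preceding manifest.

    lifecycle:
      postStart:
        httpGet:                    # Sends an HTTP GET request to the container after it starts
          path: /healthz            # Example path, for illustration only
          port: 80                  # Example port, for illustration only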