Pod: the Smallest Scheduling Unit in Kubernetes
Overview of Pod
A pod is the smallest and simplest unit in the Kubernetes object model that you create or deploy. A pod is a group of one or more containers, with shared storage and network resources, and a specification for how to run the containers. Each pod has a separate IP address.
Pods can be used in either of the following ways:
- A pod runs only one container. This is the most common usage of pods in Kubernetes. You can think of such a pod as a wrapper around a single container; Kubernetes manages pods rather than managing the containers directly.
- A pod runs multiple containers that need to be tightly coupled. In this scenario, a pod contains a main container and several sidecar containers, as shown in Figure 1. For example, the main container is a web server that provides file services from a fixed directory, and sidecar containers periodically download files to this fixed directory.
In Kubernetes, pods are rarely created directly. Instead, pods are managed by controllers such as Deployments and Jobs. A controller typically uses a pod template to create pods, and it can manage multiple pods to provide functions such as replica management, rolling upgrades, and self-healing.
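To illustrate how a controller wraps a pod template, the following is a minimal sketch of a Deployment; the name, label, and replica count are illustrative, and everything under spec.template is the same kind of pod specification described in the rest of this section.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx                       # Deployment name (illustrative)
spec:
  replicas: 2                       # Number of pod replicas the controller maintains (illustrative)
  selector:
    matchLabels:
      app: nginx                    # Label used to select the pods managed by this Deployment
  template:                         # Pod template used to create the pods
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:alpine
        name: container-0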
Creating a Pod
Kubernetes resources can be described using YAML or JSON files. The following example YAML file describes a pod named nginx. This pod contains a container named container-0 that uses the nginx:alpine image and requests 100m of CPU and 200 MiB of memory.
apiVersion: v1                      # Kubernetes API version
kind: Pod                           # Kubernetes resource type
metadata:
  name: nginx                       # Pod name
spec:                               # A pod specification
  containers:
  - image: nginx:alpine             # Image nginx:alpine
    name: container-0               # Container name
    resources:                      # Resources required for this container
      limits:
        cpu: 100m
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 200Mi
  imagePullSecrets:                 # Secret used to pull the image, which must be default-secret on CCE
  - name: default-secret
As shown in the YAML comments, the YAML file includes:
- metadata: information such as name, label, and namespace
- spec: the pod specification, such as the image and volumes used
If you query a Kubernetes resource, you will also see a status field, which describes the current state of the resource. This field does not need to be set when the resource is created. The preceding example shows a minimal set of parameters. Other parameters will be described later.
After defining the pod, you can use kubectl to create it. Assuming the preceding YAML file is saved as nginx.yaml, run the following command to create the pod. -f indicates that the pod is created from a file.
$ kubectl create -f nginx.yaml
pod/nginx created
After the pod is created, you can run the kubectl get pods command to obtain the pod status.
$ kubectl get pods
NAME      READY     STATUS    RESTARTS   AGE
nginx     1/1       Running   0          40s
The preceding command output indicates that the nginx pod is in the Running state. READY is 1/1, indicating that this pod has one container that is in the Ready state.
You can run the kubectl get command to obtain the details of a pod. -o yaml returns the details in YAML format, and -o json returns them in JSON format.
$ kubectl get pod nginx -o yaml
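The same details can be returned in JSON format:
$ kubectl get pod nginx -o json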
You can also run the kubectl describe command to view the pod details.
$ kubectl describe pod nginx
Before deleting a pod, Kubernetes terminates all the containers that are part of that pod. Kubernetes sends a SIGTERM signal to each container's main process and waits for a grace period (30 seconds by default) for it to shut down gracefully. If the process has not exited by the end of this period, Kubernetes sends a SIGKILL signal to forcibly terminate it.
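If your application needs more time to shut down, the grace period can be adjusted per pod through the terminationGracePeriodSeconds field in the pod spec. A minimal sketch, with an illustrative 60-second value:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  terminationGracePeriodSeconds: 60 # Time to wait after SIGTERM before SIGKILL is sent (illustrative value)
  containers:
  - image: nginx:alpine
    name: container-0
  imagePullSecrets:
  - name: default-secret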
You can delete a pod in multiple ways. For example, you can delete a pod by name, as shown below:
$ kubectl delete po nginx
pod "nginx" deleted
Delete multiple pods at one time:
$ kubectl delete po pod1 pod2
Delete all pods:
$ kubectl delete po --all
pod "nginx" deleted
Delete pods by labels. For details about labels, see Label for Managing Pods.
$ kubectl delete po -l app=nginx
pod "nginx" deleted
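To check in advance which pods a label selector matches, you can list them with the same selector (assuming pods carrying the app=nginx label exist):
$ kubectl get po -l app=nginx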
Environment Variables
You can use environment variables to set up a container runtime environment.
Environment variables add flexibility to configuration. Custom environment variables take effect when the container runs, so you do not need to rebuild the container image.
As shown in the following example, you only need to configure the spec.containers.env field.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx:alpine
    name: container-0
    resources:
      limits:
        cpu: 100m
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 200Mi
    env:                            # Environment variable
    - name: env_key
      value: env_value
  imagePullSecrets:
  - name: default-secret
Run the following command to check the environment variables in the container. The value of the env_key environment variable is env_value.
$ kubectl exec -it nginx -- env
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
HOSTNAME=nginx
TERM=xterm
env_key=env_value
Pods can also use ConfigMaps and secrets as environment variables. For details, see Referencing a ConfigMap as an Environment Variable and Referencing a Secret as an Environment Variable.
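As a sketch of the ConfigMap case, an environment variable can reference a ConfigMap key through valueFrom. The ConfigMap name nginx-config, the key log_level, and the variable name LOG_LEVEL below are illustrative and assume such a ConfigMap already exists.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx:alpine
    name: container-0
    env:
    - name: LOG_LEVEL               # Variable name inside the container (illustrative)
      valueFrom:
        configMapKeyRef:
          name: nginx-config        # Assumed ConfigMap name
          key: log_level            # Assumed key in that ConfigMap
  imagePullSecrets:
  - name: default-secret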
Setting Container Startup Commands
Starting a container means starting its main process, and some preparations are often required before the main process can start. For example, you may need to configure or initialize a MySQL database before running the MySQL server. These operations can be performed by configuring ENTRYPOINT or CMD in the Dockerfile when the image is created. As shown in the following example, if you configure ENTRYPOINT ["top", "-b"] in the Dockerfile, the command is run automatically when the container starts.
FROM ubuntu
ENTRYPOINT ["top", "-b"]
When calling the Kubernetes API to create a pod, you only need to configure the containers.command field to define the command and its arguments. The first item in the list is the command, and the items that follow are its arguments.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx:alpine
    name: container-0
    resources:
      limits:
        cpu: 100m
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 200Mi
    command:                        # Boot command
    - top
    - "-b"
  imagePullSecrets:
  - name: default-secret
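Alternatively, the executable and its arguments can be split between the command and args fields, which override the image's ENTRYPOINT and CMD respectively. The following fragment, used in place of the command section above, is equivalent:
    command:                        # Overrides the image ENTRYPOINT
    - top
    args:                           # Overrides the image CMD
    - "-b"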
Container Lifecycle
Kubernetes provides container lifecycle hooks so that containers can be aware of events in their management lifecycle and run handler code when the corresponding hook is triggered. For example, if you want a container to perform a certain operation before it is stopped, you can register a preStop hook. The following lifecycle hooks are provided:
- postStart: triggered immediately after the container is started
- preStop: triggered immediately before the container is stopped
You only need to set the lifecycle.postStart or lifecycle.preStop parameter of a container, as shown in the following example:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - image: nginx:alpine
    name: container-0
    resources:
      limits:
        cpu: 100m
        memory: 200Mi
      requests:
        cpu: 100m
        memory: 200Mi
    lifecycle:
      postStart:                    # Post-start processing
        exec:
          command:
          - "/postStart.sh"
      preStop:                      # Pre-stop processing
        exec:
          command:
          - "/preStop.sh"
  imagePullSecrets:
  - name: default-secret
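As a concrete example, a preStop hook is often used to let a web server finish in-flight requests before it exits. The following fragment is a sketch that assumes the nginx binary is available in the container; it would replace the preStop section above:
      preStop:                      # Ask nginx to shut down gracefully
        exec:
          command:
          - nginx
          - "-s"
          - quit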