What Should I Do If a Workload Remains in the Creating State?
Symptom
The workload remains in the Creating state.
Troubleshooting Process
The possible causes are described here in order of likelihood.
Check them one by one until you locate the cause of the fault.
Check Item 1: Whether the cce-pause Image Was Deleted by Mistake
Symptom
When a workload is created, an error message is displayed indicating that the sandbox cannot be created because the cce-pause:3.1 image fails to be pulled.
Failed to create pod sandbox: rpc error: code = Unknown desc = failed to get sandbox image "cce-pause:3.1": failed to pull image "cce-pause:3.1": failed to pull and unpack image "docker.io/library/cce-pause:3.1": failed to resolve reference "docker.io/library/cce-pause:3.1": pulling from host **** failed with status code [manifests 3.1]: 400 Bad Request
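If you are unsure whether this is the cause, you can check the pod events. In the following example, the pod name and namespace are placeholders:
kubectl describe pod <pod-name> -n <namespace>
If this check item applies, the Events section of the output contains a "Failed to create pod sandbox" message similar to the one above.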
Possible Causes
cce-pause is a system image added during node creation. If this image is deleted by mistake, sandboxes cannot be created and workload creation fails.
Solution
- Log in to the faulty node.
- Decompress the cce-pause image installation package.
tar -xzvf /opt/cloud/cce/package/node-package/pause-*.tgz
- Import the image (a verification example follows this procedure).
- Docker nodes:
docker load -i ./pause/package/image/cce-pause-*.tar
- containerd nodes:
ctr -n k8s.io images import --all-platforms ./pause/package/image/cce-pause-*.tar
- Create a workload.
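To confirm that the image was imported, you can run either of the following commands on the node. These are examples only; the exact image name and tag (for example, 3.1) may vary with the cluster version:
- Docker nodes:
docker images | grep cce-pause
- containerd nodes:
ctr -n k8s.io images ls | grep cce-pause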
Check Item 2: Whether Node Specifications Were Modified After the CPU Management Policy Was Enabled in the Cluster
The kubelet option cpu-manager-policy defaults to static, which grants enhanced CPU affinity and exclusivity to pods with certain resource characteristics on the node. If you modify the specifications of a CCE node on the ECS console, the CPU information recorded by kubelet in the cpu_manager_state file no longer matches the node's new CPU configuration. As a result, workloads on the node cannot be restarted or created.
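You can check the policy currently recorded on the node. The path below assumes the default CCE kubelet data directory used in the deletion command that follows:
cat /mnt/paas/kubernetes/kubelet/cpu_manager_state
If the policyName field in the output is static, this check item applies.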
- Log in to the CCE node (ECS) and delete the cpu_manager_state file.
Example command for deleting the file:
rm -rf /mnt/paas/kubernetes/kubelet/cpu_manager_state
- Restart the node or kubelet. To restart kubelet, run the following command:
systemctl restart kubelet
Verify that workloads on the node can be successfully restarted or created.
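For example, you can check the kubelet status and the pods scheduled to the node (the node name is a placeholder):
systemctl status kubelet
kubectl get pods --all-namespaces -o wide --field-selector spec.nodeName=<node-name>
The pods on the node should return to the Running state.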
For details, see What Should I Do If I Fail to Restart or Create Workloads on a Node After Modifying the Node Specifications?