Why Is the Cluster Console Unavailable After the Master Node Is Shut Down?
Background
After the master node is shut down, the cluster console is unavailable.
Procedure
The community version of Cilium does not remove the Cilium endpoint for pods in the Terminating state. As a result, some requests are still distributed to the stopped node and fail. Perform the following operations:
- Run the following command to delete the pod in the Terminating status:
kubectl get pods -n kube-system | grep Terminating | awk '{print $1}' | xargs kubectl delete pods -n kube-system
- Run the following command to check whether any pods are still abnormal:
kubectl get pods -n kube-system
- After several minutes, the cluster console works properly again.
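For reference, the deletion command in the first step is a text-processing pipeline: `kubectl get pods` lists the pods, `grep` keeps only the lines containing Terminating, `awk` extracts the first column (the pod name), and `xargs` passes those names to `kubectl delete pods`. The sketch below demonstrates only the filtering logic on sample output; the pod names and columns are hypothetical stand-ins for real `kubectl get pods -n kube-system` output.

```shell
# Hypothetical output resembling `kubectl get pods -n kube-system`
sample_output='cilium-abc12    0/1   Terminating   0   12m
coredns-xyz34   1/1   Running       0   30m
cilium-def56    0/1   Terminating   0   12m'

# grep keeps only pods in the Terminating state;
# awk prints the first column, i.e., the pod name.
# In the actual procedure, these names are then piped to:
#   xargs kubectl delete pods -n kube-system
printf '%s\n' "$sample_output" | grep Terminating | awk '{print $1}'
```

Running the sketch prints only the two Terminating pod names, which is exactly the list that `xargs` would hand to `kubectl delete pods` in the real command.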