FAQ
Symptom 1: Pods cannot be scheduled to CCI. When kubectl get node is run on the CCE cluster console, the output shows that the virtual-kubelet node is in the SchedulingDisabled state.
Cause: CCI resources are sold out, so scheduling to CCI fails and the bursting node is locked (set to the SchedulingDisabled state) for half an hour. During this period, pods cannot be scheduled to CCI.
Solution: On the CCE cluster console, use kubectl to check the status of the bursting node. If the node is locked, you can manually unlock it, as shown below.
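A minimal sketch of checking and unlocking the node, assuming the bursting node uses the default name virtual-kubelet and that unlocking corresponds to uncordoning the node (kubectl uncordon removes the SchedulingDisabled state so that pods can be scheduled again):
kubectl get node virtual-kubelet
kubectl uncordon virtual-kubelet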
Symptom 2: Elastic scheduling to CCI is unavailable.
Cause: The subnet where the CCE cluster resides overlaps with 10.247.0.0/16, the CIDR block reserved for Services in the CCI namespace.
Solution: Reset the subnet of the CCE cluster to one that does not overlap with 10.247.0.0/16. You can check for an overlap as shown below.
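One illustrative way to spot such an overlap (not from the original documentation) is to list the node internal IP addresses:
kubectl get node -o wide
If the INTERNAL-IP column shows addresses in the 10.247.0.0/16 range, the cluster subnet overlaps with the CIDR block reserved for CCI Services. The exact subnet CIDR can also be confirmed in the VPC console.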
Symptom 3: After the bursting add-on is rolled back from 1.5.18 or later to a version earlier than 1.5.18, pods cannot be accessed through the Service.
Cause: In add-on 1.5.18 and later, the sidecar injected into each pod newly scaled to CCI is incompatible with add-on versions earlier than 1.5.18. After the add-on is rolled back, Service access to these pods therefore fails. Pods that were scaled to CCI while the add-on version was earlier than 1.5.18 are not affected.
Solutions:
- Solution 1: Upgrade the add-on to 1.5.18 or later again.
- Solution 2: Delete the pods that cannot be accessed through the Service and create new ones (see the example after this list). The new pods scaled to CCI can be accessed normally.
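A minimal sketch for Solution 2, assuming the affected pods are managed by a workload controller such as a Deployment so that deleted pods are recreated automatically; the pod name and namespace below are placeholders:
kubectl delete pod <pod-name> -n <namespace>
If the pods are not managed by a workload controller, recreate them from their original manifests after deletion.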