Updated on 2025-08-15 GMT+08:00
Using Lite Cluster Resources
- Using Snt9B for Distributed Training in a Lite Cluster Resource Pool
- Performing PyTorch NPU Distributed Training in a ModelArts Lite Resource Pool Using Ranktable-based Route Planning
- Using Snt9B for Inference in a Lite Cluster Resource Pool
- Using Ascend FaultDiag to Diagnose Logs in the ModelArts Lite Cluster Resource Pool
- Mounting an SFS Turbo File System to a Lite Cluster