Updated on 2025-11-04 GMT+08:00
FAQs Related to LLM Fine-Tuning and Training
- How Do I Enable a Model to Learn Domain-Specific Knowledge in an Unsupervised Manner If the Data Volume Is Insufficient for Incremental Pre-training?
- How Do I Adjust Training Parameters to Maximize Pangu Model Performance?
- How Do I Determine Whether the Pangu Model Training Status Is Normal?
- How Do I Evaluate Whether the Fine-Tuned Pangu Model Is Normal?
- How Do I Adjust Inference Parameters to Maximize Pangu Model Performance?
- Why Does the Fine-Tuned Pangu Model Always Repeat the Same Answer?
- Why Does the Fine-Tuned Pangu Model Generate Garbled Characters?
- Why Is the Answer of the Fine-Tuned Pangu Model Truncated Abnormally?
- Why Can the Fine-Tuned Pangu Model Only Answer Questions from the Training Samples?
- Why Does the Fine-Tuned Pangu Model Return Different Answers to the Same Question in the Training Sample?
- Why Is the Performance of the Fine-Tuned Pangu Model in Actual Scenarios Worse Than That During Evaluation?
- Why Is the Performance of the Fine-Tuned Pangu Model Unsatisfactory in Multi-Turn Dialogues?
- Why Is the Performance of the Fine-Tuned Pangu Model Unsatisfactory When the Data Volume Is Sufficient?
- Why Is the Performance of the Fine-Tuned Pangu Model Unsatisfactory Even Though Both the Data Volume and Quality Meet Requirements?