Updated on 2025-11-04 GMT+08:00
Why Is the Answer of the Fine-Tuned Pangu Model Truncated Abnormally?
After you deploy a fine-tuned model and ask it a question from the target task, the answer is incomplete and abnormally truncated. Locate the fault as follows:
- Inference parameter settings: Check the max_token parameter. Increasing its value raises the maximum length of the model's answer and avoids truncation (see the request sketch after this list). Note that this parameter has an upper limit; set it based on the requirements of the target task and the maximum output length the model supports.
- Model specifications: The maximum output length varies by model specification. If the target task requires answers longer than the model supports, choose a model specification with a larger maximum output length.
- Data quality: Check whether the training data itself contains abnormally truncated answers. If it does, cleanse the data using rules (see the cleansing sketch below).
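The following is a minimal sketch of raising the output-length cap in an inference request. The endpoint URL, request schema, and field names (ENDPOINT, prompt, max_token, text) are hypothetical placeholders, not the actual Pangu inference API; check your deployment's API reference for the real names.

```python
import requests

ENDPOINT = "https://example.com/v1/infer"  # hypothetical endpoint of the deployed model

def ask(question: str, max_token: int = 1024) -> str:
    """Send a question and cap the answer at max_token tokens."""
    payload = {
        "prompt": question,
        # Raise max_token if answers are cut off, but keep it within the
        # maximum output length the model specification supports.
        "max_token": max_token,
    }
    resp = requests.post(ENDPOINT, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json().get("text", "")
```

If answers are still truncated after max_token is raised to its upper limit, the constraint comes from the model specification itself; see the second item above.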
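For the data-quality check, the sketch below shows one possible rule-based cleansing pass: it drops training samples whose answers do not end with terminal punctuation, a common sign of truncation. The JSONL layout, the "answer" field name, and the punctuation rule are assumptions; adapt them to your dataset schema and language.

```python
import json

# Terminal punctuation accepted as a "complete" ending (English and Chinese).
TERMINALS = (".", "!", "?", "。", "！", "？")

def looks_truncated(answer: str) -> bool:
    """Flag answers that do not end with terminal punctuation."""
    return not answer.rstrip().endswith(TERMINALS)

def cleanse(path: str, out_path: str) -> int:
    """Copy non-truncated samples to out_path; return the number removed."""
    removed = 0
    with open(path, encoding="utf-8") as src, \
         open(out_path, "w", encoding="utf-8") as dst:
        for line in src:
            sample = json.loads(line)
            if looks_truncated(sample.get("answer", "")):
                removed += 1  # skip samples that already look cut off
                continue
            dst.write(json.dumps(sample, ensure_ascii=False) + "\n")
    return removed
```

Fine-tuning on the cleansed file prevents the model from learning to imitate truncated answers.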
Parent topic: FAQs Related to LLM Fine-Tuning and Training