Why Does the Fine-Tuned Pangu Model Generate Garbled Characters?
When you ask a fine-tuned model a question within its target task, the answer may contain characters from other languages, abnormal symbols, or garbled text. Locate the fault as follows:
- Data quality: Check whether the training data contains abnormal characters. If it does, cleanse the data with rule-based filtering (see the first sketch after this list).
- Training parameter settings: Poor data quality combined with improper training parameters causes overfitting, which makes this symptom more pronounced. Check training parameters such as epoch and learning_rate, and reduce their values to mitigate the risk of overfitting (see the second sketch after this list).
- Inference parameter settings: Check inference parameters such as temperature and top_p. Reduce one of them at a time to make outputs more deterministic and avoid abnormal content (see the third sketch after this list).
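The following is a minimal rule-based cleansing sketch in Python. The file names, the sample schema (`input`/`output` fields), and the allowed character set are assumptions; adjust them to your dataset layout and target languages.

```python
import json
import re

SRC = "train_data.jsonl"        # hypothetical input path
DST = "train_data_clean.jsonl"  # hypothetical output path

# Characters allowed for the target task: CJK, ASCII printables, whitespace,
# and common fullwidth punctuation. Anything else is treated as abnormal.
ABNORMAL = re.compile(
    r"[^\u4e00-\u9fff\u3000-\u303f\s!-~\uff01-\uff5e\u2018\u2019\u201c\u201d]"
)

def is_clean(text: str, max_bad_ratio: float = 0.01) -> bool:
    """Keep a sample only if the share of abnormal characters is negligible."""
    if not text:
        return False
    bad = len(ABNORMAL.findall(text))
    return bad / len(text) <= max_bad_ratio

with open(SRC, encoding="utf-8") as src, open(DST, "w", encoding="utf-8") as dst:
    for line in src:
        sample = json.loads(line)
        # Assumes each JSONL record carries "input" and "output" fields.
        if is_clean(sample.get("input", "")) and is_clean(sample.get("output", "")):
            dst.write(json.dumps(sample, ensure_ascii=False) + "\n")
```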
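To judge whether epoch or learning_rate should be reduced, compare per-epoch training and validation loss. The sketch below assumes you already log both; the loss values shown are illustrative only, not real training output.

```python
# Example per-epoch losses; replace with values from your own training logs.
train_loss = [2.1, 1.4, 0.9, 0.5, 0.3]
val_loss = [2.2, 1.6, 1.3, 1.4, 1.7]

# Validation loss bottoming out and then rising while training loss keeps
# falling is the classic overfitting signature.
best_epoch = min(range(len(val_loss)), key=val_loss.__getitem__)
if val_loss[-1] > val_loss[best_epoch]:
    print(f"Overfitting suspected after epoch {best_epoch + 1}: "
          f"cap epoch at {best_epoch + 1} or lower learning_rate.")
```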
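The sketch below shows the direction of the inference-side adjustment using the Hugging Face transformers `generate()` API. Whether your fine-tuned model loads this way is an assumption, but temperature and top_p knobs exist under the same names in most LLM inference APIs.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "path/to/finetuned-model"  # hypothetical local path
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(model_path)

inputs = tokenizer("Your target-task question here", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.3,  # lower than the common default (~0.8) for more deterministic output
    top_p=0.9,        # tighten the sampling nucleus; adjust one knob at a time
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```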
Parent topic: FAQs Related to LLM Fine-Tuning and Training