Why Does the Fine-Tuned Pangu Model Always Repeat the Same Answer?
Updated on 2025-11-04 GMT+08:00
If a fine-tuned model repeats one or more sentences in its answer when you ask a question from the target task, locate the fault as follows:
- Inference parameter settings: Check inference parameters such as presence_penalty, temperature, and top_p. Increasing one of these values improves the diversity of model outputs and reduces repetition.
- Data quality: Check whether the training data contains repetitive text data. If it does, you can cleanse the data using rules.
- Training parameter settings: Overfitting, caused by poor data quality or improper training parameters, makes repetition more pronounced. Check training parameters such as epoch and learning_rate, and reduce their values to mitigate the risk of overfitting.
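For the data-quality check above, a simple rule-based cleanse can flag training samples whose answers repeat the same sentence. The sketch below is an illustration, not part of the Pangu toolchain; the sentence splitter and threshold are assumptions you would tune for your data.

```python
import re

def has_repeated_sentences(text: str, threshold: int = 2) -> bool:
    """Return True if any sentence occurs `threshold` or more times.

    Splits on sentence-ending punctuation; a production pipeline may
    need a language-aware tokenizer instead of this simple regex.
    """
    sentences = [s.strip().lower()
                 for s in re.split(r"[.!?\n]+", text) if s.strip()]
    counts: dict[str, int] = {}
    for s in sentences:
        counts[s] = counts.get(s, 0) + 1
    return any(c >= threshold for c in counts.values())

# Keep only training samples without repeated sentences.
samples = [
    "The order ships tomorrow. The order ships tomorrow.",
    "Your refund was processed. Expect it within 5 days.",
]
clean = [s for s in samples if not has_repeated_sentences(s)]
```

Here `clean` retains only the second sample; the first is dropped because its sentence repeats. Stricter rules (n-gram overlap, edit distance) can catch near-duplicates that exact matching misses.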
Parent topic: FAQs Related to LLM Fine-Tuning and Training