Updated on 2025-11-04 GMT+08:00

Why Does the Fine-Tuned Pangu Model Return Different Answers to the Same Question in the Training Sample?

When you ask a fine-tuned model a question identical or closely similar to one in the training samples, it may return a completely different, incorrect answer. Locate the fault as follows:

  • Training parameter settings: Plot the loss curve to check whether a problem occurred during model training. If the loss remains high or fails to converge, underfitting has most likely occurred due to improper training parameter settings, and the model has not learned the knowledge in the samples. Check the settings of training parameters such as epoch and learning_rate. Increase the value of epoch or adjust the value of learning_rate based on the actual scenario to achieve better model convergence.
  • Data quality: Check the quality of the training data. If the training samples or their data distribution does not align with the target task, this problem is aggravated.
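As a minimal illustration of the first check, the sketch below applies a simple heuristic to a per-epoch loss curve. The function name, threshold values, and loss numbers are all hypothetical; in practice you would read the losses from your training logs and pick thresholds suited to your task.

```python
def diagnose_loss_curve(losses, plateau_tol=0.01, high_loss=1.0):
    """Rough heuristic check of a per-epoch training loss curve.

    losses: mean training loss per epoch (hypothetical values here;
    read them from your actual training logs).
    Returns a short diagnosis string.
    """
    if len(losses) < 2:
        return "too few epochs to judge; increase epoch"
    recent_drop = losses[-2] - losses[-1]
    if losses[-1] > high_loss and recent_drop > plateau_tol:
        # Loss is still high but still falling: training likely stopped too early.
        return "underfitting: loss still decreasing; increase epoch"
    if losses[-1] > high_loss:
        # Loss is high and flat: the model is not converging.
        return "underfitting: loss plateaued high; adjust learning_rate"
    return "loss has converged to a low value"

# Hypothetical loss values for illustration
print(diagnose_loss_curve([3.2, 2.5, 2.1, 1.9, 1.8]))
print(diagnose_loss_curve([3.2, 3.1, 3.1, 3.1, 3.1]))
```

A curve that is still dropping at the last epoch suggests raising epoch, while a flat high curve suggests revisiting learning_rate.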
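For the second check, one quick way to gauge whether the training data matches the target task is to compare their vocabularies. This is only a sketch of one such signal (Jaccard word overlap); the function and threshold are assumptions, not part of any Pangu tooling.

```python
def vocabulary_overlap(train_texts, task_texts):
    """Jaccard overlap between the word vocabularies of the training
    samples and the target-task queries. A low score is one rough
    signal that the data distributions may not align.
    """
    train_vocab = {w for t in train_texts for w in t.lower().split()}
    task_vocab = {w for t in task_texts for w in t.lower().split()}
    if not train_vocab or not task_vocab:
        return 0.0
    return len(train_vocab & task_vocab) / len(train_vocab | task_vocab)

# Hypothetical samples: identical wording overlaps fully,
# unrelated wording overlaps not at all.
print(vocabulary_overlap(["reset my password"], ["reset my password"]))
print(vocabulary_overlap(["reset my password"], ["quarterly revenue report"]))
```

A very low overlap on real data would prompt a closer review of whether the training samples actually cover the questions the model is expected to answer.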