Help Center/ PanguLargeModels/ FAQs/ FAQs Related to LLM Fine-Tuning and Training/ Why Can the Fine-Tuned Pangu Model Only Answer the Questions in the Training Sample?
Updated on 2025-11-04 GMT+08:00

Why Can the Fine-Tuned Pangu Model Only Answer the Questions in the Training Sample?

When you ask a fine-tuned model a question included in the training samples, the model generates a good result. However, when you ask a question that is not in the training samples but belongs to the same target task, the model gives a completely wrong answer. Locate the fault as follows:

  • Training parameter settings: Plot the loss curve to check whether a problem occurred during training. If the training loss keeps dropping while the model fails on unseen questions, overfitting has most likely occurred due to improper training parameter settings. Check training parameters such as epoch and learning_rate, and reduce their values to mitigate the risk of overfitting.
  • Data quality: Check the quality of the training data. A large amount of duplicate data or poor data diversity in the training samples aggravates this problem.
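
To check the first point, you can compare the training and validation loss per epoch. The sketch below is a minimal, illustrative heuristic (the function name, the window size, and the loss values are assumptions, not part of the Pangu tooling): it flags the classic overfitting pattern where training loss keeps falling while validation loss rises.

```python
# Sketch: detect a typical overfitting pattern from per-epoch losses.
# Assumes you can export train/validation loss per epoch from your
# training logs; the values below are illustrative, not a real run.

def overfitting_signal(train_loss, val_loss, window=3):
    """Return True if training loss kept falling while validation
    loss rose over the last `window` epochs."""
    if len(train_loss) < window + 1 or len(val_loss) < window + 1:
        return False  # not enough history to judge
    train_falling = train_loss[-1] < train_loss[-1 - window]
    val_rising = val_loss[-1] > val_loss[-1 - window]
    return train_falling and val_rising

train = [2.1, 1.4, 0.9, 0.5, 0.2, 0.08, 0.03]
val   = [2.0, 1.5, 1.2, 1.1, 1.3, 1.6, 1.9]
print(overfitting_signal(train, val))  # True: train loss falls, val loss rises
```

If the signal fires, reducing epoch or learning_rate (as described above) is the usual first step.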
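
For the second point, a quick pass over the dataset can quantify how repetitive it is. The sketch below assumes each sample is a dict with "instruction" and "response" keys (a common fine-tuning format; adapt the keys to your actual dataset schema) and reports the fraction of exact duplicates.

```python
# Sketch: measure how repetitive a fine-tuning dataset is.
# Assumes samples are dicts with "instruction"/"response" keys;
# adjust the keys to match your dataset schema.
from collections import Counter

def duplicate_ratio(samples):
    """Fraction of samples that are exact duplicates of an earlier one."""
    if not samples:
        return 0.0
    counts = Counter((s["instruction"], s["response"]) for s in samples)
    duplicates = sum(c - 1 for c in counts.values())
    return duplicates / len(samples)

data = [
    {"instruction": "Q1", "response": "A1"},
    {"instruction": "Q1", "response": "A1"},
    {"instruction": "Q2", "response": "A2"},
    {"instruction": "Q1", "response": "A1"},
]
print(duplicate_ratio(data))  # 0.5: two of the four samples are repeats
```

A high ratio, or a low count of distinct instructions, suggests deduplicating and diversifying the training samples before fine-tuning again.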