Why Can the Fine-Tuned Pangu Model Only Answer the Questions in the Training Sample?
When you ask a fine-tuned model a question that is included in the training sample, the model generates a good answer. However, when you ask a question that is not in the training sample but belongs to the same target task, the model gives a completely wrong answer. Locate the fault as follows:
- Training parameter settings: Plot the loss curve to check whether a problem occurred during model training, for example, the validation loss rising while the training loss keeps falling. If so, there is a high probability that overfitting has occurred due to improper training parameter settings. Check training parameters such as epoch and learning_rate, and reduce their values to mitigate the risk of overfitting.
- Data quality: Check the quality of the training data. If the training sample contains a large amount of repetitive data, or the data diversity is poor, overfitting is aggravated.
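The overfitting signal described above, validation loss rising while training loss keeps falling, can be checked programmatically once per-epoch losses are logged. The following is a minimal sketch; the function name `detect_overfitting` and the `patience` parameter are illustrative, not part of the Pangu toolchain, and it assumes you have recorded training and validation loss for each epoch.

```python
def detect_overfitting(train_loss, val_loss, patience=2):
    """Return the epoch index where overfitting likely begins, or None.

    Signal: validation loss rises for `patience` consecutive epochs
    while training loss keeps falling. (Illustrative heuristic only.)
    """
    rises = 0
    for i in range(1, len(val_loss)):
        if val_loss[i] > val_loss[i - 1] and train_loss[i] < train_loss[i - 1]:
            rises += 1
            if rises >= patience:
                # Report the first epoch of the sustained divergence.
                return i - patience + 1
        else:
            rises = 0
    return None
```

If the function flags an epoch well before training ends, reducing epoch (or lowering learning_rate) so that training stops near that point is one way to act on the curve.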
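For the data-quality check, a quick way to quantify repetitive data is to count exact duplicates in the training sample. A rough sketch, assuming each sample is available as a line of text; the helper name `duplication_report` is hypothetical:

```python
from collections import Counter

def duplication_report(samples):
    """Summarize exact-duplicate counts in a list of text samples.

    A high duplicate_ratio suggests repetitive data that can
    aggravate overfitting. (Exact match only; near-duplicates
    would need fuzzy matching.)
    """
    counts = Counter(s.strip() for s in samples)
    duplicates = sum(c - 1 for c in counts.values())
    return {
        "total": len(samples),
        "unique": len(counts),
        "duplicate_ratio": duplicates / len(samples) if samples else 0.0,
    }
```

If the duplicate ratio is high, deduplicate the sample and add more diverse examples of the target task before fine-tuning again.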