Updated on 2025-11-04 GMT+08:00

Why Is the Performance of the Fine-Tuned Pangu Model Unsatisfactory Even Though Both the Data Volume and Quality Meet Requirements?

Troubleshoot the issue as follows:

  • Training parameter settings: Plot the loss curve to check whether a problem occurred during training. If the curve is abnormal, the model has most likely underfit or overfit due to inappropriate training parameter settings. Check parameters such as epoch and learning_rate and tune them if necessary.
  • Prompt settings: Check the prompt you use. For a given target task, use a prompt during inference that is the same as, or similar to, the prompts in the training data to get the best results from the model.
  • Model specifications: In theory, a model with larger specifications can learn more knowledge and handle more difficult tasks. If the target task is difficult, consider using a model with larger specifications.
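
The loss-curve check above can be sketched as a simple heuristic: overfitting typically shows as training loss falling while validation loss climbs back up, and underfitting as training loss barely improving at all. The function, its threshold, and the loss values below are illustrative assumptions, not from an actual Pangu training run.

```python
# Minimal sketch: rough heuristics on per-epoch loss lists to flag
# underfitting or overfitting. Thresholds and values are hypothetical.

def diagnose(train_loss, val_loss, plateau_tol=0.05):
    """Return a rough diagnosis from per-epoch training/validation losses."""
    # Overfitting: training loss keeps falling while validation loss
    # has turned upward from its best (minimum) value.
    if train_loss[-1] < train_loss[0] and val_loss[-1] > min(val_loss):
        return "overfitting"
    # Underfitting: training loss barely improves from start to finish.
    if train_loss[0] - train_loss[-1] < plateau_tol:
        return "underfitting"
    return "ok"

print(diagnose([2.0, 1.2, 0.6, 0.3], [1.9, 1.3, 1.4, 1.6]))      # → overfitting
print(diagnose([2.0, 1.99, 1.98, 1.97], [2.0, 2.0, 1.99, 1.99])) # → underfitting
```

If the heuristic reports overfitting, reducing epoch or learning_rate is a common first step; for underfitting, increasing them (or the data volume) is typical.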
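
One way to keep inference prompts consistent with the training data, as the prompt-settings step advises, is to reuse the exact training-time template at inference. The template and helper below are hypothetical, standing in for whatever format your fine-tuning dataset actually used.

```python
# Hypothetical template assumed to match the format of the fine-tuning data.
TRAIN_TEMPLATE = "Classify the sentiment of the following review: {text}\nAnswer:"

def build_prompt(text):
    # Filling the same template at inference avoids a train/inference
    # prompt mismatch, which commonly degrades fine-tuned model quality.
    return TRAIN_TEMPLATE.format(text=text)

print(build_prompt("Great battery life."))
```

Keeping the template in one shared constant (or config file) makes it harder for the training and serving code paths to drift apart.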