
Why Are Prompts That Work Well on Other Models Not Effective on Pangu Models?

  • Relationship between the prompt and the training data

    The effectiveness of a prompt is closely related to how similar it is to the model's training data. When a prompt resembles the samples the model was exposed to during training, the model is more likely to understand it and generate relevant outputs. This is because the model builds its understanding of specific patterns, structures, and language by learning from large amounts of training data. If the keywords, sentence patterns, and context in a prompt match patterns in the training data, the model can more easily recall and apply the knowledge and instructions it has learned.

  • Effect differences between different models

    Because vendors adopt different training strategies and datasets, the same prompt can produce very different results on different models. For example, some models perform better on data from a specific domain, while others perform better on general tasks.

  • Adjusting prompts based on Pangu model characteristics

    For these reasons, prompts that work well on other models may not transfer directly to Pangu models. To get the best results from Pangu models, adjust and iterate on your prompts based on their characteristics, as illustrated in the sketch below.
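
    The following is a minimal sketch of what such an adjustment can look like in practice. The call_model function is a hypothetical placeholder rather than the actual Pangu API, and the prompt wording is only an illustration; the point is the general pattern of spelling out the task, constraints, and output format explicitly and then comparing results, instead of reusing a prompt verbatim from another model.

    ```python
    # Hypothetical sketch: adapting a prompt when porting it to a different model.
    # call_model is a placeholder for whatever inference client you actually use.

    def call_model(prompt: str) -> str:
        """Hypothetical stand-in for a model inference call."""
        raise NotImplementedError("Replace with your actual model client.")

    # Prompt that happened to work on another model: short and implicit.
    original_prompt = "Summarize the following customer review: {review}"

    # Adjusted prompt: the task, constraints, and output format are stated
    # explicitly instead of relying on behaviors another model learned implicitly.
    adjusted_prompt = (
        "You are a customer-service analyst.\n"
        "Task: summarize the customer review below in 2-3 sentences.\n"
        "Requirements:\n"
        "1. Mention the product and the customer's overall sentiment.\n"
        "2. Do not add information that is not in the review.\n"
        "Review:\n{review}\n"
        "Summary:"
    )

    review = "The headphones arrived quickly, but the left earcup stopped working after a week."

    if __name__ == "__main__":
        # Compare both versions and keep iterating on the adjusted prompt
        # until the output meets your requirements.
        for name, template in [("original", original_prompt), ("adjusted", adjusted_prompt)]:
            try:
                print(name, "->", call_model(template.format(review=review)))
            except NotImplementedError as exc:
                print(name, "->", exc)
    ```

    The specific wording that works best will depend on your task and data; the useful habit is to treat the prompt as something to evaluate and refine for each model rather than a fixed artifact.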