Understanding Task Logic
You need to understand how an LLM performs your target task and clearly describe the task requirements in your prompts.
For instance, a document Q&A task emphasizes information extraction over content generation. The LLM's main job is to retrieve the relevant information from the documents and answer your question with it, without adding its own opinions or ideas or altering the original wording. An inappropriate prompt would be, "Please read the aforementioned document and generate an answer to the following question." This prompt may cause the model to bring in information that is not present in the documents it is supposed to answer from.
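As a point of comparison, the sketch below shows one way to phrase an extraction-focused document Q&A prompt in code. It is a minimal illustration only; the helper names (build_doc_qa_prompt, ask_llm) are hypothetical placeholders rather than part of any specific SDK.

```python
# Minimal sketch of an extraction-focused document Q&A prompt.
# ask_llm() is a hypothetical placeholder for whichever LLM API you use.

def build_doc_qa_prompt(document: str, question: str) -> str:
    """Build a prompt that constrains the model to answer only from the document."""
    return (
        "Answer the question using only information found in the document below. "
        "Quote or restate the relevant passages without adding opinions, "
        "outside knowledge, or changes to the original wording.\n\n"
        f"Document:\n{document}\n\n"
        f"Question: {question}\n"
        "If the document does not contain the answer, reply \"Not found in the document.\""
    )

# Example usage (replace ask_llm with your actual model call):
# answer = ask_llm(build_doc_qa_prompt(document_text, "What is the refund deadline?"))
```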
In a question generalization task, the LLM is expected to reformulate your question while preserving its original meaning, rather than inventing analogous questions. Given a prompt like "Please generate 10 questions similar to 'How do you transfer money through mobile banking?'", the LLM tends to latch onto similar entities (in this case, mobile banking), keywords, and scenarios, rather than producing questions with the same meaning, which is what the task requires. This can lead to divergent or unexpected results.
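A sketch of a prompt that makes the rephrasing requirement explicit is shown below; again, build_rephrase_prompt and ask_llm are assumed helper names for illustration, not a prescribed API.

```python
# Minimal sketch of a rephrasing prompt that keeps the original meaning.
# ask_llm() is again a hypothetical placeholder for your model call.

def build_rephrase_prompt(question: str, n: int = 10) -> str:
    """Build a prompt that asks for paraphrases, not merely similar questions."""
    return (
        f"Rewrite the question below in {n} different ways. "
        "Each rewrite must keep exactly the same meaning and intent; "
        "do not introduce new entities, scenarios, or related but different questions.\n\n"
        f"Question: {question}"
    )

# Example usage:
# variants = ask_llm(build_rephrase_prompt("How do you transfer money through mobile banking?"))
```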