Solution Design
Background
In the context of the digitization-enabled Double Reduction policy, the intelligent study companion integrates knowledge graph and natural language processing (NLP) technologies to serve as a full-scenario language learning partner. It offers a complete learning chain that combines instant Q&A, layered analysis, and situational expansion. By leveraging these technologies to enable personalized learning, the solution creates a dedicated language learning space for each student. Through intelligent guidance, learners can fully experience the charm of language and build a solid foundation in language and culture.
This section explains how to use various types of nodes to build a workflow for an intelligent assistant specializing in language knowledge. Through this example, you will learn how to incorporate Knowledge Repo, Branch, and Code nodes into a workflow. Before you start, you need to set up a language knowledge base and import the corresponding language knowledge question library into it, because the Knowledge Repo node in this workflow retrieves answers from that knowledge base.
Node Design
This section describes the key nodes in a workflow. Each node is responsible for a specific task. The functions and design principles of each node are detailed as follows:
- Start node: The Start node serves as the entry point for a workflow, receiving text input from users. In this workflow, the Start node receives the language knowledge question entered by the user.
- LLM node - question generation: This node extracts the question from the user's input, parses it, and outputs it in JSON format.
- Knowledge Repo node: This node searches the uploaded question library for the user's question and returns the matched information in an array. If no match is found, the array is empty.
- Branch node: This node determines whether a matched question was retrieved from the preset question library and routes execution accordingly. If a question and answer are found, the search result is sent to the "LLM node - polished output" node. If no question and answer are found, the "LLM node - AI output" branch is executed instead.
- LLM node - polished output: This node uses an LLM to polish the search result in the knowledge base and provides a rich solution output.
- LLM node - AI output: This node uses an LLM to provide a rich answer to a user's question and marks the answer with "This answer is generated by AI."
- Code node: This node uses code to format the output strings of the "LLM node - polished output" and "LLM node - AI output" nodes.
- End node: This node is the final node of a workflow and outputs the final result.
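The control flow described by the node list above can be sketched in plain Python. This is a minimal illustration, not the platform's actual runtime: the question library is a hypothetical in-memory dict, and the two LLM nodes are stand-in stub functions.

```python
# Hypothetical question library standing in for the Knowledge Repo's
# uploaded language knowledge question library.
QUESTION_LIBRARY = {
    "What is a synonym for 'happy'?": "Common synonyms include 'glad' and 'joyful'.",
}

def knowledge_repo_search(question):
    """Knowledge Repo node: return matches as an array; empty if none found."""
    answer = QUESTION_LIBRARY.get(question)
    return [answer] if answer is not None else []

def llm_polish(answer):
    """LLM node - polished output: stub for polishing a stored answer."""
    return f"{answer} (polished)"

def llm_generate(question):
    """LLM node - AI output: stub answer, marked as AI-generated."""
    return f"Answer to '{question}'. This answer is generated by AI."

def format_output(text):
    """Code node: format the output string (here, just trim whitespace)."""
    return text.strip()

def run_workflow(question):
    """Start node receives the question; Branch node picks a path;
    End node returns the final result."""
    matches = knowledge_repo_search(question)
    if matches:                       # branch 1: library match found
        result = llm_polish(matches[0])
    else:                             # branch 2: no match, fall back to the LLM
        result = llm_generate(question)
    return format_output(result)
```

The key design point mirrored here is that the Branch node inspects only whether the Knowledge Repo array is empty, so both downstream LLM nodes converge on the same Code node for consistent final formatting.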