Build Process
Preparations
To ensure that an NLP model is available, deploy one first. For details, see "Developing a Pangu NLP Model" > "Deploying an NLP Model" > "Creating an NLP Model Deployment Task" in the User Guide.
Procedure
Creating an AI reading research assistant agent consists of three stages: creating and configuring a knowledge base, creating and configuring a workflow, and debugging and publishing the workflow.
Creating and Configuring a Knowledge Base
- On the Agent development platform, choose Workstation in the navigation pane. On the Knowledge tab page, click Create knowledge base in the upper right corner.
- On the Create knowledge base page, set the following parameters as instructed:
- Basic Information: Set the knowledge base icon, name, and description.
- Model Configuration: Set the Embedding Model and Rerank Model.
- Parsing Configuration: Configure how documents are parsed.
- Split Configuration: Configure how documents are split into chunks.
- Click OK and upload the file.
On the Knowledge Document tab, click Upload. After the upload is complete, click OK.
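For intuition, the following sketch shows what a simple split configuration might do: cut a document into fixed-size, overlapping chunks for embedding and retrieval. The character-based strategy and the chunk_size and overlap values are illustrative, not the platform's defaults.

```python
# A minimal sketch of fixed-size splitting with overlap; the platform's
# actual Split Configuration options (separators, token-based sizes, and
# so on) may differ.
def split_document(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks for embedding and retrieval."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must be larger than overlap")
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_document("lorem ipsum " * 300)
print(len(chunks), len(chunks[0]))  # number of chunks, size of the first chunk
```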
- Click Hit Test in the upper right corner.
- Enter a question in the text box and click Hit Test. The lower part of the page displays the matched content for each search mode, sorted in descending order by matching score.
You can evaluate whether the current knowledge base meets the requirements based on the score and the amount of matched information.
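Conceptually, the hit test embeds the question, scores each chunk against it, and sorts the results by score. A minimal sketch, assuming cosine similarity over hypothetical vectors (a real hit test uses the knowledge base's Embedding Model and may rerank candidates with the Rerank Model):

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

query_vec = [0.1, 0.7, 0.2]          # hypothetical query embedding
chunk_vecs = {                        # hypothetical chunk embeddings
    "chunk A": [0.1, 0.6, 0.3],
    "chunk B": [0.9, 0.1, 0.0],
}
# Sort matches in descending order by score, as the hit test page does.
ranked = sorted(((cosine(query_vec, v), k) for k, v in chunk_vecs.items()), reverse=True)
for score, name in ranked:
    print(f"{name}: {score:.3f}")
```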
Creating and Configuring a Workflow
- Log in to ModelArts Studio and choose AGENT DEVELOPMENT to go to the Agent App Dev page.
- On the Agent development platform, choose Workstation in the navigation pane. On the Workflow tab page, click Create Workflow in the upper right corner.
- Select Dialogue based workflow, enter the workflow name, English name, and description, and confirm the configuration. The workflow orchestration page is displayed.
- On the workflow orchestration page, note that the Start, LLM, and End nodes have already been orchestrated.
You can click the icon in the upper right corner of a node to rename, copy, or delete it. The Start and End nodes are mandatory and cannot be deleted.
- Configure the Start node. Click the Start node. The node has a query parameter configured by default, indicating the content entered by a user. Click OK.
- Configure the Branch node to determine whether a user has uploaded a document.
- Drag the Branch node to the orchestration page, and connect the Start and Branch nodes.
- Click the Branch node, configure its parameters, and confirm the configuration.
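Conceptually, the Branch node routes each request down exactly one path. A minimal sketch, assuming a boolean use_user_doc flag (the switch used later during the test run) and a hypothetical document field:

```python
# Minimal sketch of the Branch node's decision; field names other than
# query and use_user_doc are hypothetical.
def branch(inputs: dict) -> str:
    """Route to the document-reading path or the knowledge-retrieval path."""
    if inputs.get("use_user_doc") and inputs.get("document"):
        return "plugin_path"       # read the uploaded document
    return "knowledge_path"        # retrieve from the knowledge base

print(branch({"query": "Summarize section 2", "use_user_doc": True, "document": "paper.pdf"}))
print(branch({"query": "What is RAG?"}))
```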
- Configure the Knowledge Repo node to retrieve information based on user questions.
- Drag the Knowledge Repo node to the orchestration page, and connect the Branch and Knowledge Repo nodes.
- Click the Knowledge Repo node, configure its parameters, and confirm the configuration.
You can select the CNKI academic knowledge base created in Creating and Configuring a Knowledge Base.
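Downstream nodes consume the retrieved passages as one block of context. A minimal sketch of that hand-off, with hypothetical names and an illustrative character budget:

```python
# Illustrative shape of the Knowledge Repo node's output: retrieved
# passages joined into one context block for a downstream LLM node.
def build_context(passages: list[str], max_chars: int = 2000) -> str:
    """Concatenate retrieved passages under a character budget."""
    return "\n\n".join(passages)[:max_chars]

passages = [
    "Retrieval-augmented generation grounds answers in retrieved text.",
    "Chunks are ranked by matching score before being passed on.",
]
print(build_context(passages))
```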
- Configure the Plugin node to read documents uploaded by users.
- Drag the Plugin node to the orchestration page, and connect the Branch and Plugin nodes.
- Click the Plugin node, configure its parameters, and confirm the configuration.
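Conceptually, the Plugin node turns the uploaded file into plain text for the LLM node that follows. A hedged sketch that handles only .txt files (a real plugin would also handle formats such as PDF and DOCX):

```python
from pathlib import Path

# Hedged sketch of a document-reading plugin: extract plain text from the
# uploaded file. The function name and .txt-only behavior are illustrative.
def read_document(path: str) -> str:
    file = Path(path)
    if file.suffix.lower() != ".txt":
        raise ValueError(f"unsupported format in this sketch: {file.suffix}")
    return file.read_text(encoding="utf-8")

Path("sample.txt").write_text("An uploaded paper abstract.", encoding="utf-8")
print(read_document("sample.txt"))
```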
- Configure the LLM node to extract and output document content.
- Drag the LLM node to the orchestration page, and connect the Plugin and LLM nodes.
- Click the LLM node, configure its parameters, and confirm the configuration.
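The extraction behavior is driven by the prompt configured on the node. A hypothetical template, shown here as code for illustration (on the platform, this text is entered in the node's configuration UI):

```python
# Hypothetical prompt template for the extraction LLM node.
EXTRACT_PROMPT = (
    "You are a reading assistant. Extract the key points from the document "
    "below and answer the user's question.\n\n"
    "Document:\n{document}\n\nQuestion: {question}"
)

def build_extract_prompt(document: str, question: str) -> str:
    """Fill the template with the uploaded document and the user's question."""
    return EXTRACT_PROMPT.format(document=document, question=question)

print(build_extract_prompt("RAG combines retrieval with generation.",
                           "What does RAG combine?"))
```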
- Configure the LLM node to optimize and output answers.
- Drag the LLM node to the orchestration page, connect the Branch node to this LLM node, and then connect the two LLM nodes.
- Click the LLM node, configure its parameters, and confirm the configuration.
- Configure the Aggregation node to aggregate the output of knowledge retrieval and document reading.
- Drag the Aggregation node to the orchestration page, connect the Branch and the Aggregation nodes, and connect the LLM and the Aggregation nodes.
- Click the Aggregation node, configure its parameters, and confirm the configuration.
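Since only one branch runs per request, the Aggregation node effectively forwards whichever branch produced output. A minimal sketch of that behavior, under the assumption that the unexecuted branch yields no value:

```python
# Minimal sketch of the Aggregation node: forward the first non-empty
# branch output (document reading or knowledge retrieval).
def aggregate(knowledge_answer: str | None, document_answer: str | None) -> str:
    for candidate in (document_answer, knowledge_answer):
        if candidate:
            return candidate
    return "No branch produced an answer."

print(aggregate(None, "Summary extracted from the uploaded paper."))
print(aggregate("Answer retrieved from the CNKI knowledge base.", None))
```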
- Configure the LLM node to optimize the format of generated results.
- Drag the LLM node to the orchestration page, and connect the Aggregation and LLM nodes.
- Click the LLM node, configure its parameters, and confirm the configuration.
- Configure the End node.
- Connect the LLM node and the End node.
- Click the End node, configure its parameters, and confirm the configuration.
Debugging and Publishing a Workflow
- Click Test run in the upper right corner after the workflow is orchestrated.
Check whether the node settings are correct. For details about common node errors, see Typical Problems.
After the nodes are running properly, enable use_user_doc (optional), upload the file, and click Start running (see the input sketch after this list).
- During the trial run, click the icon in the upper right corner to view the debugging result, including the running results and call details.
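For reference, a trial-run input might look like the following. Only query and use_user_doc are named in this guide; the document field name is hypothetical.

```python
# Illustrative trial-run input, assuming the Start node's default query
# parameter and the optional use_user_doc switch.
test_input = {
    "query": "Summarize the methodology of the uploaded paper.",
    "use_user_doc": True,     # optional: route through the document-reading path
    "document": "paper.pdf",  # the uploaded file
}
print(test_input)
```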