
Build Process

Preparations

To ensure that an NLP model is available, deploy one first. For details, see "Developing a Pangu NLP Model" > "Deploying an NLP Model" > "Creating an NLP Model Deployment Task" in the User Guide.

Procedure

Table 1 shows the process of creating an AI reading research assistant agent.

Table 1 Building an Intelligent Assistant Workflow with Low Code

  Step                                                        Description
  Creating a Knowledge Base for a Literal Question Library    Describes how to create and configure a knowledge base.
  Creating and Configuring a Workflow                         Describes how to create and configure a workflow.
  Debugging and Publishing a Workflow                         Describes how to perform a trial run on the entire workflow to ensure that it runs properly.

Creating a Knowledge Base for a Literal Question Library

  1. On the Agent development platform, choose Workstation in the navigation pane. On the Knowledge tab page, click Create knowledge base in the upper right corner.
  2. On the Create knowledge base page, set parameters as instructed, including Basic Information, Model Configuration (Embedding Model and Rerank Model), Parsing Configuration, and Split Configuration.
    • Basic Information: Set the knowledge base icon, name, and description.
    • Model Configuration: Set the Embedding Model and Rerank Model.
    • Parsing Configuration: Configure document parsing.
    • Split Configuration: Configure document splitting.
  3. Click OK and upload the file.

    On the Knowledge Document tab, click Upload. After the upload is complete, click OK.

  4. Click Hit Test in the upper right corner.
  5. Enter a question in the text box and click Hit Test. The lower part of the page displays the content matched in each search mode, sorted in descending order by matching score.

    You can evaluate whether the current knowledge base meets the requirements based on the score and the amount of matched information.

Creating and Configuring a Workflow

  1. Log in to ModelArts Studio and choose AGENT DEVELOPMENT to go to the Agent App Dev page.
  2. On the Agent development platform, choose Workstation in the navigation pane. On the Workflow tab page, click Create Workflow in the upper right corner.
  3. Select Dialogue based workflow, enter the workflow name, English name, and description, and confirm the configuration. The workflow orchestration page is displayed.
  4. On the workflow orchestration page, check that the Start, LLM, and End nodes have already been orchestrated.

    You can click the icon in the upper right corner of a node to rename, copy, or delete it. The Start and End nodes are mandatory and cannot be deleted.

  5. Configure the Start node. Click the Start node; it has a query parameter configured by default, which represents the content entered by the user. No additional parameters are required in this scenario. Click OK.
  6. Configure the LLM (question generation) node to extract questions from the user input, parse them, and output them in JSON format.
    1. Drag the LLM node to the orchestration page, and connect the Start and LLM nodes.
    2. Click the LLM node, configure its parameters, and confirm the configuration.
  7. Configure the Knowledge Repo node to search the uploaded question library for the user's question and return the matched information.
    1. Drag the Knowledge Repo node to the orchestration page, and connect the LLM (question generation) and Knowledge Repo nodes.
    2. Click the Knowledge Repo node, configure its parameters, and confirm the configuration.
      • Input parameters

        Parameter name: The default parameter name is query.

        Type and value: Select Reference > question. question is the output variable value of the LLM (question generation) node.

      • Select the knowledge base created in Creating a Knowledge Base for a Literal Question Library.
  8. Configure the Branch node to determine whether any matched questions were retrieved from the preset question library.
    1. Drag the Branch node to the orchestration page, and connect the Branch and Knowledge Repo nodes.
    2. Click the Branch node, configure its parameters, and confirm the configuration.

      For the first branch, the parameter is output_list output by the Knowledge Repo node, the comparison condition is Length is greater than, the comparison object is Input, and the value is 0.

  9. Configure the "LLM node - polished output" node to polish the search results from the knowledge base and provide a rich answer.
    1. Drag the LLM node to the orchestration page, and connect the first branch of the Branch node to the LLM node. If the number of retrieved results is greater than 0, the LLM node will be executed.
    2. Click the LLM node, configure its parameters, and confirm the configuration.
      Input parameters:
      • Parameter name: The default parameter name is input.
      • Type and value: Select Reference > output_list. output_list is the output variable value of the Knowledge Repo node.
  10. Configure the "LLM node - AI output" node. This node uses an LLM to provide a rich answer to a user's question and marks the answer with "(This answer is generated by AI.)"
    1. Drag the LLM node to the orchestration page, and connect the second branch of the Branch node to the LLM node. This LLM node will be triggered if no relevant answer is found during the knowledge retrieval process.
    2. Click the LLM node, configure its parameters, and confirm the configuration.

      Input parameters:

      • Parameter name: The default parameter name is input.
      • Type and value: Select Reference > question. question is the output variable value of the LLM (question generation) node.
  11. Configure the Code node. This node uses code to format the output character strings of the "LLM node - polished output" and "LLM node - AI output" nodes.
    1. Drag the Code node from the left to the orchestration page, and connect the "LLM node - polished output" and "LLM node - AI output" nodes to the Code node. You will write code that generates return values based on the input variables.
    2. Click the Code node, configure its parameters, and confirm the configuration.
      • In the parameter configuration area, configure the input parameters {{str1}} and {{str2}}.
        Table 2 Input parameters

          Parameter   Type        Value
          str1        Reference   Output of the "LLM node - polished output"
          str2        Reference   Output of the "LLM node - AI output"

      • In the code configuration, write Python code to process the input variables. A main function must be defined; the Code node provides a code template for it, which you can adapt to write your own code. Use the arg.get method to obtain the input variables.

        In this workflow, the Code node merges and formats the output of the previous two nodes. A minimal sketch is provided after this procedure.

  12. Configure the End node to output the final result.
    1. Connect the Code node to the End node.
    2. Click the End node and configure the input parameters and reply.
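
Because the exact code template generated by the Code node is not reproduced here, the following is only a minimal sketch of the main function for this workflow. It assumes that the configured inputs are passed through an arg object read with arg.get (as noted above), that str1 and str2 are the parameters from Table 2, and that the keys of the returned dictionary (here, result) become the output variables referenced by the End node.

  # Minimal sketch of the Code node's main function for this workflow.
  # Assumptions: inputs are read with arg.get, and the keys of the returned
  # dictionary (here, "result") are exposed as the node's output variables.
  def main(arg):
      # Output of the "LLM node - polished output" node; empty if that branch did not run
      polished = arg.get("str1") or ""
      # Output of the "LLM node - AI output" node; empty if that branch did not run
      ai_generated = arg.get("str2") or ""

      # Only one branch of the Branch node executes, so at most one input is non-empty.
      # Concatenate the two strings and trim surrounding whitespace to form the final answer.
      result = (polished + ai_generated).strip()

      return {"result": result}

If the template on your platform passes parameters differently (for example, as a plain dictionary), adapt the variable access accordingly; the merging and formatting logic stays the same.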

Debugging and Publishing a Workflow

  1. After the workflow is orchestrated, click Test run in the upper right corner.
  2. During the trial run, click the icon in the upper right corner to view the debugging result, including the running results and call details.
  3. If necessary, debug a node in the workflow to ensure that the node can run properly.
    1. On the workflow orchestration page, click the debugging icon of the AI output node to go to the node debugging page.
    2. In the configuration information area of the node, set the input parameters and click Start running.
    3. After successfully debugging a single node, confirm that the message "Running successful" is displayed and check the duration of the node's execution.