
Quick Start

Environment Preparation

  • Install Python: make sure Python 3.10 or later is installed.
  • Operating system: Linux or macOS, with Docker installed.
  • Run the following commands to install the SDK. (Installing in a Python virtual environment is strongly recommended to avoid conflicts with system packages.)
    # Create and activate a virtual environment (Linux)
    python -m venv venv 
    source venv/bin/activate
    
    # Install the SDK
    pip install hw-agentrun 
  • Run the following commands to configure your Huawei Cloud credentials. For how to obtain credentials, see Authentication.
    export HUAWEICLOUD_SDK_AK="your-access-key"
    export HUAWEICLOUD_SDK_SK="your-secret-key"
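  • (Optional) Verify the setup from Python. The sketch below is an illustrative check, not part of the SDK; it assumes the hw-agentrun package exposes the agentarts.sdk module used later in this guide and only confirms that the two credential variables are set.
    # verify_setup.py - hypothetical sanity check, not part of the SDK
    import os
    import agentarts.sdk  # installed by the hw-agentrun package

    # The Huawei Cloud credential variables configured above must be set.
    for var in ("HUAWEICLOUD_SDK_AK", "HUAWEICLOUD_SDK_SK"):
        if not os.environ.get(var):
            raise SystemExit(f"Missing environment variable: {var}")
    print("SDK importable and credentials configured.")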

Local Development

  1. Run the following command to initialize a LangGraph project with basic question-answering capability.

    agentarts init -n my_agent -t langgraph

    After the command completes, the following directories and files are created:

    my_agent/
    ├── agent.py              # Agent implementation
    ├── requirements.txt      # Python dependencies
    ├── Dockerfile            # Container image definition
    └── .agentarts_config.yaml # Project configuration

    The generated agent.py is as follows:

    # agent.py
    """
    my_agent - LangGraph Agent Implementation
    
    An agent built using LangGraph for stateful workflows, wrapped with AgentArts SDK runtime.
    
    Environment Variables:
        OPENAI_API_KEY: Your OpenAI API key (required)
            - Get it from: https://platform.openai.com/api-keys
            - Set in .agentarts_config.yaml under runtime.environment_variables
        OPENAI_BASE_URL: Custom API endpoint URL (optional)
            - Use this to connect to OpenAI-compatible APIs (e.g., Azure OpenAI, local LLMs)
            - Example: https://your-api-endpoint.com/v1
            - Leave empty to use default OpenAI API
        OPENAI_MODEL_NAME: Model name to use (optional, default: gpt-4o-mini)
            - Examples: gpt-4o, gpt-4o-mini, gpt-4-turbo, gpt-3.5-turbo
            - Or custom model names for OpenAI-compatible APIs
    
    Configuration:
        Edit .agentarts_config.yaml to customize:
        - runtime.environment_variables: Set OPENAI_API_KEY, OPENAI_MODEL_NAME and optionally OPENAI_BASE_URL
        - runtime.invoke_config: Change protocol or port
        - runtime.network_config: Configure VPC if needed
    
    Usage:
        1. Set your OPENAI_API_KEY in .agentarts_config.yaml
        2. (Optional) Set OPENAI_MODEL_NAME to use a different model
        3. (Optional) Set OPENAI_BASE_URL for custom endpoints
        4. Run: agentarts deploy
        5. Invoke: agentarts invoke '{"message": "Hello"}'
    """
    import os
    from typing import Dict, Any, TypedDict, Annotated
    from operator import add
    # Step 1: Import the SDK packages
    from agentarts.sdk import AgentArtsRuntimeApp, RequestContext    
    # Entry-point app for the AgentArts SDK runtime
    app = AgentArtsRuntimeApp()
    
    # Step 2: Define the core agent business logic
    class State(TypedDict):
        messages: Annotated[list, add]
        query: str
        response: str
    
    class LangGraphAgent:
        def __init__(self):
            self.model_name = os.environ.get("OPENAI_MODEL_NAME", "gpt-4o-mini")
            self._graph = None
    
        def _build_graph(self):
            from langgraph.graph import StateGraph, END
            from langchain_openai import ChatOpenAI
            from langchain_core.messages import HumanMessage, AIMessage
            llm = ChatOpenAI(
                model=self.model_name,
                api_key=os.environ.get("OPENAI_API_KEY"),
                base_url=os.environ.get("OPENAI_BASE_URL")
            )
            async def process_node(state: State) -> Dict[str, Any]:
                query = state.get("query", "")
                messages = state.get("messages", []) or [HumanMessage(content=query)]
                response = await llm.ainvoke(messages)
                return {
                    "messages": [AIMessage(content=response.content)],
                    "response": response.content,
                }
            workflow = StateGraph(State)
            workflow.add_node("process", process_node)
            workflow.set_entry_point("process")
            workflow.add_edge("process", END)
            return workflow.compile()
    
        async def run(self, query: str) -> Dict[str, Any]:
            graph = self._graph or self._build_graph()
            self._graph = graph
            result = await graph.ainvoke({"messages": [], "query": query, "response": ""})
            return {"response": result.get("response", "")}
    
    _agent = LangGraphAgent()
    
    # Step 3: Define the HTTP entry point
    @app.entrypoint  # Decorator registering the /invocations endpoint
    async def handler(payload: Dict[str, Any], context: RequestContext = None) -> Dict[str, Any]:
        query = payload.get("message", "")
        return await _agent.run(query)
    
    if __name__ == "__main__":
        app.run()
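
    To exercise the handler without starting the HTTP server, you can call it directly from an asyncio script. This is only a local sanity-check sketch: it assumes OPENAI_API_KEY is already exported in your shell and that @app.entrypoint returns a directly awaitable function, as the generated code suggests.

    # test_agent.py - hypothetical local test, not generated by the CLI
    import asyncio
    from agent import handler  # the @app.entrypoint function defined above

    async def main():
        # Bypasses the HTTP layer and calls the entry point directly;
        # requires OPENAI_API_KEY (and optionally OPENAI_BASE_URL) to be set.
        result = await handler({"message": "Hello"})
        print(result["response"])

    asyncio.run(main())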

  2. Configure environment variables: switch to the my_agent directory and edit .agentarts_config.yaml as follows:

    runtime:
      environment_variables:
        - key: OPENAI_API_KEY
          value: "your-openai-api-key"
        - key: OPENAI_MODEL_NAME
          value: "gpt-4o-mini"  # Optional: gpt-4o, gpt-4-turbo, etc.
        - key: OPENAI_BASE_URL
          value: ""  # Optional: custom API endpoint

  3. Run the following commands to debug locally.

    pip install -r requirements.txt
    
    agentarts dev

    After the service starts, the following endpoints are available:

    • Health check

      Example curl request

      curl http://localhost:8080/ping

      Example response

      {"status":"Healthy","time_of_last_update":1775718928}
    • Agent invocation

      Example curl request

      curl --location --request POST 'http://localhost:8080/invocations' --data-raw '{"message": "Hello, who are you?"}'

      Example successful response

      {"response": "你好!我是 **xxx**"}

Cloud Deployment

  1. Configure the agent deployment region.

    agentarts config set region cn-southwest-2

    Alternatively, run the agentarts config command for interactive configuration. The wizard guides you through the following settings (press Enter to accept the default value):

    • Deployment region: defaults to CN Southwest-Guiyang1 (cn-southwest-2)
    • SWR organization: created automatically by default
    • SWR repository: created automatically by default
    • Dependency file: defaults to requirements.txt

    If you need custom configuration (for example, a specific deployment region, image repository, agent inbound authentication, or environment variables), you can edit the settings in the .agentarts_config.yaml file manually.

  2. After configuration, deploy to the cloud on the AgentArts platform with a single command (Docker must be installed locally):

    agentarts launch 

    This command automatically performs the following steps:

    1. Builds the Docker image locally.
    2. Pushes the Docker image to the SWR repository.
    3. Deploys to the AgentArts runtime.

  3. Invoke the cloud-side agent.

    agentarts invoke '{"message": "Hello World"}'
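
    To drive the cloud agent from a script, one option is to shell out to the documented CLI. The sketch below is a hypothetical wrapper (invoke_batch.py is not part of the generated project) that relies only on the agentarts invoke command shown above:

    # invoke_batch.py - hypothetical wrapper around the documented CLI
    import json
    import subprocess

    def invoke(message: str) -> str:
        payload = json.dumps({"message": message})
        # Relies on the `agentarts invoke` command shown above.
        result = subprocess.run(
            ["agentarts", "invoke", payload],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    for msg in ("Hello World", "What can you do?"):
        print(invoke(msg))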
