Updated on 2024-10-29 GMT+08:00

Overview

Developed AI models can be used to create AI applications, which can then be quickly deployed as inference services. These services can be integrated into your IT platform through API calls or used to generate batch inference results.

Figure 1 Introduction to inference

  1. Train a model: Models can be trained in ModelArts or your local development environment. A locally developed model must be uploaded to Huawei Cloud OBS.
  2. Create an AI application: Import the model file and inference file to the ModelArts model repository and manage them by version. Use these files to build an executable AI application.
  3. Deploy a service: Deploy the AI application as a real-time or batch service, depending on your needs.
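
Once a service is deployed, a client typically invokes it over HTTPS with a JSON payload and an authentication token. The sketch below only assembles such a request; the endpoint URL, the `X-Auth-Token` header, and the payload shape are illustrative assumptions — check the service details page in the ModelArts console and the API reference for the actual values your service expects.

```python
import json

def build_inference_request(endpoint, token, payload):
    """Assemble the URL, headers, and JSON body for a POST to a
    deployed real-time inference service (a sketch; field names
    are assumptions, not the authoritative ModelArts contract)."""
    headers = {
        "Content-Type": "application/json",
        # IAM token header; placeholder value, obtain a real token from IAM.
        "X-Auth-Token": token,
    }
    body = json.dumps(payload)
    return endpoint, headers, body

# Hypothetical endpoint and token, for illustration only.
url, headers, body = build_inference_request(
    "https://example.com/v1/infers/demo-service",
    "IAM_TOKEN_PLACEHOLDER",
    {"data": [[5.1, 3.5, 1.4, 0.2]]},
)
print(headers["Content-Type"])
```

Sending the request (for example with the `requests` library) would then be a single `requests.post(url, headers=headers, data=body)` call against the real service endpoint.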