Updated on 2024-12-26 GMT+08:00

Overview

You can import and deploy AI models as inference services. These services can be integrated into your IT platform through API calls, or used to generate inference results in batches.

Figure 1 Introduction to inference
  1. Train a model: Models can be trained in ModelArts or your local development environment. A locally developed model must be uploaded to Huawei Cloud OBS.
  2. Create a model: Import the model file and inference code into the ModelArts model repository and manage them by version. These files are used to build an executable model.
  3. Deploy a service: Deploy the model as a real-time or batch service, depending on your needs.
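Once a real-time service is deployed, your IT platform invokes it over HTTP. The sketch below shows one way to assemble such a request in Python; the endpoint URL, service ID, and payload shape are placeholders, and the `X-Auth-Token` header reflects IAM token authentication, which is one of several authentication options Huawei Cloud supports.

```python
import json

# Hypothetical values -- replace with your deployed service's endpoint
# and a valid IAM token for your account.
ENDPOINT = "https://modelarts-infer.example.com/v1/infers/your-service-id"
TOKEN = "your-iam-token"

def build_inference_request(endpoint, token, payload):
    """Assemble the URL, headers, and JSON body for calling a deployed
    real-time inference service (a sketch, not the definitive API)."""
    headers = {
        "Content-Type": "application/json",
        "X-Auth-Token": token,  # assumed token-based auth header
    }
    return endpoint, headers, json.dumps(payload)

url, headers, body = build_inference_request(
    ENDPOINT, TOKEN, {"data": [1.0, 2.0, 3.0]}
)

# To actually send the request, use an HTTP client of your choice, e.g.:
#   import requests
#   resp = requests.post(url, headers=headers, data=body)
#   print(resp.json())
print(url)
print(body)
```

Batch services follow a different pattern: instead of per-request calls, you point the service at input data in OBS and collect the results from an output location.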