Updated on 2025-07-08 GMT+08:00

Large Model Inference Process

DataArtsFabric provides the entire AI development process, from data preparation to model deployment, in serverless mode. You can also use DataArtsFabric independently at each stage of the process. This section describes the DataArtsFabric usage process. You can select one of the methods to complete AI development.

Table 1 Process description

| Process | Description | Reference |
|---|---|---|
| Creating a workspace | Create a workspace. All subsequent operations are performed in the workspace. | Creating a Workspace |
| Creating an endpoint | Create endpoints of different types based on the service type. | Creating an Inference Endpoint |
| Registering a model | Register a fine-tuned model file stored in OBS as your fine-tuned model on the model management page. | Creating a Model |
| Deploying a service | Deploy a model that has been fine-tuned from a base model. | Creating an Inference Service |
| Accessing the service | After the fine-tuned model is deployed, call the inference API provided by DataArtsFabric to perform inference. | Using an Inference Service for Inference |
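As a rough illustration of the last step, the sketch below assembles an HTTPS request for a deployed inference endpoint. The endpoint URL, the `X-Auth-Token` header, and the payload fields are assumptions for illustration only; consult the DataArtsFabric inference API reference for the actual request format.

```python
# Hypothetical sketch of preparing an inference call; the URL, auth
# header, and body schema are placeholders, not the documented API.
import json


def build_inference_request(endpoint_url, token, prompt, max_tokens=256):
    """Assemble the URL, headers, and JSON body for an inference call."""
    headers = {
        "Content-Type": "application/json",
        "X-Auth-Token": token,  # assumed IAM-token auth scheme
    }
    body = json.dumps({"prompt": prompt, "max_tokens": max_tokens})
    return endpoint_url, headers, body


url, headers, body = build_inference_request(
    "https://example.com/v1/infer",  # placeholder endpoint URL
    "<your-token>",
    "Summarize the quarterly report.",
)
print(json.loads(body)["prompt"])
```

The assembled request would then be sent with any HTTP client (for example, `requests.post(url, headers=headers, data=body)`), using the endpoint address shown on the service details page after deployment.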