Operation Guide
ModelArts provides an online code development environment and an AI development lifecycle that covers data preparation, model training, model management, and service deployment. It helps developers who are familiar with code compilation, debugging, and common AI engines build models quickly and efficiently.
This document describes how to perform AI development on the ModelArts management console. If you use the APIs or SDKs for development, see the ModelArts SDK Reference or ModelArts API Reference.
For examples of the AI development lifecycle, see Modeling with MXNet and Modeling with Notebook. For details about how to use a built-in algorithm to build a model, see AI Beginners: Using a Built-in Algorithm to Build a Model.
AI Development Lifecycle
The AI development lifecycle provided by ModelArts takes developers' habits into consideration and offers a variety of engines and scenarios to choose from. The following table describes the entire process, from data preparation to service deployment, on the ModelArts platform.

Task | Sub Task | Description
---|---|---
Prepare Data | Create a dataset. | Create a dataset in ModelArts to manage and preprocess your business data.
| Label data. | Label and preprocess the data in your dataset based on the business logic to facilitate subsequent training. The quality of data labeling directly affects model training performance.
| Publish the dataset. | After labeling the data, publish the dataset to generate a dataset version that can be used for model training.
Develop Script | Create a notebook instance. | Create a notebook instance as the development environment.
| Compile code. | Compile code in an existing notebook to directly build a model.
| Export the .py file. | Export the compiled training script as a .py file for subsequent operations, such as model training and management.
Train a Model | Create a training job. | Create a training job, and upload and use the compiled training script. After training is complete, a model is generated and stored in OBS.
| (Optional) Create a visualization job. | Create a visualization job (TensorBoard type) to view the model training process, learn about the model, and adjust and optimize it. Currently, visualization jobs support only the MXNet and TensorFlow engines.
Manage Models | Compile inference code and configuration files. | Following the model package specifications provided by ModelArts, compile the inference code and configuration files for your model, and save them to the training output location.
| Import the model. | Import the trained model to ModelArts to facilitate service deployment.
Deploy a Model | Deploy a model as a service. | Deploy the model as a real-time service or a batch service.
| Access the service. | If the model is deployed as a real-time service, you can access and use the service. If it is deployed as a batch service, you can view the prediction result.
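To illustrate the last step (accessing a real-time service), the sketch below builds an authenticated HTTPS request with Python's standard library. Huawei Cloud APIs authenticate with an X-Auth-Token header obtained from IAM; however, the endpoint URL, token value, and request payload schema used here are placeholders for illustration only, not actual ModelArts values. See the ModelArts API Reference for the real request formats of your deployed service.

```python
import json
import urllib.request

def build_prediction_request(endpoint: str, token: str, payload: dict) -> urllib.request.Request:
    """Prepare an authenticated POST request for a real-time service.

    `endpoint` is the service URL shown on the service details page,
    and `token` is an IAM user token. Both values passed in below are
    placeholders, not real credentials.
    """
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url=endpoint,
        data=body,
        headers={
            "Content-Type": "application/json",
            "X-Auth-Token": token,  # IAM token; placeholder value here
        },
        method="POST",
    )

# Hypothetical endpoint, token, and payload schema for illustration.
req = build_prediction_request(
    "https://example.com/v1/infers/placeholder-service",
    "<IAM_TOKEN>",
    {"data": {"req_data": [{"feature_1": 0.5}]}},
)
# To actually send it: urllib.request.urlopen(req)
```

The request is deliberately built but not sent, so you can inspect the method, headers, and body before calling the live service.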
