
Operation Guide

Updated at: Dec 28, 2020 GMT+08:00

ModelArts provides online code development environments and an AI development lifecycle that covers data preparation, model training, model management, and service deployment. It is designed for developers who are familiar with writing and debugging code and with common AI engines, helping them build models quickly and efficiently.

This document describes how to perform AI development on the ModelArts management console. If you use the APIs or SDKs for development, see the ModelArts SDK Reference or ModelArts API Reference.

For examples of the AI development lifecycle, see Modeling with MXNet and Modeling with Notebook. For details about how to use a built-in algorithm to build a model, see AI Beginners: Using a Built-in Algorithm to Build a Model.

AI Development Lifecycle

The AI development lifecycle provided by ModelArts accommodates different development habits and offers a variety of engines and scenarios to choose from. The following describes the entire process, from data preparation to service deployment, on the ModelArts platform.

Figure 1 Process of using ModelArts
Table 1 Process description

Prepare Data

1. Create a dataset: Create a dataset in ModelArts to manage and preprocess your business data. (Reference: Creating a Dataset)
2. Label data: Label and preprocess the data in the dataset based on your business logic to facilitate subsequent training. The quality of data labeling directly affects model training performance. (Reference: Labeling Data)
3. Publish the dataset: After labeling the data, publish the dataset to generate a dataset version that can be used for model training. (Reference: Publishing a Dataset)
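To make the labeling step concrete, the sketch below parses a single labeling record stored as a JSON line. The field names ("source", "annotation", "name") and the OBS path are illustrative assumptions for this example, not the exact ModelArts manifest schema; see the dataset documentation for the real format.

```python
import json

# Illustrative labeling record in JSON Lines form. Field names are
# assumptions for this sketch, not the exact ModelArts manifest schema.
record_line = json.dumps({
    "source": "obs://my-bucket/images/cat_001.jpg",  # hypothetical OBS path
    "annotation": [{"name": "cat", "type": "image_classification"}],
})

def parse_label_record(line):
    """Parse one labeling record and return (data path, first label name)."""
    record = json.loads(line)
    return record["source"], record["annotation"][0]["name"]

path, label = parse_label_record(record_line)
print(path, label)
```

A published dataset version is essentially a frozen collection of such records, which is what makes training reproducible.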

Develop Script

1. Create a notebook instance: Create a notebook instance to use as the development environment. (Reference: Creating a Notebook Instance)
2. Write code: Write code in the notebook to build a model directly. (References: Common Operations on Jupyter Notebook; JupyterLab Overview and Common Operations)
3. Export the .py file: Export the training script as a .py file for subsequent operations, such as model training and management. (Reference: Using the Convert to Python File Function)
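Conceptually, the "Convert to Python File" step is simple: a .ipynb notebook file is JSON, and exporting it keeps only the code cells. The following is a minimal standard-library sketch of that idea, not the ModelArts implementation:

```python
import json

def notebook_to_script(ipynb_text):
    """Concatenate the code cells of a Jupyter notebook into a .py script."""
    nb = json.loads(ipynb_text)
    chunks = []
    for cell in nb.get("cells", []):
        if cell.get("cell_type") == "code":
            # Each cell's source is a list of line strings.
            chunks.append("".join(cell["source"]))
    return "\n\n".join(chunks)

# A tiny in-memory notebook for demonstration.
demo_nb = json.dumps({
    "cells": [
        {"cell_type": "markdown", "source": ["# Training notes\n"]},
        {"cell_type": "code", "source": ["x = 1\n", "print(x)\n"]},
    ]
})
print(notebook_to_script(demo_nb))
```

Markdown cells are dropped, so the exported .py file contains only runnable training code.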

Train a Model

1. Create a training job: Create a training job, and upload and use the training script. After training is complete, the generated model is stored in OBS. (Reference: Creating a Training Job)
2. (Optional) Create a visualization job: Create a visualization job (TensorBoard type) to observe the training process, understand the model, and adjust and optimize it. Currently, visualization jobs support only the MXNet and TensorFlow engines. (Reference: Managing a TensorBoard Job)
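A training script typically receives its input-data path and output path as command-line arguments when the job starts. The parameter names below ("data_url", "train_url") follow a common ModelArts convention for the dataset input and training output locations, but treat them, and the "max_epochs" hyperparameter, as assumptions; check your job configuration for the actual names.

```python
import argparse

def parse_args(argv=None):
    """Parse the arguments a training job passes to the script (sketch)."""
    parser = argparse.ArgumentParser(description="training job entry point")
    parser.add_argument("--data_url", type=str, required=True,
                        help="input data path (e.g. an OBS location)")
    parser.add_argument("--train_url", type=str, required=True,
                        help="output path for the trained model")
    parser.add_argument("--max_epochs", type=int, default=10)  # hypothetical hyperparameter
    return parser.parse_args(argv)

# Simulate the arguments a job might pass; paths are placeholders.
args = parse_args(["--data_url", "obs://bucket/data/",
                   "--train_url", "obs://bucket/output/"])
print(args.data_url, args.train_url, args.max_epochs)
```

The script then reads training data from the input path and writes the resulting model to the output path, which is where ModelArts looks for it after the job finishes.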

Manage Models

1. Write inference code and configuration files: Following the model package specifications provided by ModelArts, write the inference code and configuration files for your model, and save them to the training output location. (Reference: Model Package Specifications)
2. Import the model: Import the trained model to ModelArts to facilitate service deployment. (Reference: Importing a Model)
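The configuration file is a JSON document describing the model so that ModelArts can deploy it. The fragment below is illustrative only; the field names and values are assumptions for this sketch, so consult Model Package Specifications for the actual schema and required fields.

```json
{
  "model_type": "TensorFlow",
  "model_algorithm": "image_classification",
  "apis": [
    {
      "protocol": "http",
      "url": "/",
      "method": "post"
    }
  ]
}
```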

Deploy a Model

1. Deploy a model as a service: Deploy the model as a real-time service or a batch service.
2. Access the service: If the model is deployed as a real-time service, you can access and use the service directly. If the model is deployed as a batch service, you can view the prediction results after the job completes.
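Accessing a real-time service means sending an authenticated HTTP POST request to the service endpoint. The sketch below builds such a request with the standard library but does not send it; the endpoint URL, the "X-Auth-Token" header value, and the payload shape are all placeholders for this example, and the real endpoint and input format come from the service details page.

```python
import json
import urllib.request

def build_prediction_request(endpoint, token, payload):
    """Build (but do not send) an HTTP POST request for a prediction."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=body,
        headers={"Content-Type": "application/json", "X-Auth-Token": token},
        method="POST",
    )

req = build_prediction_request(
    "https://example.com/v1/infers/demo",  # placeholder endpoint
    "***token***",                         # placeholder credential
    {"instances": [[0.1, 0.2, 0.3]]},      # hypothetical input format
)
print(req.get_method(), req.get_full_url())
# To actually send the request: urllib.request.urlopen(req)
```

For a batch service there is no request to send; instead, the prediction results are written to the configured output location, where you can inspect them.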

Using Built-in Algorithms to Build Models

Users with basic AI knowledge can use their own business data and select a common algorithm (a ModelArts built-in algorithm) for model training to obtain a new model.
