Operation Guide
ModelArts provides an online coding environment and a full AI development lifecycle that covers data preparation, model training, model management, and service deployment. It is intended for developers who are familiar with writing and debugging code and with common AI engines, helping them build models quickly and efficiently.
This document describes how to perform AI development on the ModelArts management console. If you use the APIs or SDKs for development, see the ModelArts SDK Reference or the ModelArts API Reference.
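Once a model is deployed as a real-time service (see the lifecycle below), it can also be called programmatically over HTTPS. The following is a minimal sketch using only the Python standard library; the endpoint URL and token are placeholders (the real values come from the service details page and the IAM token API), and the payload shape depends on your model's input specification.

```python
# Hedged sketch: building an authenticated inference request for a
# ModelArts real-time service. URL, token, and payload are placeholders.
import json
import urllib.request


def build_inference_request(endpoint, token, payload):
    """Construct a POST request carrying an IAM token and a JSON body."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        endpoint,
        data=data,
        headers={
            "Content-Type": "application/json",
            "X-Auth-Token": token,  # IAM token obtained beforehand
        },
        method="POST",
    )


# Example (request is built but not sent here):
req = build_inference_request(
    "https://example.com/v1/infers/placeholder",  # placeholder endpoint
    "example-token",                              # placeholder token
    {"data": [[1.0, 2.0, 3.0]]},                  # model-specific payload
)
# Sending it would be: urllib.request.urlopen(req)
```

In production you would obtain the token from the IAM API and read the endpoint from the service's details page rather than hard-coding either value.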
To view the examples of AI development lifecycle, see Getting Started and Best Practices.
AI Development Lifecycle
The AI development lifecycle provided by ModelArts accommodates developers' habits and offers a variety of engines and scenarios to choose from. The following table describes the entire process, from development environment setup to service deployment, in ModelArts.
| Task | Sub Task | Description |
|---|---|---|
| Development Environment | Creating a Notebook Instance | Create a notebook instance as the development environment. |
| | Compiling and Debugging Code | Write and debug code in the notebook to build a model directly. |
| Training a Model | Selecting an Algorithm | Before creating a training job, select an algorithm. You can subscribe to a preset ModelArts algorithm or use your own algorithm. |
| | Creating a Training Job | Create a training job that uses the compiled training script. After training is complete, a model is generated and stored in OBS. |
| Managing AI Applications | Compiling Inference Code and Configuration Files | Following the model package specifications provided by ModelArts, compile the inference code and configuration files for your model, and save them to the training output location. |
| | Creating an AI Application | Import the trained model to ModelArts to create an AI application, facilitating deployment and publishing. |
| Deploying AI Applications | Deploying a Model as a Service | Deploy the model as a real-time service or a batch service. |
| | Accessing the Service | If the model is deployed as a real-time service, you can access and use the service. If it is deployed as a batch service, you can view the prediction results. |
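The model package specifications referenced in the table require, among other things, a configuration file (`config.json`) describing the model and its inference API. The following is a hedged sketch for an image-classification model; the exact field names and allowed values are defined by the ModelArts model package specification and should be verified against the current documentation before use.

```json
{
  "model_algorithm": "image_classification",
  "model_type": "TensorFlow",
  "apis": [
    {
      "url": "/",
      "method": "post",
      "request": {
        "Content-type": "multipart/form-data",
        "data": {
          "type": "object",
          "properties": {
            "images": { "type": "file" }
          }
        }
      },
      "response": {
        "Content-type": "application/json",
        "data": {
          "type": "object",
          "properties": {
            "predicted_label": { "type": "string" },
            "scores": { "type": "array" }
          }
        }
      }
    }
  ]
}
```

This file, together with the inference code, is saved under the training output location so that the model can be imported as an AI application.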