ModelArts Standard Usage
This chapter aims to help you learn how to use ModelArts Standard and get started with the ModelArts service quickly.
For developers who are experienced in coding, debugging, and working with AI engines, ModelArts provides online coding environments as well as an E2E AI development process that covers data preparation, model training, model management, and service deployment.
This document describes how to perform AI development on the ModelArts management console. If you use the APIs or SDKs for development, see ModelArts SDK Reference or ModelArts API Reference.
For examples of the AI development lifecycle, see Getting Started and Best Practices.
Application Scenarios of ModelArts Standard
- ModelArts Standard ExeML helps you build AI models without coding. ExeML automates model design, parameter tuning and training, and model compression and deployment based on labeled data. With ExeML, you only need to upload data and perform simple operations as prompted on the ExeML GUI to train and deploy models. For details, see Introduction to ExeML.
- ModelArts Standard's workflow is a low-code AI development pipeline tool, covering data labeling, data processing, model development, training, model evaluation, and service deployment. Workflows are built and executed visually. For details, see What Is Workflow?.
- ModelArts Standard's development environment, notebook, provides a cloud-based JupyterLab environment and local IDE plug-ins, helping you write training and inference code and use cloud resources to debug the code. For details, see Notebook Application Scenarios.
- ModelArts Standard's model training provides GUI-based training, debugging, and production environments. You can use your own data and algorithms to train models using the compute resources provided by ModelArts Standard. For details, see Model Training.
- ModelArts Standard's inference deployment provides a GUI-based production environment for inference deployment. After an AI model is developed, you can manage it and quickly deploy it as an inference service. You can perform online inference and prediction or integrate AI inference capabilities into your IT platform by calling APIs. For details, see Overview.
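To make the "calling APIs" path above concrete, here is a minimal sketch of how a prediction request to a deployed real-time service might be assembled. The endpoint URL, the token value, and the `data` payload key are placeholder assumptions; the actual endpoint, IAM token, and request schema come from the ModelArts console and your model's configuration file.

```python
import json

# Placeholder values -- the real endpoint URL and IAM token are obtained
# from the ModelArts console after the real-time service is deployed.
ENDPOINT = "https://example-modelarts-endpoint/v1/infers/your-service-id"
TOKEN = "your-iam-token"

def build_inference_request(features):
    """Assemble headers and a JSON body for a prediction call.

    The "data" key is an assumed schema for illustration; the real input
    format is defined by the model's configuration file.
    """
    headers = {
        "Content-Type": "application/json",
        # Requests to the service are authenticated with an IAM token header.
        "X-Auth-Token": TOKEN,
    }
    body = json.dumps({"data": features})
    return headers, body

headers, body = build_inference_request([[5.1, 3.5, 1.4, 0.2]])
```

The returned headers and body would then be sent with any HTTP client (for example, `requests.post(ENDPOINT, headers=headers, data=body)`); the request is only assembled here so the sketch stays self-contained.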
Process for Using ModelArts Standard
The AI development lifecycle on ModelArts Standard allows you to experience end-to-end AI development, from preparing data to deploying a model as a service. It takes developers' habits into consideration and provides a variety of engines and scenarios for you to choose. You can use the ModelArts Standard functions as needed in each phase during AI development. The following describes the entire process from data preparation to service development using ModelArts.
| Task | Subtask | Description | Reference |
|---|---|---|---|
| Assigning permissions | Configuring agency authorization for ModelArts | ModelArts depends on other cloud services. You need to configure agency authorization to allow ModelArts to access these services. | Configuring Agency Authorization for ModelArts with One Click |
| Assigning permissions | Creating an IAM user and granting ModelArts permissions | Enterprise and university users should create independent IAM users and sub-users and configure fine-grained permissions for them to achieve refined resource and permission management. | |
| (Optional) Creating an OBS bucket | Creating an OBS bucket for ModelArts to store data | ModelArts does not store data itself. The input data, output data, and cached data generated during AI development with ModelArts Standard can be stored in OBS buckets, so you are advised to create an OBS bucket before using ModelArts. You can also create one later when needed. | |
| (Optional) Preparing resources | Creating a dedicated ModelArts Standard resource pool | ModelArts Standard supports both public and dedicated resource pools. Public resource pool: when creating a training or inference task, you can use the public resource pool directly without creating one yourself; if you use it, skip this step. Dedicated resource pool: you must purchase and create a dedicated resource pool first, but its resources are exclusively yours; this step is mandatory if you use a dedicated resource pool. | |
| (Optional) Preparing data | Creating a dataset | ModelArts Standard supports data management. You can create datasets in ModelArts Standard for managing, preprocessing, and labeling data. If your training data is already prepared, you can upload it directly to OBS without using the data management function. | |
| Developing and debugging code in the development environment | Creating a notebook instance | Create a notebook instance as the development environment for debugging training and inference code. You are advised to debug the training code in the development environment before creating a production training job. | |
| Training a model | Preparing algorithms | Before creating a training job, prepare an algorithm. You can subscribe to an algorithm in AI Gallery or use your own algorithm. | |
| Training a model | Creating a training job | Create a training job, select the available dataset version, and use the compiled training script. After training is complete, a model is generated and stored in OBS. | |
| Managing AI applications | Compiling inference code and configuration files | Following the model package specifications provided by ModelArts, compile inference code and configuration files for your model, and save them to the training output path. | |
| Managing AI applications | Creating an AI application | Import the trained model into ModelArts to create an AI application, facilitating AI application deployment and publishing. | |
| Deploying AI applications | Deploying a model as a service | Deploy the model as a real-time, batch, or edge service. | |
| Deploying AI applications | Accessing the service | After the service is deployed, access the real-time or edge service, or view the prediction results of the batch service. | |
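The sequence of phases in the table above can be sketched as a simple ordered pipeline. The stage names mirror the table; the callables and context keys (such as `obs_bucket` and `model_path`) are illustrative placeholders, not ModelArts SDK calls.

```python
# A minimal sketch of the end-to-end phase ordering described in the table.
# Each stage receives the accumulated context and adds its output to it;
# all names and values here are placeholders for illustration only.

PIPELINE = [
    ("assign_permissions", lambda ctx: ctx | {"authorized": True}),
    ("prepare_storage",    lambda ctx: ctx | {"obs_bucket": "demo-bucket"}),
    ("prepare_data",       lambda ctx: ctx | {"dataset": "labeled-data"}),
    ("debug_in_notebook",  lambda ctx: ctx | {"code_ready": True}),
    ("train_model",        lambda ctx: ctx | {"model_path": "obs://demo-bucket/model/"}),
    ("create_application", lambda ctx: ctx | {"app": "my-ai-app"}),
    ("deploy_service",     lambda ctx: ctx | {"service": "real-time"}),
]

def run_pipeline(stages):
    """Run each stage in order, threading the shared context through."""
    ctx = {}
    for name, step in stages:
        ctx = step(ctx)
    return ctx

result = run_pipeline(PIPELINE)
```

The point of the sketch is the ordering: later stages depend on the outputs of earlier ones (for example, training writes a model to the OBS bucket created earlier), which is why the optional storage and data steps are best completed before training begins.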