Function
-
DevEnviron
-
During AI development, it can be challenging to set up a development environment, select an AI framework and algorithm, debug code, install software, and configure hardware acceleration. To resolve these issues, ModelArts provides notebook instances for simplified development.
Both the old and new versions of ModelArts notebook are available. The new version offers optimized functions compared with the old one, and you are advised to use it. The following describes the functions of the new-version notebook.
Released in: EU-Dublin
-
JupyterLab
-
The in-cloud Jupyter notebook offered by ModelArts enables online interactive development and debugging. It works out of the box, so you do not need to install or configure anything.
-
-
-
Algorithm Management
-
You can upload algorithms developed locally or using other tools to ModelArts for unified management. With an algorithm you have created or subscribed to, you can quickly create a training job on ModelArts and obtain the desired model.
Released in: EU-Dublin
-
-
Training Management
-
All the algorithms developed locally or using other tools can be uploaded to ModelArts for unified management. You can also subscribe to algorithms in AI Gallery to build models. In AI Gallery, both the built-in algorithms officially released by ModelArts and the algorithms shared by other users are available for you to subscribe to.
You can quickly create training jobs on ModelArts using either the algorithms you have created or those you have subscribed to.
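As an illustration only, the following minimal Python sketch shows how a training job might be started with the ModelArts SDK using an algorithm you have created or subscribed to. The module paths, parameter names, OBS paths, and resource flavor are assumptions and placeholders, not an exact API reference.

    # Hypothetical sketch; exact class and parameter names may differ by SDK version.
    from modelarts.session import Session
    from modelarts.estimatorV2 import Estimator   # assumed module path

    session = Session()                            # no credentials needed inside a notebook

    estimator = Estimator(
        session=session,
        algorithm_id="<your-algorithm-id>",        # placeholder: created or subscribed algorithm
        train_instance_type="modelarts.vm.cpu.8u", # placeholder resource flavor
        train_instance_count=1,
        output_path="obs://<your-bucket>/output/", # placeholder OBS path for the trained model
    )
    estimator.fit(
        inputs="obs://<your-bucket>/dataset/",     # placeholder OBS path for the training data
        job_name="demo-training-job",
    )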
Released in: EU-Dublin
-
Using Subscribed Algorithms to Develop Models
-
Both officially released algorithms and custom algorithms shared by developers are available in ModelArts AI Gallery. You can use these algorithms to build models without writing any code.
-
-
Using Custom Algorithms to Develop Models
-
If the algorithms available for subscription cannot meet your service requirements, or you want to migrate local algorithms to the cloud for training, you can use the built-in training engines of ModelArts to create algorithms. This is also known as creating an algorithm using a custom script.
ModelArts offers almost all mainstream AI engines. These built-in engines are pre-loaded with some additional Python packages, such as NumPy. You can also use the requirements.txt file in the code directory to install dependency packages.
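For example, a requirements.txt file placed in the code directory might contain entries such as the following; the package names and versions are illustrative only.

    # Illustrative requirements.txt: extra packages the training script depends on.
    pandas==1.3.5
    scikit-learn==1.0.2
    Pillow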
-
-
Using Custom Images to Develop Models
-
The built-in training engines and the algorithms that can be subscribed to apply to most training scenarios. In certain scenarios, ModelArts allows you to create custom images to train models. Custom images can be used in the cloud only after they are uploaded to the Software Repository for Container (SWR).
Customizing an image requires a deep understanding of containers. Use this method only if the algorithms that are available to subscribe to and the built-in training engines cannot meet your requirements.
-
-
-
Model Management
-
ModelArts allows you to deploy models as AI applications and centrally manage these applications. The models can be developed locally or obtained from training jobs.
ModelArts also enables you to convert models and deploy them on different devices, such as Arm devices.
Released in: EU-Dublin
-
Importing a Meta Model from OBS Through Manual Configurations
-
If a frequently used framework is used for model development and training, you can import the model to ModelArts and use it to create an AI application for unified management.
-
-
Importing Models from Custom Images
-
For an AI engine that is not supported by ModelArts, build a model for that engine, customize an image for the model, import the model to ModelArts, and deploy the model as an AI application.
-
-
Model Package Specifications
-
When you create an AI application in AI Application Management, if the meta model is imported from OBS or a container image, ensure that the model package complies with the specifications:
Edit the inference code and configuration file for subsequent inference deployment.
Note: For a model trained using a built-in algorithm, the inference code and configuration file are already configured, and you do not need to configure them again.
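As a rough illustration of such a model package (the authoritative layout is defined in the model package specifications), a meta model imported from OBS is commonly organized as a model directory that holds the model file, the configuration file, and the inference code:

    model/
    ├── <model file>               # for example, a saved TensorFlow or PyTorch model (placeholder)
    ├── config.json                # configuration file: AI engine, runtime, and input/output definition
    └── customize_service.py       # inference code: pre-processing and post-processing logic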
-
-
-
Service Deployment
-
AI model deployment and large-scale implementation are complex. ModelArts provides one-stop deployment modes that allow you to deploy trained models on devices, at the edge, and in the cloud with just one click.
-
Real-Time Services
-
Real-time inference services feature high concurrency, low latency, and elastic scaling, and support multi-model gray release and A/B testing. You can deploy a model as a web service that provides a real-time test UI and monitoring capabilities.
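For illustration, a deployed real-time service can be called over HTTPS as sketched below. The endpoint URL, token, and payload are placeholders, and the actual request format depends on the input definition of your model.

    # Hypothetical request to a deployed real-time service; URL, token, and payload are placeholders.
    import requests

    url = "https://<inference-endpoint>/v1/infers/<service-id>"   # placeholder endpoint
    headers = {
        "Content-Type": "application/json",
        "X-Auth-Token": "<your-IAM-token>",                       # placeholder authentication token
    }
    payload = {"data": [[5.1, 3.5, 1.4, 0.2]]}                    # placeholder input matching the model schema

    response = requests.post(url, json=payload, headers=headers)
    print(response.status_code, response.json())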
Released in: EU-Dublin
-
-
Batch Services
-
Batch services are suitable for processing a large amount of data and distributed computing. You can use a batch service to perform inference on data in batches. The batch service automatically stops after data processing is completed.
Released in: EU-Dublin
-
-
-
Resource Pools
-
When you use ModelArts for AI development, you may require CPU and GPU resources for training or inference. ModelArts provides pay-per-use public resource pools and queue-free dedicated resource pools to meet a diverse range of development requirements.
-
Public Resource Pools
-
A public resource pool provides public large-scale computing clusters, which are allocated based on job parameter settings. Resources are isolated by job. Billing of public resource pools is based on the resource specifications, duration, and instance quantity, regardless of the tasks (including training, deployment, and development) for which the pools are used. Public resource pools are available by default. You can select a public resource pool during AI development.
-
-
Dedicated Resource Pools
-
A dedicated resource pool provides exclusive compute resources, which can be used for notebook instances, training jobs, and model deployment. Dedicated resource pools deliver higher efficiency, and cannot be shared with other users. You can buy a dedicated resource pool and select it during AI development.
Released in: EU-Dublin
-
-
-
ModelArts SDK
-
ModelArts Software Development Kit (ModelArts SDK) encapsulates the ModelArts RESTful APIs in Python to simplify application development. You can call the ModelArts SDK to easily manage datasets, start AI training, generate models, and deploy models.
In notebook instances, you can use the ModelArts SDK to manage OBS, training jobs, models, and real-time services without configuring authentication.
Released in: EU-Dublin
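A minimal sketch of using the SDK inside a notebook instance is shown below. Because the notebook is already authenticated, Session() is created without explicit credentials; the OBS helper method and paths are assumptions that may differ by SDK version.

    # Minimal sketch inside a ModelArts notebook instance.
    from modelarts.session import Session

    session = Session()   # no access keys required inside a notebook

    # Assumed OBS helper for illustration; check the SDK reference for the exact method name.
    session.obs.download_file(
        src_obs_file="obs://<your-bucket>/data/sample.csv",   # placeholder OBS object
        dst_local_dir="/home/ma-user/work/",                  # typical notebook working directory
    )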
-
-
Free Trial
-
Free ModelArts compute resources are available for a limited time. Experience ModelArts at minimal cost, covering ExeML, the development environment (notebook), and the end-to-end AI development process.
-
-
VPC Peering Connection
-
What is a VPC peering connection?
A Virtual Private Cloud (VPC) provides an isolated virtual network environment that you configure and manage for resources such as cloud servers, cloud containers, and cloud databases, improving the security of your cloud resources and simplifying network deployment.
In a VPC, you can define security groups, VPNs, IP address ranges, and bandwidth. This makes it easy to manage and configure internal networks and make secure, quick network changes. You can also customize access rules for ECSs within a security group and between security groups to strengthen ECS protection.
Released in all regions except AP-Bangkok, AP-Singapore, and LA-Santiago.
-