Function
ExeML
-
ExeML is a customized code-free model development tool that helps you start AI application development from scratch. Using your labeled data, ExeML automates model design, parameter tuning, training, compression, and deployment. As a developer, you only need to upload data and follow the prompts to complete model training and deployment; no coding experience is required.
Currently, you can use ExeML to quickly create image classification, object detection, predictive analytics, sound classification, and text classification models. ExeML is ideal for the industrial, retail, and security fields.
- Image classification: identifies a class of objects in images.
- Object detection: identifies the position and class of each object in an image.
- Predictive analytics: classifies or predicts structured data.
- Sound classification: classifies and identifies sounds.
- Text classification: identifies the category of a piece of text.
Public resource pools are available in all regions except AP-Bangkok, AP-Singapore, and LA-Santiago.
-
Development Tools
-
During AI development, it is challenging to set up a development environment, select an AI framework and algorithm, debug code, install software, or accelerate hardware. To resolve these issues, ModelArts offers the development tool notebook for simplified development.
Released in: AP-Bangkok, AP-Singapore, CN-Hong Kong, and CN North-Beijing4
JupyterLab
-
The in-cloud Jupyter notebook provided by ModelArts enables online interactive development and debugging. It can be used out of the box, relieving you of installation and configuration.
-
Local IDE (PyCharm)
-
ModelArts provides the PyCharm plug-in PyCharm Toolkit, with which you can upload code, submit training jobs, and obtain training logs for local display. In this way, you only need to focus on local code development.
-
Local IDE (VS Code)
-
After creating a notebook instance with remote SSH enabled, you can access the instance from VS Code either manually or with one click through the VS Code Toolkit.
-
-
Algorithm Management
-
All the algorithms developed locally or using non-ModelArts tools can be uploaded to ModelArts for unified management. You can also subscribe to algorithms in AI Gallery to build models.
The algorithms you have created and those you have subscribed to can be used to quickly create training jobs on ModelArts to obtain your desired models.
Released in: AP-Bangkok, AP-Singapore, CN-Hong Kong, and CN North-Beijing4
-
Training Management
-
ModelArts model training allows you to view training results and tune model parameters based on the training results. You can select resource pools with different instance flavors for model training. In addition to custom models, ModelArts allows you to subscribe to algorithms from AI Gallery, after which you only need to adjust algorithm parameters to obtain your desired models.
ModelArts provides both an old and a new version of model training. The new version features enhanced functions, optimized scheduling, and improved APIs, and is therefore recommended. The following describes the functions of the new version.
Released in: AP-Bangkok, AP-Singapore, CN-Hong Kong, and CN North-Beijing4
Using a Subscribed Algorithm to Develop Models
-
Numerous algorithms shared by developers are available in ModelArts AI Gallery. You can use these algorithms to build models without writing any code.
-
Using a Custom Algorithm to Develop Models
-
If the algorithms that are available for subscription cannot meet service requirements or you want to migrate local algorithms to the cloud for training, use the training engines built into ModelArts to create algorithms. This process is also known as using a custom script to create algorithms.
Almost all mainstream AI engines are built into ModelArts. These built-in engines are preloaded with some additional Python packages, such as NumPy. You can also use a requirements.txt file in the code directory to install additional dependency packages.
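For example, a requirements.txt file placed in the code directory could declare extra dependencies like the following (the packages and version pins below are purely illustrative):

```text
# requirements.txt in the training code directory (illustrative example)
pandas==1.4.3
scikit-learn==1.1.2
pillow>=9.0
```

ModelArts installs these packages before the training script starts, so the script can import them as usual.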
-
Using a Custom Image to Develop Models
-
The built-in training engines and the algorithms that are available for subscription apply to most training scenarios. In certain scenarios, ModelArts allows you to create custom images to train models. Custom images can be used in the cloud only after they are uploaded to the Software Repository for Container (SWR).
Customizing an image requires a deep understanding of containers. Use this method only if your requirements cannot be met by either the algorithms that are available for subscription or the built-in training engines.
-
-
AI Application Management
-
You can use ModelArts to deploy models as AI applications, after which ModelArts centrally manages these applications. The models can be developed locally or obtained from training jobs.
ModelArts also enables you to convert models and deploy them on different devices, such as Ascend devices.
Released in: AP-Bangkok, AP-Singapore, CN-Hong Kong, and CN North-Beijing4
Creating an AI Application Using a Meta Model with a Mainstream AI Engine
-
If a model is developed and trained using a mainstream AI engine, import the model to ModelArts and use the model to create an AI application. In this way, the AI application can be centrally managed on ModelArts.
- If your model is trained on ModelArts, directly import the model to ModelArts from the training job.
- If your model is trained locally or on a third-party platform, upload the model to OBS and then import the model from OBS to ModelArts.
-
Creating an AI Application Using a Custom Image
-
To use an AI engine that is not supported by ModelArts, create a custom image for the engine, import the image to ModelArts, and use the image to create AI applications.
-
Creating an Application Using a Template
-
Configurations for models with the same functions are similar. To avoid repetitive configuration, ModelArts consolidates the configurations of such models into a universal template. With a template, you can easily and quickly import models to ModelArts and create AI applications without writing the config.json configuration file. Each template corresponds to a specific AI engine and inference mode.
-
Model Package Specifications
-
When you create an AI application in AI Application Management, if the meta model is imported from OBS or a container image, ensure the model package complies with specifications.
Edit the inference code and configuration file for subsequent service deployment.
Note: A model trained using a built-in algorithm already has its inference code and configuration file configured, so you do not need to configure them separately.
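The exact schema is defined by the ModelArts model package specification; the snippet below is only an illustrative sketch of the kind of fields such a config.json typically declares (all field values here are hypothetical):

```json
{
  "model_algorithm": "image_classification",
  "model_type": "TensorFlow",
  "apis": [
    {
      "protocol": "https",
      "url": "/",
      "method": "post",
      "request": {
        "Content-type": "multipart/form-data"
      },
      "response": {
        "Content-type": "application/json"
      }
    }
  ]
}
```

The configuration file sits alongside the model file and inference code in the model package, so the deployed service knows which engine to load and what request/response format to expose.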
-
Model Conversion
-
To obtain higher compute power and performance, you can use model conversion offered by ModelArts to convert existing ModelArts models or the models created locally to the target formats for running on Ascend.
Released in: AP-Singapore and CN North-Beijing4
-
-
Service Deployment
-
AI model deployment and large-scale service application are complex. To simplify these operations, ModelArts provides one-stop deployment, allowing you to deploy trained models to devices, the edge, and the cloud with just one click.
Real-Time Services
-
Real-time inference services feature high concurrency, low latency, and elastic scaling, and support multi-model gray release and A/B testing. You can deploy a model as a web service with a real-time test UI and monitoring.
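The gray release mentioned above splits incoming traffic between model versions by weight. The sketch below is a conceptual illustration of that routing idea, not the ModelArts implementation; the function name and weights are hypothetical:

```python
import random

def route_request(weights, rng=random.random):
    """Pick a model version for one request according to traffic weights.

    weights: dict mapping version name -> fraction of traffic (sums to 1.0).
    This mimics the weighted routing a gray release performs.
    """
    r = rng()
    cumulative = 0.0
    for version, weight in weights.items():
        cumulative += weight
        if r < cumulative:
            return version
    return version  # fall back to the last version on rounding error

# Send 90% of traffic to the stable version and 10% to the candidate.
weights = {"v1": 0.9, "v2": 0.1}
counts = {"v1": 0, "v2": 0}
random.seed(0)
for _ in range(10_000):
    counts[route_request(weights)] += 1
```

Once the candidate version ("v2" here) proves stable under its small traffic share, its weight can be increased until it serves all requests.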
Released in: AP-Bangkok, AP-Singapore, CN-Hong Kong, and CN North-Beijing4
-
Batch Services
-
Batch services are suitable for processing a large amount of data and distributed computing. You can use a batch service to perform inference on data in batches. The batch service automatically stops after data processing is complete.
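The batch-processing pattern described above can be sketched in a few lines: split the input into fixed-size chunks, run inference on each chunk, and stop when every chunk has been handled. This is a conceptual sketch with a toy model, not the ModelArts scheduler itself:

```python
def iter_batches(items, batch_size):
    """Yield successive fixed-size batches from a sequence."""
    for start in range(0, len(items), batch_size):
        yield items[start:start + batch_size]

def run_batch_inference(samples, predict, batch_size=4):
    """Apply a model's predict function batch by batch and collect results.

    The job naturally terminates once all batches are processed, mirroring
    how a batch service stops after the data is consumed.
    """
    results = []
    for batch in iter_batches(samples, batch_size):
        results.extend(predict(batch))
    return results

# Toy "model": classify numbers as even or odd.
preds = run_batch_inference(
    list(range(10)),
    lambda batch: ["even" if x % 2 == 0 else "odd" for x in batch],
)
```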
Released in: AP-Bangkok, AP-Singapore, CN-Hong Kong, and CN North-Beijing4
-
-
Resource Pools
-
When you use ModelArts for AI development, you may need some CPUs, GPUs, or Ascend resources for training or inference. ModelArts provides pay-per-use public resource pools and queue-free dedicated resource pools to meet diverse service requirements.
Public Resource Pools
-
Public resource pools provide large-scale public computing clusters, which are allocated based on job parameter settings. Resources are isolated by job. Public resource pools are billed based on resource flavors, service duration, and the number of instances, regardless of tasks (training, deployment, or development) for which the pools are used. Public resource pools are available by default. You can select a public resource pool during AI development.
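Since public resource pool billing multiplies the flavor rate, the service duration, and the instance count, a cost estimate is simple arithmetic. The rate below is hypothetical; actual prices vary by region and flavor:

```python
def public_pool_cost(price_per_hour, hours, instance_count):
    """Estimate public resource pool cost: flavor rate x duration x instances.

    price_per_hour is a hypothetical per-instance rate for one flavor.
    """
    return price_per_hour * hours * instance_count

# e.g. 2 training instances of a flavor billed at $1.50/hour, running 3 hours
cost = public_pool_cost(1.50, 3, 2)  # 9.0
```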
-
Dedicated Resource Pools
-
Dedicated resource pools provide dedicated compute resources, which can be used for notebook instances, training jobs, and model deployment. You do not need to queue for resources when using a dedicated resource pool, resulting in greater efficiency. To use a dedicated resource pool, buy one and select it during AI development.
Released in: AP-Bangkok, AP-Singapore, CN-Hong Kong, and CN North-Beijing4
-
-
AI Gallery
-
AI Gallery is a ModelArts-empowered developer ecosystem community. In this community, scientific research institutions, AI application developers, solution integrators, enterprises, and individual developers can share and purchase AI assets such as algorithms. This accelerates the development and implementation of AI assets and enables every participant in the AI development ecosystem to achieve business value.
If you are a subscriber, search for your desired AI assets in AI Gallery and view asset details. Then, subscribe to the assets that meet your service requirements and push them to ModelArts.
If you are a publisher, publish your AI assets to the AI Gallery for sharing.
Released in: AP-Singapore
Sharing and Subscribing to Algorithms
-
The asset market of AI Gallery allows you to share and subscribe to AI algorithms. As a subscriber, you can search for and subscribe to desired algorithms in AI Gallery and push them to ModelArts for building models. As a publisher, you can share the AI algorithms developed locally or in ModelArts to AI Gallery.
-
-
ModelArts SDKs
-
ModelArts Software Development Kits (ModelArts SDKs) encapsulate ModelArts REST APIs in Python to simplify application development. You can directly call ModelArts SDKs to easily start AI training, generate models, and deploy the models as real-time services.
In notebook instances, you can use ModelArts SDKs to manage OBS data, training jobs, models, and real-time services without authentication.
Released in: AP-Bangkok, AP-Singapore, CN-Hong Kong, and CN North-Beijing4
-
Ascend Ecosystem
-
ModelArts provides continuous support for the Ascend AI ecosystem. It features multiple built-in neural network algorithms that support Ascend series chips. These algorithms are carefully optimized by Huawei ModelArts experts and deliver high precision and efficiency.
The Ascend 910 series are suitable for training image classification and object detection models. The Ascend 310 series are suitable for high-performance inference scenarios in the deep learning field, such as image classification, object detection, image segmentation, and NLP.
Models developed on MindSpore (a Huawei-developed AI framework) can be directly trained and inferred on the ModelArts platform.
Provides built-in algorithms and uses Ascend chips for both training and inference
-
Ascend 310 and Ascend 910 are Huawei-developed cloud AI chips. Ascend 310 focuses on ultra-high computing efficiency with low power consumption; leveraging these traits, ModelArts supports Ascend 310 for high-performance inference. Ascend 910 is a more powerful yet compact training chip; ModelArts supports Ascend 910 for training and provides algorithms developed using the MindSpore engine.
ModelArts provides built-in algorithms. You can use Ascend 910 to train models and Ascend 310 to deploy services.
-
Supports MindSpore
-
MindSpore is an all-scenario AI computing framework. It can significantly reduce training time and cost (development), run with fewer resources at a higher energy efficiency (running), and adapt to all scenarios across device, edge, and cloud (deployment).
In the training and development environments of ModelArts, the MindSpore framework can be used to build models.
-
-
Free Trial
-
Grab free ModelArts compute resources for a limited time. Experience ModelArts at the lowest cost, including ExeML, the development environment (notebook), and the end-to-end AI development process.
-
VPC Peering Connection
-
What is a VPC peering connection?
A Virtual Private Cloud (VPC) provides an isolated virtual network environment that you configure and manage yourself for resources such as cloud servers, cloud containers, and cloud databases. It improves the security of your cloud resources and simplifies network deployment.
In a VPC, you can define network features such as security groups, VPNs, IP address ranges, and bandwidth. You can conveniently manage and configure internal networks and make secure, quick network changes. You can also customize the access rules for Elastic Cloud Servers within and between security groups to strengthen their protection.
Released in all regions except AP-Bangkok, AP-Singapore, and LA-Santiago.
-