ModelArts Studio (MaaS) Usage
MaaS offers a complete toolchain for creating foundation models using Ascend compute. It comes with ready-to-use popular open-source foundation models and helps with tasks like data production, fine-tuning, prompt engineering, and application integration. It is designed for users who need to develop production-ready models using a MaaS platform.
Context
AI models now play a vital role in driving enterprise digital transformation due to their advanced abilities in understanding language, generating content, and making decisions. Many businesses aim to improve operations using foundation models for tasks like customer support, data analytics, and automatic reporting. However, they encounter three main hurdles when they train or fine-tune foundation models: expensive compute, complicated technical demands, and integrating these systems into existing workflows. Since most companies lack skilled AI teams, building and refining models from scratch proves challenging. This often leads to slow deployments and project failures.
To tackle these challenges, MaaS offers a one-stop solution:
- Toolchain: Offers a visual training platform that simplifies model customization for businesses, requiring minimal AI expertise.
- Resource sharing: Cloud compute lets companies share compute capacity and reuse pretrained models, cutting duplicate expenses and lowering overall compute costs.
- Scenario adaptation: Preset model templates tailored for specific industries help speed up the deployment of enterprise AI applications.
Use Cases
This section describes MaaS use cases:
- Integration of leading open-source models
MaaS integrates leading open-source models, including DeepSeek. All models are fully adapted and optimized for AI Cloud Service, resulting in improved accuracy and performance. You no longer need to build models from scratch; instead, you can choose suitable pre-trained models for direct application, reducing the workload of model integration.
- Easy access to resources, pay-per-use billing, scalability, fault recovery, and resumable training
Enterprises must balance the model's performance with its costs and real-world feasibility when integrating AI models into their systems.
MaaS enables flexible model development, leveraging the Ascend Cloud compute backbone to streamline model usage in customer applications.
It allows you to scale resources on demand, with pay-per-use billing. This minimizes wasted resources and makes AI more accessible by lowering initial investment.
The architecture prioritizes high availability through redundant data centers, so your work is backed up at all times. If a fault occurs, the system fails over to a standby system, keeping your project on track without interruption or loss of time and resources.
- Application development, helping you build applications quickly
In enterprises, complex project-level tasks often require understanding the task, breaking it down into multiple decision-making questions, and then calling various subsystems to execute. MaaS uses advanced open-source models to accurately grasp business goals, break down tasks, and create multiple solutions. It helps businesses quickly and intelligently build and deploy LLM-powered applications.
Supported Region
MaaS is available only in the Hong Kong region.
Usage Process
| Module | Step | Operation | Document |
|---|---|---|---|
| Authorization | Configuring access authorization | All users (including individual users) can use MaaS only after agency authorization on ModelArts. Otherwise, unexpected errors may occur. | |
| Real-time inference service | Checking built-in models in the Model Square | ModelArts Studio provides various open-source models. You can check them on the Model Square page. The model details page shows all necessary information, so you can choose suitable models for training and inference to incorporate into your enterprise systems. | |
| Real-time inference service | Deploying model services | Deploy built-in models from Model Square on compute resources so that you can call the models in Model Experience or other service environments. | |
| API calls | Calling model services | After a model is deployed, call the model service from other service environments for prediction. | |
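As a rough illustration of the final step, many MaaS-style platforms expose deployed model services through an OpenAI-compatible chat-completions endpoint. The sketch below, using only the Python standard library, shows how such a call could be assembled. The endpoint URL, API key, and model name are placeholders, not real values; copy the actual ones from your deployed service's details page.

```python
# Hypothetical sketch of calling a deployed model service.
# API_URL, API_KEY, and the model name are placeholders -- replace them
# with the values shown for your own deployed service.
import json
import urllib.request

API_URL = "https://example-maas-endpoint/v1/chat/completions"  # placeholder
API_KEY = "your-api-key"  # placeholder

def build_request(prompt: str, model: str = "DeepSeek-R1") -> urllib.request.Request:
    """Build an OpenAI-style chat-completions request for the service."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.7,
    }
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {API_KEY}",
        },
        method="POST",
    )

# To actually call the service (requires valid credentials and endpoint):
# with urllib.request.urlopen(build_request("Summarize this report.")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Separating request construction from the network call keeps the payload easy to inspect and test before any credentials or endpoints are wired in.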