Viewing a Built-in Model in ModelArts Studio (MaaS)
ModelArts Studio (MaaS) provides a range of open-source models that you can browse on the Model Square. Each model's details page shows the information you need to choose a suitable model for training or inference and integrate it into your enterprise systems.
Prerequisites
You have registered a Huawei account and enabled Huawei Cloud services.
Accessing the Model Square
- Log in to the ModelArts Studio (MaaS) console and select the target region on the top navigation bar.
- In the navigation pane on the left, choose Model Square.
- In the Model Filtering area on the Model Square page, filter models by type, context length, advanced features, series, and supported jobs, or search by model name.
For details about model series, see Supported Models.
Table 1 Model filtering

| Filter Criteria | Description |
|---|---|
| Type | Filter models by type, for example, text generation. |
| Context Length | Filter models by context length: 64K, 32K, 16K, or no more than 8K. |
| Advanced Capabilities | Filter models by advanced capabilities, such as deep thinking. |
| Model | Filter models by series, for example, DeepSeek and Qwen. |
| Supported Job Types | Filter models by job type, for example, deployment. |
- Perform the following operations on the target model card on the Model Square page:
- Hover over the model card to view the operation buttons. Click them as needed. For details about how to deploy a model service, see Deploying a Model Service in ModelArts Studio (MaaS).
- Click the model card. On the model details page that is displayed, view the model introduction, basic information, and version information. In the upper right corner of the page, you can click buttons as needed to use the model for training or inference.
- The upper-right corner shows only the operations that the model supports; the available options vary by model.
- If the model involves billing, the billing information is displayed in the Basic Information tab.
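After you deploy a model service, it can be called through a chat-completions API. The sketch below shows how such a request payload might be assembled; note that `BASE_URL`, `API_KEY`, and the model name are placeholder assumptions for illustration only. Take the real endpoint, key, and model ID from your deployed service's details page in the console.

```python
import json

# Placeholder assumptions -- replace with the values shown on your
# model service details page in the ModelArts Studio (MaaS) console.
BASE_URL = "https://example-maas-endpoint/v1/chat/completions"  # hypothetical
API_KEY = "your-api-key"  # hypothetical

def build_chat_request(model: str, prompt: str, stream: bool = False) -> dict:
    """Build an OpenAI-style chat-completions payload for a deployed model service."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": stream,
    }

payload = build_chat_request("DeepSeek-R1", "Summarize the benefits of MoE models.")
print(json.dumps(payload, indent=2))

# To actually send the request, you could use, for example:
#   requests.post(BASE_URL,
#                 headers={"Authorization": f"Bearer {API_KEY}"},
#                 json=payload)
```

This is only a sketch of the request shape; consult the service's API documentation for the authoritative endpoint format and authentication scheme.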
Supported Models
The table below lists the models supported by MaaS. For details about the models, go to the model details page.
| Model Series | Model | Type | Use Case | Supported Language | Supported Region | Model Introduction |
|---|---|---|---|---|---|---|
| DeepSeek | DeepSeek-R1 | Text generation | Q&A and text generation inference | Chinese and English | CN-Hong Kong | DeepSeek-R1 uses advanced technology for long-context understanding and fast inference. The model supports multimodal interactions and API integrations. It enhances applications such as intelligent customer service and data analytics, offering strong cost-effectiveness for enterprise intelligent upgrades. |
| DeepSeek | DeepSeek-V3 | Text generation | Q&A and translation | Chinese and English | CN-Hong Kong | DeepSeek-V3 is a strong MoE language model. It uses an auxiliary-loss-free load balancing strategy and a multi-token prediction objective for better performance. |
| DeepSeek | DeepSeek-V3.1 | Text generation | Q&A | Chinese and English | CN-Hong Kong | DeepSeek-V3.1 is a hybrid model that supports both thinking and non-thinking modes. It matches DeepSeek-R1-0528's performance while offering faster responses and better tool use. |
| DeepSeek | DeepSeek-R1-Distill-Qwen-14B | Text generation | Q&A and text generation inference | Chinese and English | CN-Hong Kong | Distilled from DeepSeek-R1 outputs onto Qwen-14B, this model matches the capabilities of OpenAI o1-mini. DeepSeek-R1 performs similarly to OpenAI-o1 in math, coding, and inference tasks. |
| DeepSeek | DeepSeek-R1-Distill-Qwen-32B | Text generation | Q&A and text generation inference | Chinese and English | CN-Hong Kong | Distilled from DeepSeek-R1 outputs onto Qwen-32B, this model matches the capabilities of OpenAI o1-mini. DeepSeek-R1 performs similarly to OpenAI-o1 in math, coding, and inference tasks. |
| DeepSeek | DeepSeek-Coder | Text generation | Q&A and text inference | Chinese and English | CN-Hong Kong | DeepSeek-Coder comprises a series of code language models, each trained from scratch on 2 trillion tokens, 87% of which are code and 13% English or Chinese natural-language text. It excels in multiple programming languages and leads various benchmarks among open-source code models. |
| Qwen | QwQ | Text generation | Q&A | English | CN-Hong Kong | QwQ is the inference model of the Tongyi (Qwen) series. Unlike standard instruction-tuned models, QwQ excels at thinking and reasoning, delivering better results on complex tasks. |
| Qwen2.5 | Qwen2.5 | Text generation | Multilingual processing, mathematical inference, and Q&A | Chinese and English | CN-Hong Kong | Qwen2.5 is the latest addition to Alibaba Cloud's Qwen series of LLMs, offering base and instruction-tuned language models spanning 0.5 billion to 72 billion parameters. |
| Qwen2.5 | Qwen2.5-VL | Image understanding | Image understanding and Q&A | Chinese and English | CN-Hong Kong | Qwen2.5-VL is an open-source multimodal vision-language model created by Alibaba Cloud's Qwen team. It excels in visual and language understanding. |
| Qwen3 | Qwen3 | Text generation | Q&A | Chinese and English | CN-Hong Kong | The Qwen3 series includes LLMs and multimodal models created by the Qwen team. These models are trained on large volumes of language and multimodal data and then fine-tuned on high-quality datasets. |
| Kimi | Kimi-K2 | Text generation | Q&A | Chinese and English | CN-Hong Kong | Kimi K2 is a state-of-the-art MoE language model with 32 billion activated parameters and 1 trillion total parameters. Trained with the Muon optimizer, it performs exceptionally well in frontier knowledge, reasoning, and coding tasks, with enhanced agentic capabilities. |