Notes and Constraints
This section describes the specification restrictions, quota limits, and other constraints on using ModelArts.
Specifications Restrictions
| Resource Type | Specifications | Description |
|---|---|---|
| Compute resources | All compute resource specifications in pay-per-use, yearly/monthly, and package modes, including CPU, GPU, and NPU | Purchased compute resources of any type cannot be used across regions. |
| Compute resources | Package | Packages are used only for public resource pools and cannot be used for dedicated resource pools. |
Quota Limits
You can log in to the console to view default quotas. For details, see Viewing Quotas.
| Resource Type | Default Quota | Adjustable | Description |
|---|---|---|---|
| ModelArts Standard notebook instance | A maximum of 10 notebook instances can be created under one account. | No | For more information, see Creating a Notebook Instance. |
| ModelArts Standard real-time service | A maximum of 20 real-time services can be created under one account. | Yes. Submit a service ticket to increase the quota. | For more information, see Deploying a Model as a Real-Time Service. |
| ModelArts Standard batch service | A maximum of 1,000 batch services can be created under one account. | No | For more information, see Deploying a Model as a Batch Inference Service. |
| ModelArts Standard edge service | A maximum of 1,000 edge services can be created under one account. | No | None |
| ModelArts Standard dedicated resource pool | A maximum of 50 dedicated resource pools can be created under one account. | Yes. Submit a service ticket to increase the quota. | For more information, see Creating a Standard Dedicated Resource Pool. |
| ModelArts Standard tag | A maximum of 20 tags can be added to a training job, notebook instance, or real-time service. | No | For more information, see How Does ModelArts Use Tags to Manage Resources by Group? |
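If you want to check planned resource counts against these defaults programmatically, the sketch below encodes the quota values from the table above in a small Python helper. The dictionary keys and the `within_quota` function are illustrative assumptions, not part of any ModelArts SDK or API; the actual quotas for your account should be confirmed on the console (see Viewing Quotas).

```python
# Illustrative only: these values mirror the default quotas in the table above.
# The helper is a hypothetical local pre-check, not a ModelArts API call.
DEFAULT_QUOTAS = {
    "notebook_instance": 10,
    "real_time_service": 20,
    "batch_service": 1000,
    "edge_service": 1000,
    "dedicated_resource_pool": 50,
    "tags_per_resource": 20,
}

def within_quota(resource_type: str, existing: int, to_create: int = 1,
                 quotas: dict = DEFAULT_QUOTAS) -> bool:
    """Return True if creating `to_create` more resources stays within the quota."""
    return existing + to_create <= quotas[resource_type]

# Example: an account with 18 real-time services can deploy 2 more, but not 3.
print(within_quota("real_time_service", existing=18, to_create=2))  # True
print(within_quota("real_time_service", existing=18, to_create=3))  # False
```

Note that adjustable quotas (real-time services and dedicated resource pools) can be raised by submitting a service ticket, so any such local check should use your account's current limits rather than the defaults.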
Constraints
| Item | Constraints |
|---|---|
| ModelArts Standard dedicated resource pool | |
| ModelArts Standard notebook instance | |
| ModelArts Standard training job | |
| ModelArts Standard inference model | |
| ModelArts Standard inference service | |
| ModelArts Lite Server | |
| ModelArts Lite Cluster | |
| Interaction between ModelArts and OBS | |