Obtaining Inference Service Information
Description
This API is used to obtain information about ModelArts real-time inference services. It is called to obtain the inference service information when source is set to custom_from_modelarts_v2 during the creation of a custom endpoint.
Constraints
This API is supported only in the CN-Hong Kong region.
URI
GET /v1/{project_id}/maas/services/custom-endpoint/services/{region_id}?workspace_id={workspace_id}
Path Parameters
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| project_id | Yes | String | Definition: Project ID. For details about how to obtain the project ID, see Obtaining a Project ID and Name. Constraints: N/A. Range: N/A. Default Value: N/A. |
| region_id | Yes | String | Definition: Region information of ModelArts real-time services. For details about how to obtain the value, see Obtaining Region Information. Constraints: N/A. Range: N/A. Default Value: N/A. |
Query Parameters
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| workspace_id | No | String | Definition: ID of the workspace whose resources are to be queried. If no value is specified, the default workspace is queried. For details about how to obtain the value, see Obtaining Workspace Information. Constraints: N/A. Range: N/A. Default Value: N/A. |
Request Parameters
| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| X-Auth-Token | Yes | String | Definition: User token. The token can be obtained by calling the IAM API used to obtain a user token. The value of X-Subject-Token in the response header is the user token. For details, see Authentication. Constraints: N/A. Range: N/A. Default Value: N/A. |
| Content-Type | Yes | String | Definition: Type of the message body. The value is fixed to application/json. Constraints: N/A. Range: N/A. Default Value: N/A. |
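The URI and headers above can be assembled as follows. This is a minimal sketch: the endpoint host, project ID, region ID, workspace ID, and token values are placeholders, not real values.

```python
from urllib.parse import quote
from urllib.request import Request

# Hypothetical values for illustration; substitute your own project ID,
# region ID, workspace ID, and IAM token.
project_id = "0a1b2c3d4e5f"
region_id = "ap-southeast-1"
workspace_id = "0"
token = "MIIabc-truncated"

# The endpoint host is an assumption; use the service endpoint for your region.
base = "https://example-endpoint"
url = (
    f"{base}/v1/{quote(project_id)}/maas/services/custom-endpoint"
    f"/services/{quote(region_id)}?workspace_id={quote(workspace_id)}"
)

# Both headers are mandatory; Content-Type is fixed to application/json.
request = Request(
    url,
    headers={"X-Auth-Token": token, "Content-Type": "application/json"},
    method="GET",
)
print(request.full_url)
```

Sending the request (for example with `urllib.request.urlopen(request)`) returns the response body described in the next section.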
Response Parameters
| Parameter | Type | Description |
|---|---|---|
| data | Array[InferServerInfo] | Definition: Inference service information. Range: N/A. |
| pages | Integer | Definition: Total number of pages. Range: N/A. |
| total | Integer | Definition: Total number of inference information records. Range: N/A. |
InferServerInfo (elements of data)
| Parameter | Type | Description |
|---|---|---|
| id | String | Definition: Inference service ID. Range: N/A. |
| name | String | Definition: Inference service name. Range: N/A. |
| status | String | Definition: Inference service status. Range: N/A. |
| version | String | Definition: Version. Range: N/A. |
| version_count | Integer | Definition: Total number of versions. Range: N/A. |
| description | String | Definition: Inference service description. Range: N/A. |
| type | String | Definition: Inference service type. Range: N/A. |
| deploy_type | String | Definition: Deployment type. Range: N/A. |
| user_name | String | Definition: Username. Range: N/A. |
| workspace_id | String | Definition: Workspace ID. Range: N/A. |
| create_at | Long | Definition: Creation time, in milliseconds. Range: N/A. |
| update_at | Long | Definition: Update time, in milliseconds. Range: N/A. |
| auth_type | String | Definition: Authentication type. Range: N/A. |
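The InferServerInfo record above can be modeled as a small data class. This is a sketch based on the table and the sample response: field names come from the table, and create_at/update_at are millisecond timestamps in the example, so they are typed as int here.

```python
from dataclasses import dataclass


@dataclass
class InferServerInfo:
    """One inference service record from the data array."""
    id: str
    name: str
    status: str
    version: str
    version_count: int
    description: str
    type: str
    deploy_type: str
    user_name: str
    workspace_id: str
    create_at: int  # creation time, milliseconds
    update_at: int  # update time, milliseconds
    auth_type: str

    @classmethod
    def from_dict(cls, raw: dict) -> "InferServerInfo":
        # Keep only the documented keys so undocumented fields are ignored.
        known = set(cls.__dataclass_fields__)
        return cls(**{k: v for k, v in raw.items() if k in known})
```

`from_dict` drops any keys not listed in the table, which keeps the parser tolerant of additive API changes.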
Error Response Parameters
| Parameter | Type | Description |
|---|---|---|
| error_msg | String | Definition: Error description. Range: N/A. |
| error_code | String | Definition: Error code, indicating the error type. Range: N/A. |
Request Example
GET /v1/{project_id}/maas/services/custom-endpoint/services/{region_id}?workspace_id={workspace_id}
Response Example
- Success response. Status code: 200.
{
  "data": [
    {
      "id": "add6b9f8-7e97-4f1c-8816-************",
      "name": "dpsk-v3_2-*****",
      "status": "RUNNING",
      "version": "0.0.9",
      "version_count": 10,
      "description": "DeepSeek-V3.2-EXP",
      "type": "REAL_TIME",
      "deploy_type": "MULTI",
      "user_name": "*****",
      "workspace_id": "0",
      "create_at": 1760538261106,
      "update_at": 1765711080510,
      "auth_type": "NONE"
    }
  ],
  "pages": 1,
  "total": 1
}
- Error response. Status code: 400.
{
  "error_msg": "Invalid token.",
  "error_code": "ModelArts.0104"
}
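A caller can distinguish the two response shapes by checking for error_code. The sketch below parses a payload shaped like the examples above; the field values are hypothetical sample data, not real service records.

```python
import json

# A payload shaped like the success example above, with hypothetical values.
success_body = """
{
  "data": [{"id": "add6b9f8", "name": "demo", "status": "RUNNING",
            "version": "0.0.9", "version_count": 10,
            "description": "", "type": "REAL_TIME", "deploy_type": "MULTI",
            "user_name": "user", "workspace_id": "0",
            "create_at": 1760538261106, "update_at": 1765711080510,
            "auth_type": "NONE"}],
  "pages": 1,
  "total": 1
}
"""

def summarize(body: str) -> list[str]:
    """Return 'name (status)' for each service, or raise on an error payload."""
    payload = json.loads(body)
    if "error_code" in payload:
        # Error responses carry error_code and error_msg instead of data.
        raise RuntimeError(f"{payload['error_code']}: {payload['error_msg']}")
    return [f"{svc['name']} ({svc['status']})" for svc in payload.get("data", [])]

print(summarize(success_body))  # ['demo (RUNNING)']
```

The pages and total fields can be read the same way to drive pagination when the workspace contains many services.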
Status Codes
For details, see Status Codes.
Error Codes
For details, see Error Codes.