Obtaining Details About a Service
You can use the API to obtain details about a service object.
Sample Code
In a ModelArts notebook, you do not need to enter authentication parameters for session authentication. For details about session authentication in other development environments, see Session Authentication.
- Method 1: Obtain details about a service object created in Deploying a Real-Time Service.
```python
from modelarts.session import Session
from modelarts.model import Predictor

session = Session()
predictor_instance = Predictor(session, service_id="your_service_id")
predictor_info = predictor_instance.get_service_info()
print(predictor_info)
```
- Method 2: Obtain details about a service based on the service object returned in Obtaining Service Objects.
```python
from modelarts.session import Session
from modelarts.model import Predictor

session = Session()
predictor_object_list = Predictor.get_service_object_list(session)
predictor_instance = predictor_object_list[0]
predictor_info = predictor_instance.get_service_info()
print(predictor_info)
```
Parameters
| Parameter | Type | Description |
|---|---|---|
| service_id | String | Service ID |
| service_name | String | Service name |
| description | String | Service description |
| tenant | String | Tenant to whom a service belongs |
| project | String | Project to which a service belongs |
| owner | String | User to whom a service belongs |
| publish_at | Number | Latest service publishing time, in milliseconds since 1970-01-01 00:00:00 UTC |
| infer_type | String | Inference mode. The value can be real-time or batch. |
| vpc_id | String | ID of the VPC to which a service instance belongs. This parameter is returned when the network configuration is customized. |
| subnet_network_id | String | ID of the subnet where a service instance resides. This parameter is returned when the network configuration is customized. |
| security_group_id | String | Security group to which a service instance belongs. This parameter is returned when the network configuration is customized. |
| status | String | Service status. The value can be running, deploying, concerning, failed, stopped, or finished. |
| error_msg | String | Error message. When status is failed, the deployment failure cause is returned. |
| config | Array of config objects (structure depends on infer_type; see the tables below) | Service configurations. If a service is shared, only model_id, model_name, and model_version are returned. |
| access_address | String | Access address of an inference request. This parameter is returned when infer_type is set to real-time. |
| invocation_times | Number | Total number of service calls |
| failed_times | Number | Number of failed service calls |
| is_shared | Boolean | Whether a service is subscribed |
| shared_count | Number | Number of subscriptions |
| progress | Integer | Deployment progress. This parameter is returned when status is deploying. |
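To make the table above concrete, the following sketch shows a hypothetical service-info payload as a plain dict. The keys follow the parameter table; every value is an invented placeholder, not real output of `get_service_info()`.

```python
# Illustrative example only: a hypothetical payload whose keys follow the
# parameter table above. Real values come from predictor_instance.get_service_info().
service_info = {
    "service_id": "your_service_id",        # placeholder
    "service_name": "service-demo",
    "description": "demo real-time service",
    "infer_type": "real-time",
    "status": "running",
    "publish_at": 1585809231902,            # milliseconds since the Unix epoch
    "access_address": "https://example.com/v1/infers/your_service",  # placeholder URL
    "invocation_times": 50,
    "failed_times": 1,
    "is_shared": False,
    "shared_count": 0,
}

# A caller typically checks the status field before sending inference requests;
# access_address is only meaningful for real-time services.
if service_info["status"] == "running" and service_info["infer_type"] == "real-time":
    print("service is reachable at", service_info["access_address"])
```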
config parameters when infer_type is real-time:

| Parameter | Type | Description |
|---|---|---|
| model_id | String | Model ID. You can obtain the value by calling the API described in Obtaining Models or from the ModelArts management console. |
| model_name | String | Model name |
| model_version | String | Model version |
| source_type | String | Model source |
| status | String | Running status of a model instance |
| weight | Integer | Traffic weight allocated to a model |
| specification | String | Resource flavor. The value can be modelarts.vm.cpu.2u, modelarts.vm.gpu.p4, or modelarts.vm.ai1.a310. |
| envs | Map<String, String> | Environment variable key-value pair required for running a model |
| instance_count | Integer | Number of instances deployed in a model |
| scaling | Boolean | Whether auto scaling is enabled |
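A sketch of one entry of the config array for a real-time service may help; the keys come from the table above and all values are hypothetical placeholders. The assumption that model weights sum to 100 across entries reflects weight being a traffic percentage.

```python
# Illustrative only: the "config" array of a real-time service, with keys
# taken from the table above. All values are hypothetical placeholders.
realtime_config = [
    {
        "model_id": "your_model_id",            # placeholder, see Obtaining Models
        "model_name": "mnist",
        "model_version": "1.0.0",
        "weight": 100,                          # traffic weight allocated to this model
        "specification": "modelarts.vm.cpu.2u",
        "envs": {"LOG_LEVEL": "info"},          # environment variables for the model
        "instance_count": 1,
        "scaling": False,
    }
]

# When traffic is split across several model versions, their weights
# together cover 100% of requests.
total_weight = sum(entry["weight"] for entry in realtime_config)
print("total traffic weight:", total_weight)
```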
config parameters when infer_type is batch:

| Parameter | Type | Description |
|---|---|---|
| model_id | String | Model ID. You can obtain the value by calling the API described in Obtaining Models or from the ModelArts management console. |
| model_name | String | Model name |
| model_version | String | Model version |
| specification | String | Resource flavor. The value can be modelarts.vm.cpu.2u or modelarts.vm.gpu.p4. |
| envs | Map<String, String> | Environment variable key-value pair required for running a model |
| instance_count | Integer | Number of instances deployed in a model |
| src_path | String | OBS path of the input data of a batch job |
| dest_path | String | OBS path of the output data of a batch job |
| req_uri | String | Inference path of a batch job |
| mapping_type | String | Mapping type of the input data. The value can be file or csv. |
| mapping_rule | Map | Mapping between input parameters and CSV data. This parameter is returned only when mapping_type is set to csv. |
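The batch config can be sketched the same way. Keys follow the table above; every value is a hypothetical placeholder, and the mapping_rule body is left empty because its structure depends on the model's input schema.

```python
# Illustrative only: one entry of the "config" array for a batch service.
# Keys follow the table above; all values are hypothetical placeholders.
batch_config = [
    {
        "model_id": "your_model_id",                     # placeholder
        "model_name": "mnist",
        "model_version": "1.0.0",
        "specification": "modelarts.vm.cpu.2u",
        "envs": {},
        "instance_count": 1,
        "src_path": "obs://your-bucket/batch-input/",    # OBS path of input data
        "dest_path": "obs://your-bucket/batch-output/",  # OBS path of output data
        "req_uri": "/",                                  # inference path of the batch job
        "mapping_type": "csv",
        # mapping_rule is returned only when mapping_type is "csv"; its
        # structure depends on the model's input, so an empty dict stands in.
        "mapping_rule": {},
    }
]

# Per the table, mapping_rule accompanies csv mapping only.
for entry in batch_config:
    if entry["mapping_type"] == "csv":
        assert "mapping_rule" in entry
```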