Updated on 2024-03-21 GMT+08:00

Obtaining Details About a Service

You can use the API to obtain details about a service object.

Sample Code

In ModelArts notebook, you do not need to enter authentication parameters for session authentication. For details about session authentication of other development environments, see Session Authentication.

  • Method 1: Obtain details about a service object created in Deploying a Real-Time Service.
    from modelarts.session import Session
    from modelarts.model import Predictor
    
    session = Session()
    predictor_instance = Predictor(session, service_id="your_service_id")
    predictor_info = predictor_instance.get_service_info()
    print(predictor_info)
    
  • Method 2: Obtain details about a service based on the service object returned in Obtaining Service Objects.
    from modelarts.session import Session
    from modelarts.model import Predictor
    
    session = Session()
    predictor_object_list = Predictor.get_service_object_list(session)
    predictor_instance = predictor_object_list[0]  # take the first service object as an example
    predictor_info = predictor_instance.get_service_info()
    print(predictor_info)
    

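Method 2 returns every service object, so when more than one service exists you typically select by name rather than taking the first element. A minimal sketch, assuming each object's get_service_info() returns a dict containing the service_name field described in Table 1; find_by_name is an illustrative helper, not part of the SDK:

```python
def find_by_name(infos, name):
    """Return the first service info dict whose service_name matches, else None."""
    return next((info for info in infos if info.get("service_name") == name), None)

# Example with dicts shaped like the get_service_info response in Table 1.
infos = [
    {"service_id": "a1", "service_name": "demo-svc", "status": "running"},
    {"service_id": "b2", "service_name": "batch-svc", "status": "stopped"},
]
match = find_by_name(infos, "batch-svc")
print(match["service_id"])  # b2
```

In practice you would build the list of info dicts by calling get_service_info() on each object returned by Predictor.get_service_object_list(session).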
Parameters

Table 1 get_service_info response parameters

| Parameter | Type | Description |
|---|---|---|
| service_id | String | Service ID |
| service_name | String | Service name |
| description | String | Service description |
| tenant | String | Tenant to which the service belongs |
| project | String | Project to which the service belongs |
| owner | String | User to whom the service belongs |
| publish_at | Number | Latest service publishing time, in milliseconds since 1970-01-01 00:00:00 UTC |
| infer_type | String | Inference mode. The value can be real-time or batch. |
| vpc_id | String | ID of the VPC to which a service instance belongs. Returned only when the network configuration is customized. |
| subnet_network_id | String | ID of the subnet where a service instance resides. Returned only when the network configuration is customized. |
| security_group_id | String | Security group to which a service instance belongs. Returned only when the network configuration is customized. |
| status | String | Service status. The value can be running, deploying, concerning, failed, stopped, or finished. |
| error_msg | String | Error message. When status is failed, the deployment failure cause is returned. |
| config | config array corresponding to infer_type (see Table 2 for real-time, Table 3 for batch) | Service configurations. If a service is shared, only model_id, model_name, and model_version are returned. |
| access_address | String | Access address of an inference request. Returned only when infer_type is set to real-time. |
| invocation_times | Number | Total number of service calls |
| failed_times | Number | Number of failed service calls |
| is_shared | Boolean | Whether the service is subscribed (shared) |
| shared_count | Number | Number of subscriptions |
| progress | Integer | Deployment progress. Returned only when status is deploying. |
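Because status passes through deploying before reaching a stable value, a caller usually waits for the service to leave the deployment phase before sending inference requests. A minimal sketch, assuming get_service_info() returns a dict with the status and progress fields above; deployment_finished is an illustrative helper, not an SDK method, and the set of states treated as terminal is an assumption based on the status values in Table 1:

```python
# States in which deployment is no longer in progress (assumed from Table 1).
TERMINAL_STATES = {"running", "failed", "stopped", "finished"}

def deployment_finished(info):
    """True once the service has left the deploying phase."""
    return info.get("status") in TERMINAL_STATES

print(deployment_finished({"status": "deploying", "progress": 40}))  # False
print(deployment_finished({"status": "running"}))                    # True
```

A polling loop would call predictor_instance.get_service_info() periodically until this returns True, then inspect error_msg if status is failed.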

Table 2 config parameters corresponding to real-time

| Parameter | Type | Description |
|---|---|---|
| model_id | String | Model ID. You can obtain the value by calling the API described in Obtaining Models or from the ModelArts management console. |
| model_name | String | Model name |
| model_version | String | Model version |
| source_type | String | Model source. Returned only when the model is created by an ExeML project; the value is auto. |
| status | String | Running status of a model instance. The value can be ready (all instances have been started), concerning (partially ready: some instances are started and some are not), or notReady (no instances have been started). |
| weight | Integer | Traffic weight allocated to a model |
| specification | String | Resource flavor. The value can be modelarts.vm.cpu.2u, modelarts.vm.gpu.p4, or modelarts.vm.ai1.a310. |
| envs | Map<String, String> | Environment variable key-value pairs required for running the model |
| instance_count | Integer | Number of instances deployed for a model |
| scaling | Boolean | Whether auto scaling is enabled |
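When a real-time service runs several model versions, each config entry carries a weight that splits inference traffic among them. A minimal validation sketch, under the assumption that the weights are percentages expected to total 100 (validate_weights is illustrative, not part of the SDK):

```python
def validate_weights(configs):
    """Check that the traffic weights of a real-time config array sum to 100."""
    total = sum(c.get("weight", 0) for c in configs)
    return total == 100

# config entries shaped like the rows of Table 2
configs = [
    {"model_id": "m1", "model_version": "1.0.0", "weight": 70},
    {"model_id": "m1", "model_version": "1.0.1", "weight": 30},
]
print(validate_weights(configs))  # True
```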

Table 3 config parameters corresponding to batch

| Parameter | Type | Description |
|---|---|---|
| model_id | String | Model ID. You can obtain the value by calling the API described in Obtaining Models or from the ModelArts management console. |
| model_name | String | Model name |
| model_version | String | Model version |
| specification | String | Resource flavor. The value can be modelarts.vm.cpu.2u or modelarts.vm.gpu.p4. |
| envs | Map<String, String> | Environment variable key-value pairs required for running the model |
| instance_count | Integer | Number of instances deployed for a model |
| src_path | String | OBS path of the input data of a batch job |
| dest_path | String | OBS path of the output data of a batch job |
| req_uri | String | Inference path of a batch job |
| mapping_type | String | Mapping type of the input data. The value can be file or csv. |
| mapping_rule | Map | Mapping between input parameters and CSV data. Returned only when mapping_type is set to csv. |
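The batch fields above have one cross-field dependency: mapping_rule only appears when mapping_type is csv. A minimal consistency-check sketch over a config dict shaped like Table 3 (validate_batch_config is an illustrative helper, not part of the SDK, and the sample paths are placeholders):

```python
def validate_batch_config(cfg):
    """Illustrative consistency checks on a batch config dict (fields from Table 3)."""
    errors = []
    # These fields are always needed to run a batch job.
    for key in ("model_id", "src_path", "dest_path"):
        if not cfg.get(key):
            errors.append(f"missing {key}")
    # mapping_rule is only meaningful (and returned) when mapping_type is csv.
    if cfg.get("mapping_type") == "csv" and not cfg.get("mapping_rule"):
        errors.append("mapping_type is csv but mapping_rule is absent")
    return errors

cfg = {"model_id": "m1", "src_path": "in/", "dest_path": "out/", "mapping_type": "csv"}
print(validate_batch_config(cfg))  # ['mapping_type is csv but mapping_rule is absent']
```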