
Deploying a Real-Time Service

Service deployment covers the following operations:

  • Initialize a real-time service.
  • Deploy a real-time service predictor.
  • Deploy a batch service transformer.

A service object (predictor or transformer) is returned after deployment. Its attributes include all the functions described in this chapter.

Sample Code

In a ModelArts notebook, you do not need to enter authentication parameters for session authentication. For details about session authentication in other development environments, see Session Authentication.
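
Outside a notebook, for example on a local machine, the session is typically created with explicit credentials. The following is a minimal sketch; the keyword arguments are assumed to match those on the Session Authentication page, and all values are placeholders:

    from modelarts.session import Session

    # Authenticate with an access key/secret key pair (all values are placeholders;
    # see Session Authentication for the exact parameters for your environment).
    session = Session(access_key='your_ak',
                      secret_key='your_sk',
                      project_id='your_project_id',
                      region_name='your_region')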

  • Method 1: Initialize the predictor that has been deployed as a real-time service.
    from modelarts.session import Session
    from modelarts.model import Predictor
    
    session = Session()
    predictor_instance = Predictor(session, service_id="your_service_id")
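
    The returned predictor can then be used to send inference requests. A minimal sketch follows; predict is the SDK's inference call, and your_request_data is a placeholder that depends on your model's input schema:

    # Send an inference request to the running service.
    # "your_request_data" is a placeholder; build it to match your model's inputs.
    predict_result = predictor_instance.predict(data=your_request_data, path="/")
    print(predict_result)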
    
  • Method 2: Deploy a real-time service predictor.
    • Deploy the service in a public resource pool.
      from modelarts.session import Session
      from modelarts.model import Model
      from modelarts.config.model_config import ServiceConfig, Schedule
      
      session = Session()
      model_instance = Model(session, model_id='your_model_id')
      vpc_id = None                                        # (Optional) ID of the VPC where the real-time service instance is deployed. This parameter is left blank by default.
      subnet_network_id = None                             # (Optional) Subnet ID. This parameter is left blank by default.
      security_group_id = None                             # (Optional) Security group. This parameter is left blank by default.
      configs = [ServiceConfig(model_id=model_instance.model_id,
                               weight="100",
                               instance_count=1,
                               specification="modelarts.vm.cpu.2u")]  # For details, see specification.
      predictor_instance = model_instance.deploy_predictor(
                  service_name="service_predictor_name",
                  infer_type="real-time",
                  vpc_id=vpc_id,
                  subnet_network_id=subnet_network_id,
                  security_group_id=security_group_id,
                  configs=configs,                       # predictor configuration parameter. For details, see configs.
            schedule=[Schedule(op_type='stop', time_unit='HOURS', duration=1)]        # (Optional) Specify the runtime duration of the real-time service.
      )
      

      The model_id parameter specifies the model that is to be deployed as a real-time service. Obtain the value by calling the API described in Obtaining Models or from the ModelArts management console.

    • Deploy the service in a dedicated resource pool.
      from modelarts.config.model_config import ServiceConfig

      configs = [ServiceConfig(model_id=model_instance.model_id,
                               weight="100",
                               instance_count=1,
                               specification="modelarts.vm.cpu.2u")]
      predictor_instance = model_instance.deploy_predictor(
                  service_name="your_service_name",
                  infer_type="real-time",
                  configs=configs,
                  cluster_id="your_dedicated_pool_id"
      )

    configs is defined by ServiceConfig in the SDK. configs is a list whose elements are ServiceConfig objects. The code is as follows, with a usage sketch after it:

    configs = []
    envs = {"model_name":"mxnet-model-1", "load_epoch":"0"}
    
    service_config1 = ServiceConfig(
            model_id="model_id1",                 # model_id1 and model_id2 must be the IDs of different versions of the same model.
            weight="70",
            specification="modelarts.vm.cpu.2u",  # For details, see specification.
            instance_count=2,
            envs=envs)                            # (Optional) Configure the environment variable, for example, envs = {"model_name":"mxnet-model-1", "load_epoch":"0"}.
    service_config2 = ServiceConfig(
            model_id='model_id2',
            weight="30",
            specification="modelarts.vm.cpu.2u",  # For details, see specification.
            instance_count=2,
            envs=envs)                            # (Optional) Configure the environment variable, for example, envs = {"model_name":"mxnet-model-1", "load_epoch":"0"}.
    configs.append(service_config1)
    configs.append(service_config2)
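
    The two weighted configurations are then passed to deploy_predictor; this reuses the deployment call shown above, and incoming prediction requests are split 70/30 between the two model versions:

    # Deploy both model versions behind a single real-time service.
    # Traffic is distributed according to the weights (70% and 30%).
    predictor_instance = model_instance.deploy_predictor(
                service_name="your_service_name",
                infer_type="real-time",
                configs=configs)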
    
  • Method 3: Deploy a batch service transformer.
    from modelarts.session import Session
    from modelarts.model import Model
    from modelarts.config.model_config import TransformerConfig
    
    session = Session()
    model_instance = Model(session, model_id='your_model_id')
    vpc_id = None                                        # (Optional) ID of the VPC where the batch service instance is deployed. This parameter is left blank by default.
    subnet_network_id = None                             # (Optional) Subnet ID. This parameter is left blank by default.
    security_group_id = None                             # (Optional) Security group. This parameter is left blank by default.
    
    transformer = model_instance.deploy_transformer(
            service_name="service_transformer_name",
            infer_type="batch",
            vpc_id=vpc_id,
            subnet_network_id=subnet_network_id,
            security_group_id=security_group_id,
            configs=configs                          # transformer configuration parameters (a list of TransformerConfig); see the configs definition below.
    )
    

    configs is defined by TransformerConfig in the SDK. configs is a list whose elements are TransformerConfig objects. The code is as follows:

    configs = []
    mapping_rule = None                               # (Optional) Mapping between input parameters and CSV data
    mapping_type= "file"                              # File or CSV
    envs = {"model_name":"mxnet-model-1", "load_epoch":"0"}
    
    transformer_config1 = TransformerConfig(
                model_id="model_id",
                specification="modelarts.vm.cpu.2u",   # For details, see specification.
                instance_count=2,
                src_path="/shp-cn4/sdk-demo/",         # OBS path to the input of the batch task, for example, /your_obs_bucket/src_path
                dest_path="/shp-cn4/data-out/",        # OBS path to the output of the batch task, for example, /your_obs_bucket/dest_path
                req_uri="/",
                mapping_type=mapping_type,
                mapping_rule=mapping_rule,
                envs=envs)                             # (Optional) Configure the environment variable, for example, envs = {"model_name":"mxnet-model-1", "load_epoch":"0"}.
    configs.append(transformer_config1)
    

Parameters

Table 1 Parameters

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| service_id | Yes | String | Service ID, which can be obtained from the real-time service page on the ModelArts management console |
| session | Yes | Object | Session object. For details about the initialization method, see Session Authentication. |

Table 2 Parameters for deploying the predictor and transformer

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| service_name | No | String | Name of the service, consisting of 1 to 64 characters and starting with a letter. Only letters, digits, underscores (_), and hyphens (-) are allowed. |
| description | No | String | Service description, containing a maximum of 100 characters. Left blank by default. |
| infer_type | No | String | Inference mode. The value can be real-time (default) or batch. real-time: the model is deployed as a web service that provides a real-time test UI and monitoring, and the service keeps running. batch: the service performs inference on batch data and automatically stops after the data is processed. |
| vpc_id | No | String | ID of the VPC where the real-time service instance is deployed. Left blank by default, in which case ModelArts allocates a dedicated VPC to each user and users are isolated from each other. To access other service components in the VPC of the service instance, set this parameter to the ID of the corresponding VPC. Once a VPC is configured, it cannot be modified. If both vpc_id and cluster_id are configured, only the dedicated cluster parameter takes effect. |
| subnet_network_id | No | String | Subnet ID. Left blank by default; mandatory when vpc_id is configured. Enter the network ID displayed in the subnet details on the VPC management console. A subnet provides dedicated network resources that are isolated from other networks. |
| security_group_id | No | String | Security group ID. Left blank by default; mandatory when vpc_id is configured. A security group is a virtual firewall that provides secure network access control policies for service instances. It must contain at least one inbound rule that permits TCP requests from source address 0.0.0.0/0 on port 8080. |
| configs | Yes | configs parameters of predictor and transformer | Model running configurations. When infer_type is set to batch, only one model can be configured. When infer_type is set to real-time, multiple models can be configured with traffic weights assigned based on service requirements; the models must have different version numbers. |
| schedule | No | schedule array | Service scheduling configuration, which can be set only for real-time services. Not used by default, in which case the service keeps running. For details, see Table 6. |
| cluster_id | No | String | ID of an old-version dedicated resource pool. Left blank by default. If configured, the service is deployed in the specified old-version dedicated resource pool. |
| pool_name | No | String | Name of a new-version dedicated resource pool. |

Table 3 configs parameters of predictor

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| model_id | Yes | String | Model ID. Obtain the value by calling the API described in Obtaining Models or from the ModelArts management console. |
| weight | Yes | Integer | Weight of traffic allocated to a model. Mandatory only when infer_type is set to real-time. The weights of all configured models must add up to 100. When multiple model versions are configured with different traffic weights in a real-time service, ModelArts forwards incoming prediction requests to the model instances of the corresponding versions based on the weights. See the example below the table. |
| specification | Yes | String | Resource specifications. The options are modelarts.vm.cpu.2u, modelarts.vm.gpu.p4 (permission required), and modelarts.vm.ai1.a310 (permission required). For the options that require a permission, create a service ticket on Huawei Cloud, and ModelArts O&M personnel will add the permissions for you. |
| instance_count | Yes | Integer | Number of instances deployed for a model. The maximum number of instances is 5. To use more instances, submit a service ticket. |
| envs | No | Map<String, String> | (Optional) Environment variable key-value pairs required for running the model. Left blank by default. |

For example, the following service configuration allocates 70% of the traffic to one model version and 30% to another:

{
    "service_name": "mnist",
    "description": "mnist service",
    "infer_type": "real-time",
    "config": [
        {
            "model_id": "xxxmodel-idxxx",
            "weight": "70",
            "specification": "modelarts.vm.cpu.2u",
            "instance_count": 1,
            "envs":
                {
                    "model_name": "mxnet-model-1",
                    "load_epoch": "0"
                }
        },
        {
            "model_id": "xxxxxx",
            "weight": "30",
            "specification": "modelarts.vm.cpu.2u",
            "instance_count": 1
        }
    ]
}

Table 4 configs parameters of transformer

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| model_id | Yes | String | Model ID |
| specification | Yes | String | Resource flavor. Currently, modelarts.vm.cpu.2u and modelarts.vm.gpu.p4 are available. |
| instance_count | Yes | Integer | Number of instances deployed for a model. The value range during the closed beta test is [1, 2]. |
| envs | No | Map<String, String> | (Optional) Environment variable key-value pairs required for running the model. Left blank by default. |
| src_path | Yes | String | OBS path to the input data of the batch job |
| dest_path | Yes | String | OBS path to the output data of the batch job |
| req_uri | Yes | String | Inference API called in the batch task, that is, the RESTful API exposed in the model image. Select an API URL from the config.json file of the model for inference. If a built-in ModelArts inference image is used, the API is displayed as /. |
| mapping_type | Yes | String | Mapping type of the input data. The value can be file or csv. file: each inference request corresponds to one file in the input data path; in this mode, req_uri of the model can have only one input parameter, and the type of this parameter must be file. csv: each inference request corresponds to one row of data in a CSV file; in this mode, the files in the input data path must be in CSV format, and mapping_rule must be configured to map each parameter in the inference request body to a column index in the CSV file. |
| mapping_rule | No | Map | Mapping between input parameters and CSV data. Mandatory only when mapping_type is set to csv. The mapping rule is similar to the input parameter definition in the config.json model configuration file. Configure an index parameter only under parameters of the string, number, integer, or boolean type; when an inference request is sent, the value of each parameter is taken from the column with that index in the CSV data (columns are separated by commas). Index values start from 0; an index of -1 means the parameter is ignored. For details, see the sample code of deploying transformer. |

The following shows how to create a batch service whose mapping_type is set to file:

{
    "service_name": "batchservicetest",
    "description": "",
    "infer_type": "batch",
    "config": [{
        "model_id": "598b913a-af3e-41ba-a1b5-bf065320f1e2",
        "specification": "modelarts.vm.cpu.2u",
        "instance_count": 1,
        "src_path": "https://infers-data.obs.xxx.com/xgboosterdata/",
        "dest_path": "https://infers-data.obs.xxx.com/output/",
        "req_uri": "/",
        "mapping_type": "file"
    }]
}

The following shows how to create a batch service whose mapping_type is set to csv:

{
    "service_name": "batchservicetest",
    "description": "",
    "infer_type": "batch",
    "config": [{
        "model_id": "598b913a-af3e-41ba-a1b5-bf065320f1e2",
        "specification": "modelarts.vm.cpu.2u",
        "instance_count": 1,
        "src_path": "https://infers-data.obs.xxx.com/xgboosterdata/",
        "dest_path": "https://infers-data.obs.xxx.com/output/",
        "req_uri": "/",
        "mapping_type": "csv",
        "mapping_rule": {
            "type": "object",
            "properties": {
                "data": {
                    "type": "object",
                    "properties": {
                        "req_data": {
                            "type": "array",
                            "items": [{
                                "type": "object",
                                "properties": {
                                    "input5": {
                                        "type": "number",
                                        "index": 0
                                    },
                                    "input4": {
                                        "type": "number",
                                        "index": 1
                                    },
                                    "input3": {
                                        "type": "number",
                                        "index": 2
                                    },
                                    "input2": {
                                        "type": "number",
                                        "index": 3
                                    },
                                    "input1": {
                                        "type": "number",
                                        "index": 4
                                    }
                                }
                            }]
                        }
                    }
                }
            }
        }
    }]
}

The format of the inference request body described in mapping_rule is as follows:

{
    "data": {
        "req_data": [{
            "input1": 1,
            "input2": 2,
            "input3": 3,
            "input4": 4,
            "input5": 5
        }]
    }
}

Table 5 Parameters in the response to the request for deploying the predictor and transformer

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| predictor | Yes | Predictor object | Predictor object. Its attributes include all the functions described in this chapter. |

Table 6 schedule parameters

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| op_type | Yes | String | Scheduling type. Currently, only the value stop is supported. |
| time_unit | Yes | String | Scheduling time unit. The options are DAYS, HOURS, and MINUTES. |
| duration | Yes | Integer | Value that maps to the time unit. For example, to stop the service after two hours, set time_unit to HOURS and duration to 2, as shown below. |
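
For example, the following schedule stops a real-time service two hours after deployment, using the Schedule class imported in the sample code above:

    from modelarts.config.model_config import Schedule

    # Stop the service automatically two hours after it starts running.
    schedule = [Schedule(op_type='stop', time_unit='HOURS', duration=2)]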

  • Example of deploying a real-time predictor instance in the handwritten digit recognition project implemented by MXNet:
    from modelarts.session import Session
    from modelarts.model import Model
    from modelarts.config.model_config import ServiceConfig
    
    session = Session()
    model_instance = Model(session, model_id="your_model_id")
    configs = []
    config1 = ServiceConfig(model_id="your_model_id", 
                            weight="100", 
                            instance_count=1, 
                            specification="modelarts.vm.cpu.2u",
                            envs={"input_data_name":"images",
                                  "input_data_shape":"0,1,28,28",
                                  "output_data_shape":"0,10"})
    configs.append(config1)
    predictor = model_instance.deploy_predictor(service_name="DigitRecognition", configs=configs)
    
  • Example of deploying a transformer instance (batch processing) in a handwritten digit recognition project implemented by MXNet:
    from modelarts.session import Session
    from modelarts.model import Model
    from modelarts.config.model_config import TransformerConfig
    
    session = Session()
    model_instance = Model(session, model_id="your_model_id")
    configs = []
    config1 = TransformerConfig(model_id="your_model_id", 
                                specification="modelarts.vm.cpu.2u", 
                                instance_count=1, 
                                envs={"input_data_name":"images","input_data_shape":"0,1,28,28","output_data_shape":"0,10"},
                                src_path="/w0403/testdigitrecognition/inferimages/",
                                dest_path="/w0403/testdigitrecognition/",
                                req_uri="/",
                                mapping_type="file")
    configs.append(config1)
    transformer = model_instance.deploy_transformer(service_name="DigitRecognition", infer_type="batch", configs=configs)
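
    After deployment, the returned service object can be used to manage the service. A minimal sketch follows, assuming the service object exposes the service-management functions described in this chapter (such as get_service_info):

    # Query the status of the batch service; it stops automatically when processing completes.
    service_info = transformer.get_service_info()
    print(service_info)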