
Updating Service Configurations

Function

This API is used to update configurations of a model service. It can also be used to start or stop a service.

URI

PUT /v1/{project_id}/services/{service_id}

Table 1 describes the required parameters.

Table 1 Parameter description

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| project_id | Yes | String | Project ID. For details about how to obtain the project ID, see Obtaining a Project ID. |
| service_id | Yes | String | Service ID |

Request Body

Table 2 describes the request parameters.

Table 2 Parameter description

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| description | No | String | Service description, which contains a maximum of 100 characters. If this parameter is not set, the service description is not updated. |
| status | No | String | Service status. The value can be running or stopped. If this parameter is not set, the service status is not changed. status and config cannot be modified at the same time; if both are present, only status is modified. |
| config | No | config array corresponding to infer_type | Service configuration. If this parameter is not set, the service configuration is not updated. The service is modified, and update_time is refreshed, only for requests in which config is updated. |
| schedule | No | schedule array | Service scheduling configuration, which can be configured only for real-time services. By default, this parameter is not used and the service runs continuously. For details, see Table 6. |
| additional_properties | No | Map<String, Object> | Additional service attributes, which facilitate service management |

Table 3 config parameters of real-time

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| model_id | Yes | String | Model ID |
| weight | Yes | Integer | Traffic weight allocated to a model. This parameter is mandatory only when infer_type is set to real-time. The weights of all models in the service must add up to 100. |
| specification | Yes | String | Resource specifications. Select specifications based on service requirements. For the current version, the following specifications are available: modelarts.vm.cpu.2u, modelarts.vm.gpu.0.25p4, modelarts.vm.gpu.0.5p4, modelarts.vm.gpu.p4, modelarts.vm.gpu.0.25t4, modelarts.vm.gpu.0.5t4, modelarts.vm.gpu.t4, modelarts.vm.arm.d310.3u6g, modelarts.vm.ai1.a310, modelarts.vm.cpu.free, and modelarts.vm.gpu.free |
| instance_count | Yes | Integer | Number of instances deployed for a model |
| envs | No | Map<String, String> | Environment variable key-value pairs required for running a model. By default, this parameter is left blank. |
| additional_properties | No | Map<String, Object> | Additional model attributes, which facilitate service instance management |

Table 4 config parameters of batch

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| model_id | Yes | String | Model ID |
| specification | Yes | String | Resource flavor. Available flavors: modelarts.vm.cpu.2u and modelarts.vm.gpu.p4 |
| instance_count | Yes | Integer | Number of instances deployed for a model |
| envs | No | Map<String, String> | Environment variable key-value pairs required for running a model |
| src_path | Yes | String | OBS path of the input data of a batch job |
| dest_path | Yes | String | OBS path of the output data of a batch job |
| req_uri | Yes | String | Inference path of a batch job. The input parameters and input data vary with the inference path. |
| mapping_type | Yes | String | Mapping type of the input data. The value can be file or csv. If file is selected, each inference request corresponds to a file in the input data path; in this mode, req_uri of the model can have only one input parameter, and the type of that parameter must be file. If csv is selected, each inference request corresponds to a row of data in a CSV file; in this mode, the files in the input data path must be in CSV format, and mapping_rule must be configured to map each parameter in the inference request body to an index in the CSV file. |
| mapping_rule | No | Map | Mapping between input parameters and CSV data. This parameter is mandatory only when mapping_type is set to csv. The mapping rule is similar to the definition of the input parameters in the config.json file: configure an index parameter under each parameter of the string, number, integer, or boolean type, specifying which CSV column supplies the value of that parameter in the inference request. Multiple pieces of CSV data are separated by commas (,). The value of index starts from 0; if index is set to -1, the parameter is ignored. For details, see the sample of creating a batch service. |
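As an illustration of the csv mapping described above, the following sketch shows a config fragment with a mapping_rule. The parameter names (input_1, input_2, unused_field) are hypothetical, and the exact nesting should follow the input definition in your model's config.json rather than this simplified shape:

```json
{
    "mapping_type": "csv",
    "mapping_rule": {
        "type": "object",
        "properties": {
            "input_1": { "type": "number", "index": 0 },
            "input_2": { "type": "string", "index": 1 },
            "unused_field": { "type": "string", "index": -1 }
        }
    }
}
```

Here input_1 takes its value from column 0 of each CSV row and input_2 from column 1, while unused_field is ignored because its index is -1.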

Table 5 config parameters of edge

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| model_id | Yes | String | Model ID, which cannot be modified currently |
| specification | Yes | String | Resource specifications, which cannot be modified currently |
| envs | No | Map<String, String> | Environment variable key-value pairs required for running a model, which cannot be modified currently |
| nodes | Yes | String array | Edge node ID array |

Table 6 schedule parameters

| Parameter | Mandatory | Type | Description |
| --- | --- | --- | --- |
| type | Yes | String | Scheduling type. Currently, only the value stop is supported. |
| time_unit | Yes | String | Scheduling time unit. Possible values are DAYS, HOURS, and MINUTES. |
| duration | Yes | Integer | Value that maps to the time unit. For example, if the task stops after two hours, set time_unit to HOURS and duration to 2. |
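Putting the schedule parameters together, a request body that stops a real-time service after two hours could look like the following sketch (only the schedule field is shown; other fields follow Table 2):

```json
{
    "schedule": [{
        "type": "stop",
        "time_unit": "HOURS",
        "duration": 2
    }]
}
```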

Response Body

None

Samples

The following shows how to update a real-time service.

  • Sample request
    PUT https://endpoint/v1/{project_id}/services/{service_id}
    {
        "description": "",
        "status": "running",
        "config": [{
            "model_id": "xxxx",
            "weight": "100",
            "specification": "modelarts.vm.cpu.2u",
            "instance_count": 1
        }]
    }
  • Sample response
    {}
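Because status and config cannot be modified in the same request, stopping a service is done with a request body that carries status alone. A minimal sketch:

```json
{
    "status": "stopped"
}
```

As with the sample above, the response body is empty ({}).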

Status Code

For details about the status code, see Table 1.