Updated on 2025-03-14 GMT+08:00

Deploying Services

Function

This API is used to deploy a model as a service.

Debugging

You can debug this API through automatic authentication in API Explorer or use the SDK sample code generated by API Explorer.

URI

POST /v1/{project_id}/services

Table 1 Path Parameters

Parameter

Mandatory

Type

Description

project_id

Yes

String

Project ID. For details, see Obtaining a Project ID and Name.

Request Parameters

Table 2 Request header parameters

Parameter

Mandatory

Type

Description

X-Auth-Token

Yes

String

User token. It can be obtained by calling the IAM API that is used to obtain a user token. The value of X-Subject-Token in the response header is the user token.
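As a minimal sketch (the endpoint, project ID, token, and body values are placeholders), the deployment request described in the URI and header tables can be assembled with Python's standard library; the token comes from the IAM user-token API mentioned above.

```python
import json
import urllib.request

API_PATH = "/v1/{project_id}/services"

def build_deploy_request(endpoint, project_id, token, body):
    """Build the POST /v1/{project_id}/services request.

    `endpoint` and `token` are placeholders here; obtain the token from
    the IAM user-token API (X-Subject-Token response header).
    """
    url = "https://" + endpoint + API_PATH.format(project_id=project_id)
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-Auth-Token": token},
        method="POST",
    )

# Sending is then a one-liner (network call, shown for completeness):
# with urllib.request.urlopen(build_deploy_request(...)) as resp:
#     result = json.load(resp)  # {"service_id": ..., "resource_ids": [...]}
```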

Table 3 Request body parameters

Parameter

Mandatory

Type

Description

workspace_id

No

String

ID of the workspace to which the service belongs. If no workspace is created, the default value is 0. If a workspace is created and used, the actual value prevails.

schedule

No

Array of Schedule objects

Service scheduling configuration, which can be configured only for real-time services. By default, this parameter is not used. Services run for a long time.

cluster_id

No

String

Resource pool ID used for deploying a service. This parameter is optional. For real-time and batch services, the value is the ID of the old-version dedicated resource pool. After this parameter is configured, the network configuration of the cluster is used and the vpc_id parameter does not take effect. Ensure that the cluster status is normal when using a dedicated resource pool to deploy a service. To use a dedicated resource pool, configure either cluster_id or pool_name; pool_name takes precedence over cluster_id. If neither is configured, a shared resource pool is used. If cluster_id or pool_name is also configured in the config field, the setting in config is preferentially used. For edge services, the value is the edge resource pool ID. To use an edge resource pool to deploy a service, ensure that the resource pool status is normal. If cluster_id is also configured in the config field, the setting in config is preferentially used.

pool_name

No

String

Resource pool ID of the elastic cluster in the AI dedicated resource pool used for service deployment. This parameter is optional for real-time and batch services. Ensure that the cluster status is normal when using a dedicated resource pool to deploy services. To use a dedicated resource pool, configure either cluster_id or pool_name; pool_name takes precedence over cluster_id. If neither is configured, the shared resource pool is used. If cluster_id or pool_name is also configured in config, the config settings are preferentially used.

infer_type

Yes

String

Inference type. The value can be real-time, edge, or batch.

  • real-time: real-time service. A model is deployed as a web service that provides a real-time test UI and monitoring capabilities. The service keeps running.

  • batch: batch service. A batch service can perform inference on batch data and automatically stops after data processing is completed.

  • edge: edge service. A model is deployed as a web service on an edge node through Intelligent EdgeFabric (IEF). You need to create a node on IEF beforehand.

vpc_id

No

String

ID of the VPC to which a real-time service instance is deployed. By default, this parameter is left blank. In this case, ModelArts allocates a dedicated VPC to each user, and users are isolated from each other. To access other service components in the VPC of the service instance, set this parameter to the ID of the corresponding VPC. Once a VPC is configured, it cannot be modified. If both vpc_id and cluster_id are configured, only the dedicated resource pool takes effect.

service_name

Yes

String

Service name, which consists of 1 to 64 characters. Only letters, digits, hyphens (-), and underscores (_) are allowed.

description

No

String

Service description, which is empty by default. The value can contain a maximum of 100 characters and cannot contain the following special characters: !<>+&"'

security_group_id

No

String

Security group. By default, this parameter is left blank. This parameter is mandatory if vpc_id is configured. A security group is a virtual firewall that provides secure network access control policies for service instances. A security group must contain at least one inbound rule to permit the requests whose protocol is TCP, source address is 0.0.0.0/0, and port number is 8080.

subnet_network_id

No

String

ID of a subnet. By default, this parameter is left blank. This parameter is mandatory if vpc_id is configured. Enter the network ID displayed in the subnet details on the VPC management console. A subnet provides dedicated network resources that are isolated from other networks.

config

Yes

Array of ServiceConfig objects

Model running configurations. If infer_type is batch or edge, you can configure only one model. If infer_type is real-time, you can configure multiple models and assign weights based on service requirements. However, the versions of multiple models must be unique.

additional_properties

No

Map<String,ServiceAdditionalProperties>

Additional service attributes, which facilitate service management.

load_balancer_policy

No

String

Backend ELB forwarding policy that can be set only for synchronous real-time services. The value can be ROUND_ROBIN (weighted round robin), LEAST_CONNECTIONS (weighted least connections), or SOURCE_IP (source IP address algorithm).

service_secrets

No

Array of ServiceSecret objects

Specifies the list of keys mounted to the service.

priority

No

Integer

Preemption priority. The value ranges from 1 to 3. Setting a higher preemption priority ensures that high-priority services are scheduled first. This parameter can be set only when infer_type is real-time or batch.
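The request-body constraints in Table 3 can be checked client-side before sending. The sketch below covers the service_name character set, the description limits, the infer_type values, and the priority range; the function name and error strings are illustrative, and the server remains the authoritative validator.

```python
import re

def validate_service_request(body):
    """Minimal client-side checks against Table 3 (a sketch only)."""
    errors = []
    name = body.get("service_name", "")
    if not re.fullmatch(r"[A-Za-z0-9_-]{1,64}", name):
        errors.append("service_name: 1-64 letters, digits, hyphens, underscores")
    desc = body.get("description", "")
    if len(desc) > 100 or any(c in desc for c in "!<>+&\"'"):
        errors.append("description: at most 100 chars, none of !<>+&\"'")
    if body.get("infer_type") not in ("real-time", "edge", "batch"):
        errors.append("infer_type: must be real-time, edge, or batch")
    if "priority" in body and body["priority"] not in (1, 2, 3):
        errors.append("priority: integer in the range 1 to 3")
    return errors
```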

Table 4 Schedule

Parameter

Mandatory

Type

Description

duration

Yes

Integer

Value mapped to the time unit. For example, if the task stops after two hours, set time_unit to HOURS and duration to 2.

time_unit

Yes

String

Scheduling time unit. Possible values are DAYS, HOURS, and MINUTES.

type

Yes

String

Scheduling type. Currently, the value can only be stop, indicating that the task automatically stops after a specified period of time.

Table 5 ServiceConfig

Parameter

Mandatory

Type

Description

custom_spec

No

CustomSpec object

Customized resource specification configuration. This parameter is mandatory only when specification is set to custom.

envs

No

Map<String,String>

Common parameter. (Optional) Environment variable key-value pair required for running a model. By default, this parameter is left blank.

specification

Yes

String

Common parameter. Resource flavor. You can query the supported service deployment flavors. The current version supports modelarts.vm.cpu.2u, modelarts.vm.gpu.pnt004 (must be requested), modelarts.vm.ai1.snt3 (must be requested), and custom (available only when the service is deployed in a dedicated resource pool). To request a flavor, submit a service ticket and obtain permissions from ModelArts O&M engineers. If this parameter is set to custom, the custom_spec parameter must be specified.

weight

No

Integer

Weight of traffic allocated to a model. This parameter is mandatory only when infer_type is set to real-time, and the sum of all weights must equal 100. If multiple model versions are configured with different traffic weights in a real-time service, ModelArts continuously accesses the prediction API of the service and forwards prediction requests to the model instances of the corresponding versions based on the weights.

deploy_timeout_in_seconds

No

Integer

Timeout interval for deploying a single model instance, in seconds

model_id

Yes

String

Common parameter. Model ID. You can obtain the value by calling the API for querying the AI application list.

src_path

No

String

Mandatory for batch services. OBS path to the input data of a batch job

req_uri

No

String

Mandatory for batch services. Inference API called in a batch task, which is the RESTful API exposed in the model image. You must select an API URL from the config.json file of the model for inference. If a built-in inference image of ModelArts is used, the API is displayed as /.

mapping_type

No

String

Mandatory for batch services. Mapping type of the input data. The value can be file or csv.

  • If file is selected, each inference request corresponds to a file in the input data directory. When this mode is used, req_uri corresponding to the model can have only one input parameter and the parameter type is file.

  • If csv is selected, each inference request corresponds to a row of data in the CSV file. If this mode is used, the file name extension in the input data directory must be .csv, and the mapping_rule parameter must be configured to indicate the CSV index corresponding to each parameter in the inference request body.

cluster_id

No

String

Resource pool ID used for deploying a service. This parameter is optional. For real-time and batch services, the value is the ID of the old-version dedicated resource pool. After this parameter is configured, the network configuration of the cluster is used and the vpc_id parameter does not take effect. When using a dedicated resource pool to deploy a service, ensure that the cluster status is normal. This parameter takes precedence over the service-level cluster_id and pool_name parameters. If neither cluster_id nor pool_name is configured in config, the service-level cluster_id and pool_name parameters are used; if none of them is configured, the shared resource pool is used. For edge services, the value is the edge resource pool ID. When using an edge resource pool to deploy a service, ensure that the resource pool status is normal. This parameter takes precedence over the service-level cluster_id parameter; if it is not set, the service-level cluster_id parameter is used.

pool_name

No

String

Resource pool ID of the elastic cluster in the AI dedicated resource pool used for service deployment. This parameter is optional for real-time and batch services. When using a dedicated resource pool to deploy a service, ensure that the cluster status is normal. This parameter takes precedence over cluster_id in config, and both take precedence over the service-level cluster_id and pool_name parameters. If neither cluster_id nor pool_name is configured in config, the service-level cluster_id and pool_name parameters are used; if none of them is configured, the shared resource pool is used.

nodes

No

Array of strings

Edge node ID array. The node ID is the edge node ID on IEF, which can be obtained after the edge node is created on IEF. This parameter is optional for edge services.

mapping_rule

No

Object

Optional for batch services. Mapping between input parameters and CSV data. This parameter is mandatory only when mapping_type is set to csv. The mapping rule is similar to the definition of the input parameters in the config.json file. You only need to configure the index parameters under each parameter of the string, number, integer, or boolean type, and specify the value of this parameter to the values of the index parameters in the CSV file to send an inference request. Use commas (,) to separate multiple pieces of CSV data. The values of the index parameters start from 0. If the value of the index parameter is -1, ignore this parameter. For details, see the sample of creating a batch service.

src_type

No

String

Mandatory for batch services. Data source type, which can be ManifestFile. By default, this parameter is left blank, indicating that only files in the src_path directory are read. If this parameter is set to ManifestFile, src_path must be set to a specific manifest path. Multiple data paths can be specified in the manifest file. For details, see the manifest inference specifications.

dest_path

No

String

Mandatory for batch services. OBS path to the output data of a batch job

instance_count

Yes

Integer

Common parameter. Number of instances deployed in a model. The maximum number of instances is 128. To use more instances, submit a service ticket.

additional_properties

No

Map<String,ModelAdditionalProperties>

Additional attributes for model deployment, facilitating service instance management

affinity

No

ServiceAffinity object

Service affinity deployment configuration
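The weight rule in Table 5 (weights of all real-time config entries must sum to 100) can be checked before submission. A small sketch; the function name is illustrative, and int() tolerates weights given as quoted strings.

```python
def check_realtime_weights(config):
    """Return True when the traffic weights across all entries of a
    real-time service's `config` array sum to exactly 100."""
    total = sum(int(entry.get("weight", 0)) for entry in config)
    return total == 100
```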

Table 6 CustomSpec

Parameter

Mandatory

Type

Description

gpu_p4

No

Float

Number of GPUs, which can be a decimal. The value cannot be smaller than 0, and the third decimal place is rounded off. This parameter is optional and is not used by default.

memory

Yes

Integer

Memory in MB, which must be an integer

cpu

Yes

Float

Number of CPU cores, which can be a decimal. The value cannot be smaller than 0.01, and the third decimal place is rounded off.

ascend_a310

No

Integer

Number of Ascend chips. This parameter is optional and is not used by default. Configure either this parameter or the gpu parameter.
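The rounding and bounds in Table 6 can be normalized client-side. A sketch under stated assumptions: the function name is hypothetical, and "third decimal place rounded off" is interpreted as rounding to two decimals.

```python
def normalize_custom_spec(cpu, memory, gpu=None):
    """Normalize a custom_spec fragment per Table 6: cpu (and gpu, if
    given) rounded to two decimals, memory an integer number of MB."""
    spec = {"cpu": round(cpu, 2), "memory": int(memory)}
    if spec["cpu"] < 0.01:
        raise ValueError("cpu must be at least 0.01 cores")
    if gpu is not None:
        if gpu < 0:
            raise ValueError("gpu count cannot be negative")
        spec["gpu_p4"] = round(gpu, 2)
    return spec
```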

Table 7 ModelAdditionalProperties

Parameter

Mandatory

Type

Description

log_volume

No

Array of LogVolume objects

Host directory mounting.

This parameter takes effect only if a dedicated resource pool is used. If a public resource pool is used to deploy services, this parameter cannot be configured. Otherwise, an error will occur.

max_surge

No

Float

The value must be greater than 0. If this parameter is not set, the default value 1 is used. If the value is less than 1, it indicates the percentage of instances to be added during the rolling upgrade. If the value is greater than 1, it indicates the maximum number of instances to be added during the rolling upgrade.

max_unavailable

No

Float

The value must be greater than 0. If this parameter is not set, the default value 0 is used. If the value is less than 1, it indicates the percentage of instances that can be scaled in during the rolling upgrade. If the value is greater than 1, it indicates the number of instances that can be scaled in during the rolling upgrade.

termination_grace_period_seconds

No

Integer

Graceful stop period of a container, in seconds.

persistent_volumes

No

Array of PersistentVolumes objects

Persistent storage mounting.
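The max_surge / max_unavailable semantics in Table 7 (values below 1 act as a fraction of the instance count, values of 1 or more as an absolute count) can be made concrete. A sketch: the rounding directions chosen here are an assumption, not documented behavior.

```python
import math

def rolling_upgrade_limits(instance_count, max_surge=1.0, max_unavailable=0.0):
    """Interpret Table 7's rolling-upgrade knobs: fractional values scale
    instance_count; values >= 1 are taken as absolute instance counts.
    Defaults follow the table (max_surge 1, max_unavailable 0)."""
    surge = (math.ceil(instance_count * max_surge)
             if max_surge < 1 else int(max_surge))
    unavailable = (math.floor(instance_count * max_unavailable)
                   if max_unavailable < 1 else int(max_unavailable))
    return surge, unavailable
```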

Table 8 LogVolume

Parameter

Mandatory

Type

Description

host_path

Yes

String

Log path to be mapped on the host

mount_path

Yes

String

Path to the logs in the container

Table 9 PersistentVolumes

Parameter

Mandatory

Type

Description

name

No

String

Volume name.

mount_path

Yes

String

Mount path of a volume in the container. Example: /tmp. The container path must not be a system directory, such as / or /var/run; otherwise, an exception occurs. It is a good practice to mount the volume to an empty directory. If the directory is not empty, ensure that it contains no files that affect container startup; otherwise, such files will be replaced, preventing the container from starting and the workload from being created.

storage_type

No

String

Mount type: sfs_turbo.

source_address

No

String

Mounting source path. When an SFS Turbo file system is mounted, the value is the SFS Turbo ID.

Table 10 ServiceAffinity

Parameter

Mandatory

Type

Description

node_affinity

No

NodeAffinity object

Set this parameter when node affinity is used.

Table 11 NodeAffinity

Parameter

Mandatory

Type

Description

mode

Yes

String

Node affinity mode. The value required indicates strong affinity. A service instance can be scheduled only to a specified node. If the specified node does not exist, the scheduling fails. preferred indicates weak affinity. A service instance tends to be scheduled to a specified node. If the specified node does not meet the scheduling conditions, the service instance will be scheduled to another node.

pool_infos

No

Array of AffinityPoolInfo objects

Configure an affinity policy for a specified cluster and specify the nodes in the cluster.

Table 12 AffinityPoolInfo

Parameter

Mandatory

Type

Description

pool_name

Yes

String

Cluster name. The value must match the pool_name configured at the outer layer.

nodes

Yes

Array of AffinityNodeInfo objects

List of affinity nodes

Table 13 AffinityNodeInfo

Parameter

Mandatory

Type

Description

name

Yes

String

Node name, which corresponds to the private IP address of the node.

Table 14 ServiceAdditionalProperties

Parameter

Mandatory

Type

Description

smn_notification

Yes

Map<String,SmnNotification>

SMN message notification structure, which is used to notify the user of the service status change

log_report_channels

No

Array of LogReportPipeline objects

Log channel group. If this parameter is not specified or the array length is 0, LTS log interconnection is disabled. This function cannot be modified after being enabled.

websocket_upgrade

No

Boolean

Whether the service interface is upgraded to WebSocket. During service deployment, the default value is false. During service configuration update, the default value is the value set last time.

  • false: Do not upgrade to WebSocket.

  • true: Upgrade to WebSocket. This parameter cannot be modified after WebSocket is enabled. WebSocket cannot be enabled together with Traffic Limit.

Table 15 SmnNotification

Parameter

Mandatory

Type

Description

topic_urn

Yes

String

URN of an SMN topic

events

Yes

Array of integers

Event ID. The options are as follows:

  • 1: failed

  • 3: running

  • 7: concerning

  • 11: pending
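An SmnNotification fragment (Table 15) can be built and checked against the event IDs above. A sketch: the helper name is illustrative, the topic URN is a placeholder, and the exact nesting inside additional_properties follows Table 14.

```python
# Event IDs from Table 15 and their meanings.
SERVICE_EVENTS = {1: "failed", 3: "running", 7: "concerning", 11: "pending"}

def smn_notification(topic_urn, events):
    """Build the smn_notification fragment of ServiceAdditionalProperties,
    rejecting event IDs not listed in Table 15."""
    unknown = [e for e in events if e not in SERVICE_EVENTS]
    if unknown:
        raise ValueError("unknown event ids: %s" % unknown)
    return {"smn_notification": {"topic_urn": topic_urn, "events": list(events)}}
```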

Table 16 LogReportPipeline

Parameter

Mandatory

Type

Description

type

Yes

String

Log pipeline type. Currently, only LTS is supported.

configuration

No

LtsConfiguration object

LTS log configuration.

Table 17 LtsConfiguration

Parameter

Mandatory

Type

Description

log_group_id

Yes

String

LTS log group ID. The value contains 64 characters.

log_stream_id

Yes

String

LTS log stream ID. The value contains 64 characters.

Table 18 ServiceSecret

Parameter

Mandatory

Type

Description

secretId

Yes

String

Specifies the key ID.

mouthPath

Yes

String

Mount path.

Response Parameters

Status code: 200

Table 19 Response body parameters

Parameter

Type

Description

service_id

String

Service ID

resource_ids

Array of strings

Array of resource IDs generated for the target model

Example Requests

  • Sample request of creating a real-time service

    POST https://{endpoint}/v1/{project_id}/services
    
    {
      "infer_type" : "real-time",
      "service_name" : "mnist",
      "description" : "mnist service",
      "config" : [ {
        "specification" : "modelarts.vm.cpu.2u",
        "weight" : 100,
        "model_id" : "0e07b41b-173e-42db-8c16-8e1b44cc0d44",
        "instance_count" : 1
      } ]
    }
  • Create a real-time service and configure multi-version traffic distribution.

    POST https://{endpoint}/v1/{project_id}/services
    
    {
      "service_name" : "mnist",
      "description" : "mnist service",
      "infer_type" : "real-time",
      "config" : [ {
        "model_id" : "xxxmodel-idxxx",
        "weight" : "70",
        "specification" : "modelarts.vm.cpu.2u",
        "instance_count" : 1,
        "envs" : {
          "model_name" : "mxnet-model-1",
          "load_epoch" : "0"
        }
      }, {
        "model_id" : "xxxxxx",
        "weight" : "30",
        "specification" : "modelarts.vm.cpu.2u",
        "instance_count" : 1
      } ]
    }
  • Create a real-time service in a dedicated resource pool with custom specifications.

    POST https://{endpoint}/v1/{project_id}/services
    
    {
      "service_name" : "realtime-demo",
      "description" : "",
      "infer_type" : "real-time",
      "cluster_id" : "8abf68a969c3cb3a0169c4acb24b0000",
      "config" : [ {
        "model_id" : "eb6a4a8c-5713-4a27-b8ed-c7e694499af5",
        "weight" : "100",
        "cluster_id" : "8abf68a969c3cb3a0169c4acb24b0000",
        "specification" : "custom",
        "custom_spec" : {
          "cpu" : 1.5,
          "memory" : 7500
        },
        "instance_count" : 1
      } ]
    }
  • Create a real-time service and configure it to automatically stop.

    POST https://{endpoint}/v1/{project_id}/services
    
    {
      "service_name" : "service-demo",
      "description" : "demo",
      "infer_type" : "real-time",
      "config" : [ {
        "model_id" : "xxxmodel-idxxx",
        "weight" : "100",
        "specification" : "modelarts.vm.cpu.2u",
        "instance_count" : 1
      } ],
      "schedule" : [ {
        "type" : "stop",
        "time_unit" : "HOURS",
        "duration" : 1
      } ]
    }
  • Create a batch service and set mapping_type to file.

    POST https://{endpoint}/v1/{project_id}/services
    
    {
      "service_name" : "batchservicetest",
      "description" : "",
      "infer_type" : "batch",
      "cluster_id" : "8abf68a969c3cb3a0169c4acb24b****",
      "config" : [ {
        "model_id" : "598b913a-af3e-41ba-a1b5-bf065320f1e2",
        "specification" : "modelarts.vm.cpu.2u",
        "instance_count" : 1,
        "src_path" : "https://infers-data.obs.xxxxx.com/xgboosterdata/",
        "dest_path" : "https://infers-data.obs.xxxxx.com/output/",
        "req_uri" : "/",
        "mapping_type" : "file"
      } ]
    }
  • Create a batch service and set mapping_type to csv.

    POST https://{endpoint}/v1/{project_id}/services
    
    {
      "service_name" : "batchservicetest",
      "description" : "",
      "infer_type" : "batch",
      "config" : [ {
        "model_id" : "598b913a-af3e-41ba-a1b5-bf065320f1e2",
        "specification" : "modelarts.vm.cpu.2u",
        "instance_count" : 1,
        "src_path" : "https://infers-data.obs.xxxxx.com/xgboosterdata/",
        "dest_path" : "https://infers-data.obs.xxxxx.com/output/",
        "req_uri" : "/",
        "mapping_type" : "csv",
        "mapping_rule" : {
          "type" : "object",
          "properties" : {
            "data" : {
              "type" : "object",
              "properties" : {
                "req_data" : {
                  "type" : "array",
                  "items" : [ {
                    "type" : "object",
                    "properties" : {
                      "input5" : {
                        "type" : "number",
                        "index" : 0
                      },
                      "input4" : {
                        "type" : "number",
                        "index" : 1
                      },
                      "input3" : {
                        "type" : "number",
                        "index" : 2
                      },
                      "input2" : {
                        "type" : "number",
                        "index" : 3
                      },
                      "input1" : {
                        "type" : "number",
                        "index" : 4
                      }
                    }
                  } ]
                }
              }
            }
          }
        }
      } ]
    }
  • Create an edge service.

    POST https://{endpoint}/v1/{project_id}/services
    
    {
      "service_name" : "service-edge-demo",
      "description" : "",
      "infer_type" : "edge",
      "config" : [ {
        "model_id" : "eb6a4a8c-5713-4a27-b8ed-c7e694499af5",
        "specification" : "custom",
        "instance_count" : 1,
        "custom_spec" : {
          "cpu" : 1.5,
          "memory" : 7500
        },
        "envs" : { },
        "nodes" : [ "2r8c4fb9-t497-40u3-89yf-skui77db0472" ]
      } ]
    }
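The csv mapping in the batch example above can be exercised client-side. A sketch that walks a mapping_rule fragment and one CSV row (a list of strings) to produce the inference request body; the recursion follows the mapping_rule description in Table 5, with index -1 meaning the parameter is ignored, and the function name is illustrative.

```python
def body_from_csv_row(mapping, row):
    """Build one inference request body from a mapping_rule fragment and
    one CSV row. Leaf parameters carry an `index` into the row."""
    kind = mapping.get("type")
    if kind == "object":
        return {name: body_from_csv_row(sub, row)
                for name, sub in mapping.get("properties", {}).items()}
    if kind == "array":
        return [body_from_csv_row(item, row) for item in mapping.get("items", [])]
    index = mapping.get("index", -1)
    if index < 0:          # index -1: ignore this parameter
        return None
    cell = row[index]
    if kind == "number":
        return float(cell)
    if kind == "integer":
        return int(cell)
    if kind == "boolean":
        return cell.strip().lower() == "true"
    return cell            # string
```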

Example Responses

Status code: 200

Service deployed

{
  "service_id" : "10eb0091-887f-4839-9929-cbc884f1e20e",
  "resource_ids" : [ "INF-f878991839647358@1598319442708" ]
}

Status Codes

Status Code

Description

200

Service deployed

Error Codes

See Error Codes.