Deploying Services
Function
This API is used to deploy a model as a service.
Debugging
You can debug this API through automatic authentication in API Explorer or use the SDK sample code generated by API Explorer.
URI
POST /v1/{project_id}/services
Path parameters

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| project_id | Yes | String | Project ID. For details, see Obtaining a Project ID and Name. |
Request Parameters
Request header parameters

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| X-Auth-Token | Yes | String | User token. It can be obtained by calling the IAM API that is used to obtain a user token. The value of X-Subject-Token in the response header is the user token. |
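As a hedged illustration of how the URI and the X-Auth-Token header fit together, the request could be composed as below. The helper name and all values (endpoint, project ID, token, model ID) are placeholders for this sketch, not part of the API.

```python
import json
import urllib.request

def build_deploy_request(endpoint, project_id, token, body):
    """Compose a POST /v1/{project_id}/services request.

    X-Auth-Token carries the user token obtained from the IAM
    token API (the X-Subject-Token response header).
    """
    url = f"https://{endpoint}/v1/{project_id}/services"
    headers = {
        "Content-Type": "application/json",
        "X-Auth-Token": token,
    }
    return urllib.request.Request(
        url,
        data=json.dumps(body).encode("utf-8"),
        headers=headers,
        method="POST",
    )

# Placeholder values; substitute real ones before sending.
req = build_deploy_request(
    "modelarts.example.com", "my-project-id", "my-token",
    {
        "infer_type": "real-time",
        "service_name": "mnist",
        "config": [{
            "model_id": "my-model-id",
            "specification": "modelarts.vm.cpu.2u",
            "weight": 100,
            "instance_count": 1,
        }],
    },
)
# resp = urllib.request.urlopen(req)  # uncomment to actually send
```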
Request body parameters

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| workspace_id | No | String | ID of the workspace to which the service belongs. The default value is 0, indicating the default workspace. |
| schedule | No | Array of Schedule objects | Service scheduling configuration, which can be configured only for real-time services. By default, this parameter is not used and services run continuously. |
| cluster_id | No | String | Dedicated resource pool ID. By default, this parameter is left blank, indicating that no dedicated resource pool is used. When using a dedicated resource pool to deploy services, ensure that the cluster is running properly. After this parameter is configured, the network configuration of the cluster is used and the vpc_id parameter does not take effect. If both this parameter and cluster_id in the real-time config are configured, cluster_id in the real-time config takes precedence. If a dedicated resource pool is used, either cluster_id or pool_name must be specified. |
| pool_name | No | String | ID of a new-version dedicated resource pool. By default, this parameter is left blank, indicating that no dedicated resource pool is used. When using a dedicated resource pool to deploy services, ensure that the cluster is running properly. If both this parameter and pool_name in the real-time config are configured, pool_name in the real-time config takes precedence. If a dedicated resource pool is used, either cluster_id or pool_name must be specified. |
| infer_type | Yes | String | Inference mode. Options: real-time, batch, or edge. |
| vpc_id | No | String | ID of the VPC to which a real-time service instance is deployed. By default, this parameter is left blank. In this case, ModelArts allocates a dedicated VPC to each user, and users are isolated from each other. To access other service components in the VPC of the service instance, set this parameter to the ID of the corresponding VPC. Once a VPC is configured, it cannot be modified. If both vpc_id and cluster_id are configured, only the dedicated resource pool takes effect. |
| service_name | Yes | String | Service name, which consists of 1 to 64 characters. Only letters, digits, hyphens (-), and underscores (_) are allowed. |
| description | No | String | Service remarks. By default, this parameter is left blank. The value contains a maximum of 100 characters and cannot contain the following characters: ! < > + & " ' |
| security_group_id | No | String | Security group. By default, this parameter is left blank. This parameter is mandatory if vpc_id is configured. A security group is a virtual firewall that provides secure network access control policies for service instances. A security group must contain at least one inbound rule to permit requests whose protocol is TCP, source address is 0.0.0.0/0, and port number is 8080. |
| subnet_network_id | No | String | ID of a subnet. By default, this parameter is left blank. This parameter is mandatory if vpc_id is configured. Enter the network ID displayed in the subnet details on the VPC management console. A subnet provides dedicated network resources that are isolated from other networks. |
| config | Yes | Array of ServiceConfig objects | Model running configurations. If infer_type is batch or edge, only one model can be configured. If infer_type is real-time, multiple models can be configured with traffic weights assigned based on service requirements, but the versions of the models must be unique. |
| additional_properties | No | Map<String,ServiceAdditionalProperties> | Additional service attributes, which facilitate service management. |
Schedule

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| duration | Yes | Integer | Value mapped to the time unit. For example, if the task stops after two hours, set time_unit to HOURS and duration to 2. |
| time_unit | Yes | String | Scheduling time unit. Possible values are DAYS, HOURS, and MINUTES. |
| type | Yes | String | Scheduling type. Only the value stop is supported. |
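To make the duration/time_unit pairing concrete, here is a minimal sketch that converts stop-type schedule entries into an offset in seconds. The helper name and unit table are illustrative, not part of the API.

```python
# Seconds per scheduling time unit supported by the schedule parameter.
UNIT_SECONDS = {"MINUTES": 60, "HOURS": 3600, "DAYS": 86400}

def stop_after_seconds(schedule):
    """Total offset, in seconds, after which the service stops."""
    return sum(
        entry["duration"] * UNIT_SECONDS[entry["time_unit"]]
        for entry in schedule
        if entry["type"] == "stop"  # the only supported type
    )

# A service scheduled to stop after two hours:
stop_after_seconds([{"type": "stop", "time_unit": "HOURS", "duration": 2}])
# → 7200
```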
ServiceConfig

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| custom_spec | No | CustomSpec object | Custom resource specifications. |
| envs | No | Map<String,String> | Common parameter. Environment variable key-value pairs required for running the model. By default, this parameter is left blank. |
| specification | Yes | String | Common parameter. Resource flavor. You can query the supported service deployment flavors. The current version supports modelarts.vm.cpu.2u, modelarts.vm.gpu.pnt004 (must be requested), modelarts.vm.ai1.snt3 (must be requested), and custom (available only when the service is deployed in a dedicated resource pool). To request a flavor, submit a service ticket and obtain permissions from ModelArts O&M engineers. If this parameter is set to custom, the custom_spec parameter must be specified. |
| weight | No | Integer | Mandatory for real-time services. Weight of traffic allocated to the model. The sum of all weights must equal 100. If multiple model versions are configured with different traffic weights in a real-time service, ModelArts forwards prediction requests to the model instances of each version based on these weights. |
| deploy_timeout_in_seconds | No | Integer | Timeout interval for deploying a single model instance, in seconds. |
| model_id | Yes | String | Common parameter. Model ID. You can obtain the value by calling the API for querying the AI application list. |
| src_path | No | String | Mandatory for batch services. OBS path to the input data of a batch job. |
| req_uri | No | String | Mandatory for batch services. Inference API called in a batch task, which is the RESTful API exposed in the model image. You must select an API URL from the config.json file of the model for inference. If a built-in inference image of ModelArts is used, the API is displayed as /. |
| mapping_type | No | String | Mandatory for batch services. Mapping type of the input data. The value can be file or csv. |
| cluster_id | No | String | Optional for real-time services. ID of a dedicated resource pool. By default, this parameter is left blank, indicating that no dedicated resource pool is used. When using a dedicated resource pool to deploy services, ensure that the resource pool is running properly. After this parameter is configured, the network configuration of the cluster is used and the vpc_id parameter does not take effect. |
| pool_name | No | String | Optional for real-time services. ID of a new-version dedicated resource pool. By default, this parameter is left blank, indicating that no dedicated resource pool is used. When using a dedicated resource pool to deploy services, ensure that the cluster is running properly. If both this parameter and the service-level pool_name are configured, this parameter takes precedence. |
| nodes | No | Array of strings | Mandatory for edge services. Edge node ID array. The node ID is the edge node ID on IEF, which can be obtained after the edge node is created on IEF. |
| mapping_rule | No | Object | Optional for batch services. Mapping between input parameters and CSV data. This parameter is mandatory only when mapping_type is set to csv. The mapping rule is similar to the definition of the input parameters in the model's config.json file. Configure an index parameter under each parameter of the string, number, integer, or boolean type to specify which CSV column supplies that parameter's value in the inference request. Separate multiple pieces of CSV data with commas (,). Index values start from 0; if the index is -1, the parameter is ignored. For details, see the sample of creating a batch service. |
| src_type | No | String | Mandatory for batch services. Data source type, which can be ManifestFile. By default, this parameter is left blank, indicating that only files in the src_path directory are read. If this parameter is set to ManifestFile, src_path must be set to a specific manifest path. Multiple data paths can be specified in the manifest file. For details, see the manifest inference specifications. |
| dest_path | No | String | Mandatory for batch services. OBS path to the output data of a batch job. |
| instance_count | Yes | Integer | Common parameter. Number of instances deployed for the model. The maximum number of instances is 128. To use more instances, submit a service ticket. |
| additional_properties | No | Map<String,ModelAdditionalProperties> | Additional attributes for model deployment, which facilitate service instance management. |
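The index-based mapping_rule described above can be hard to visualize. The sketch below shows how each index selects a CSV column and how -1 skips a field; the helper and sample rule are illustrative and only mimic, not reproduce, the server-side behavior.

```python
def apply_mapping(rule, row):
    """Build a request fragment from one CSV row using a mapping_rule.

    Leaves carrying an "index" take their value from that CSV column;
    an index of -1 means the field is ignored.
    """
    if "index" in rule:                       # leaf parameter
        i = rule["index"]
        return None if i == -1 else row[i]
    if rule.get("type") == "object":
        out = {}
        for name, sub in rule.get("properties", {}).items():
            value = apply_mapping(sub, row)
            if value is not None:
                out[name] = value
        return out
    if rule.get("type") == "array":
        return [apply_mapping(item, row) for item in rule.get("items", [])]
    return None

rule = {"type": "object", "properties": {
    "a": {"type": "number", "index": 0},
    "b": {"type": "number", "index": 2},
    "skipped": {"type": "string", "index": -1},
}}
apply_mapping(rule, [1.5, 9.9, 3.0])  # → {"a": 1.5, "b": 3.0}
```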
CustomSpec

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| gpu_p4 | No | Float | Number of GPUs, which can be a decimal. The value cannot be smaller than 0, and the third decimal place is rounded off. This parameter is optional and is not used by default. |
| memory | Yes | Integer | Memory in MB, which must be an integer. |
| cpu | Yes | Float | Number of CPU cores, which can be a decimal. The value cannot be smaller than 0.01, and the third decimal place is rounded off. |
| ascend_a310 | No | Integer | Number of Ascend chips. This parameter is optional and is not used by default. Configure either this parameter or gpu_p4, not both. |
ModelAdditionalProperties

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| log_volume | No | Array of log_volume objects | Host directory mounting. This parameter takes effect only if a dedicated resource pool is used. If a public resource pool is used to deploy services, this parameter cannot be configured; otherwise, an error will occur. |
| max_surge | No | Float | The value must be greater than 0. If this parameter is not set, the default value 1 is used. A value less than 1 indicates the percentage of instances to be added during a rolling upgrade; a value greater than 1 indicates the maximum number of instances to be added. |
| max_unavailable | No | Float | The value must be greater than 0. If this parameter is not set, the default value 0 is used. A value less than 1 indicates the percentage of instances that can be scaled in during a rolling upgrade; a value greater than 1 indicates the number of instances that can be scaled in. |
| termination_grace_period_seconds | No | Integer | Graceful stop period of a container, in seconds. |
| persistent_volumes | No | Array of persistent_volumes objects | Persistent storage mounting. |
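One way to read the max_surge/max_unavailable semantics: fractional values scale with the instance count, while whole numbers are absolute counts. The rounding below (surge rounded up, unavailable rounded down) is an assumption for illustration; the service side may round differently.

```python
import math

def rolling_counts(instance_count, max_surge=1.0, max_unavailable=0.0):
    """Translate max_surge/max_unavailable into instance counts.

    Values below 1 are treated as percentages of instance_count;
    values of 1 or more are absolute counts (API defaults: 1 and 0).
    Rounding direction is an assumption made for this sketch.
    """
    surge = (math.ceil(instance_count * max_surge)
             if max_surge < 1 else int(max_surge))
    unavailable = (math.floor(instance_count * max_unavailable)
                   if max_unavailable < 1 else int(max_unavailable))
    return surge, unavailable

# With 10 instances, 25% surge and 25% unavailable:
rolling_counts(10, max_surge=0.25, max_unavailable=0.25)
# → (3, 2): up to 3 extra instances, at most 2 taken down at once
```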
log_volume

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| host_path | Yes | String | Log path to be mapped on the host |
| mount_path | Yes | String | Path to the logs in the container |
persistent_volumes

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| name | No | String | Volume name |
| mount_path | Yes | String | Mount path of the volume in the container |
ServiceAdditionalProperties

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| smn_notification | Yes | Map<String,SmnNotification> | SMN message notification structure, which is used to notify the user of service status changes |
| log_report_channels | No | Array of LogReportPipeline objects | Log channel group. If this parameter is not specified or the array length is 0, LTS log interconnection is disabled. This function cannot be modified after being enabled. |
| websocket_upgrade | No | Boolean | Whether the service interface is upgraded to WebSocket. During service deployment, the default value is false. During service configuration update, the default value is the value set last time. |
SmnNotification

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| topic_urn | Yes | String | URN of an SMN topic |
| events | Yes | Array of integers | Event ID |
LogReportPipeline

| Parameter | Mandatory | Type | Description |
|---|---|---|---|
| type | Yes | String | Log pipeline type. Currently, only LTS is supported. |
| configuration | No | LtsConfiguration object | LTS log configuration. |
Response Parameters
Status code: 200
| Parameter | Type | Description |
|---|---|---|
| service_id | String | Service ID |
| resource_ids | Array of strings | Array of resource IDs generated for the target model |
Example Requests
Creating a real-time service

```
POST https://{endpoint}/v1/{project_id}/services

{
  "infer_type" : "real-time",
  "service_name" : "mnist",
  "description" : "mnist service",
  "config" : [ {
    "specification" : "modelarts.vm.cpu.2u",
    "weight" : 100,
    "model_id" : "0e07b41b-173e-42db-8c16-8e1b44cc0d44",
    "instance_count" : 1
  } ]
}
```

Creating a real-time service with multi-version traffic distribution

```
POST https://{endpoint}/v1/{project_id}/services

{
  "service_name" : "mnist",
  "description" : "mnist service",
  "infer_type" : "real-time",
  "config" : [ {
    "model_id" : "xxxmodel-idxxx",
    "weight" : "70",
    "specification" : "modelarts.vm.cpu.2u",
    "instance_count" : 1,
    "envs" : {
      "model_name" : "mxnet-model-1",
      "load_epoch" : "0"
    }
  }, {
    "model_id" : "xxxxxx",
    "weight" : "30",
    "specification" : "modelarts.vm.cpu.2u",
    "instance_count" : 1
  } ]
}
```

Creating a real-time service in a dedicated resource pool with custom specifications

```
POST https://{endpoint}/v1/{project_id}/services

{
  "service_name" : "realtime-demo",
  "description" : "",
  "infer_type" : "real-time",
  "cluster_id" : "8abf68a969c3cb3a0169c4acb24b0000",
  "config" : [ {
    "model_id" : "eb6a4a8c-5713-4a27-b8ed-c7e694499af5",
    "weight" : "100",
    "cluster_id" : "8abf68a969c3cb3a0169c4acb24b0000",
    "specification" : "custom",
    "custom_spec" : {
      "cpu" : 1.5,
      "memory" : 7500
    },
    "instance_count" : 1
  } ]
}
```

Creating a real-time service that stops automatically

```
POST https://{endpoint}/v1/{project_id}/services

{
  "service_name" : "service-demo",
  "description" : "demo",
  "infer_type" : "real-time",
  "config" : [ {
    "model_id" : "xxxmodel-idxxx",
    "weight" : "100",
    "specification" : "modelarts.vm.cpu.2u",
    "instance_count" : 1
  } ],
  "schedule" : [ {
    "type" : "stop",
    "time_unit" : "HOURS",
    "duration" : 1
  } ]
}
```

Creating a batch service with mapping_type set to file

```
POST https://{endpoint}/v1/{project_id}/services

{
  "service_name" : "batchservicetest",
  "description" : "",
  "infer_type" : "batch",
  "cluster_id" : "8abf68a969c3cb3a0169c4acb24b****",
  "config" : [ {
    "model_id" : "598b913a-af3e-41ba-a1b5-bf065320f1e2",
    "specification" : "modelarts.vm.cpu.2u",
    "instance_count" : 1,
    "src_path" : "https://infers-data.obs.xxxxx.com/xgboosterdata/",
    "dest_path" : "https://infers-data.obs.xxxxx.com/output/",
    "req_uri" : "/",
    "mapping_type" : "file"
  } ]
}
```

Creating a batch service with mapping_type set to csv

```
POST https://{endpoint}/v1/{project_id}/services

{
  "service_name" : "batchservicetest",
  "description" : "",
  "infer_type" : "batch",
  "config" : [ {
    "model_id" : "598b913a-af3e-41ba-a1b5-bf065320f1e2",
    "specification" : "modelarts.vm.cpu.2u",
    "instance_count" : 1,
    "src_path" : "https://infers-data.obs.xxxxx.com/xgboosterdata/",
    "dest_path" : "https://infers-data.obs.xxxxx.com/output/",
    "req_uri" : "/",
    "mapping_type" : "csv",
    "mapping_rule" : {
      "type" : "object",
      "properties" : {
        "data" : {
          "type" : "object",
          "properties" : {
            "req_data" : {
              "type" : "array",
              "items" : [ {
                "type" : "object",
                "properties" : {
                  "input5" : { "type" : "number", "index" : 0 },
                  "input4" : { "type" : "number", "index" : 1 },
                  "input3" : { "type" : "number", "index" : 2 },
                  "input2" : { "type" : "number", "index" : 3 },
                  "input1" : { "type" : "number", "index" : 4 }
                }
              } ]
            }
          }
        }
      }
    }
  } ]
}
```

Creating an edge service

```
POST https://{endpoint}/v1/{project_id}/services

{
  "service_name" : "service-edge-demo",
  "description" : "",
  "infer_type" : "edge",
  "config" : [ {
    "model_id" : "eb6a4a8c-5713-4a27-b8ed-c7e694499af5",
    "specification" : "custom",
    "instance_count" : 1,
    "custom_spec" : {
      "cpu" : 1.5,
      "memory" : 7500
    },
    "envs" : { },
    "nodes" : [ "2r8c4fb9-t497-40u3-89yf-skui77db0472" ]
  } ]
}
```
Example Responses
Status code: 200
Service deployed
```
{
  "service_id" : "10eb0091-887f-4839-9929-cbc884f1e20e",
  "resource_ids" : [ "INF-f878991839647358@1598319442708" ]
}
```
Status Codes
| Status Code | Description |
|---|---|
| 200 | Service deployed |
Error Codes
See Error Codes.