Deploying a Model as a Real-Time Service
After creating an AI application, you can deploy it as a real-time service and call the service for prediction.
Constraints
A user can create up to 20 real-time services.
Prerequisites
- An AI application in the Normal state is available in ModelArts.
- The account is not in arrears, ensuring that resources are available for the service to run.
Procedure
- Log in to the ModelArts console. In the navigation pane on the left, choose Model Deployment > Real-Time Services.
- In the real-time service list, click Deploy in the upper left corner.
- Configure parameters.
- Configure basic parameters. For details, see Table 1.
Table 1 Basic parameters

- Name: Name of the real-time service.
- Auto Stop: Time after which your service automatically stops running. This helps you avoid unnecessary billing. If you disable this feature, your real-time service keeps running and you are billed accordingly. By default, this feature is enabled and set to stop the service 1 hour after it starts. The options are 1 hour later, 2 hours later, 4 hours later, 6 hours later, and Custom. If you select Custom, you can enter any integer from 1 to 24 hours.
- Description: Brief description of the real-time service.
- Enter key information including the resource pool and AI application configurations. For details, see Table 2.
Table 2 Parameters

- Resource Pool
  - Public Resource Pool: CPU/GPU resource pools are available for selection. Pricing varies depending on the flavor. For details, see Product Pricing Details. Public resource pools support only the pay-per-use billing mode.
  - Dedicated Resource Pool: Select a dedicated resource pool flavor. Physical pools that contain logical subpools are currently not supported.
- AI Application and Configuration
  - AI Application Source: Choose My AI Applications or My Subscriptions as needed.
  - AI Application and Version: Select an AI application and version in the Normal state.
  - Traffic Ratio (%): Proportion of traffic allocated to the current AI application version. Service calling requests are routed to this version based on the proportion. If you deploy only one version of an AI application, set this parameter to 100%. If you select multiple versions for gray release, ensure that the traffic ratios of all selected versions add up to 100%.
  - Specifications: Select an available flavor from the list displayed on the console. Flavors shown in gray cannot be used in the current environment. If no public resource pool flavors are available, use a dedicated resource pool or contact the administrator to create a public resource pool.
    NOTE: Deploying the service with the selected flavor involves some system overhead, so the actual resources required are greater than what the flavor specifies.
  - Compute Nodes: Number of instances for the current AI application version. If you set this parameter to 1, the standalone computing mode is used. If you set it to a value greater than 1, the distributed computing mode is used. Select a computing mode based on your actual needs.
  - Environment Variable: Environment variables to set and inject into the pod. To ensure data security, do not enter sensitive information, such as plaintext passwords, in environment variables.
  - Timeout: Timeout of a single model, including both the deployment and startup time. The default value is 20 minutes. The value must range from 3 to 120 minutes.
  - Add AI Application Version and Configuration: If the selected AI application has multiple versions, you can add several of them and configure a traffic ratio for each. You can use gray release to smoothly upgrade the AI application version.
    NOTE: Free compute specifications do not support gray release of multiple versions.
  - Mount Storage: This parameter is displayed when a dedicated resource pool is used. It mounts a storage volume to the compute nodes (instances) as a local directory while the service is running, which is a good option when dealing with large input data or models. The storage volume type can be OBS parallel file system or SFS Turbo.
    - SFS Turbo
      - File System Name: Select the target SFS Turbo file system. A cross-region SFS Turbo file system cannot be selected.
      - Mount Path: Enter the mount path of the container, for example, /sfs-turbo-mount/. Select a new directory. If you select an existing directory, any existing files in it will be replaced.
    NOTE:
    - A file system can be mounted only once and to only one path. Each mount path must be unique. A maximum of 8 disks can be mounted to a training job.
    - Storage mounting is allowed only for services deployed in a dedicated resource pool that has been interconnected with a VPC or associated with SFS Turbo.
    - Interconnecting a VPC means connecting the VPC to which SFS Turbo belongs to the dedicated resource pool network. For details, see Interconnecting a VPC with a ModelArts Network.
    - You can associate HPC SFS Turbo file systems with dedicated resource pool networks.
    - If you need to mount multiple file systems, do not use identical or nested paths, for example, /obs-mount/ and /obs-mount/tmp/.
    - Once you have chosen SFS Turbo, do not delete the interconnected VPC or disassociate SFS Turbo. Otherwise, mounting will not be possible. When you mount the backend OBS storage on the SFS Turbo page, set the client's umask permission to 777 for normal use.
- Traffic Limit: Maximum number of times the service can be accessed within a second. Configure this parameter as needed.
- WebSocket: Whether to deploy the real-time service as a WebSocket service (a client sketch follows Figure 1). For details about WebSocket real-time services, see Full-Process Development of WebSocket Real-Time Services.
  NOTE:
  - This feature is supported only if the AI application is WebSocket-compliant and comes from a container image.
  - After this feature is enabled, Traffic Limit and Data Collection cannot be set.
  - This parameter cannot be changed after the service is deployed.
- Application Authentication: This feature is disabled by default. To enable it, see Accessing a Real-Time Service Through App Authentication and configure the parameters as required.
Figure 1 Setting AI application information
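If WebSocket is enabled, the deployed service is called over a WebSocket connection instead of plain HTTP requests. The following is a minimal client sketch, assuming the websocket-client Python package, token-based authentication, and a text-based message format; the endpoint URL and the request/response schema depend on your WebSocket-compliant AI application, so treat every value as a placeholder.

```python
import json
import websocket  # pip install websocket-client

# Placeholders; take the real endpoint from the service's Usage Guides tab.
WS_URL = "wss://<apig_endpoint>/v1/infers/<service_id>"
TOKEN = "<IAM_token>"

# Open the connection with the authentication header.
ws = websocket.create_connection(WS_URL, header={"X-Auth-Token": TOKEN})

# The message format is defined by your AI application; this is only an example.
ws.send(json.dumps({"data": "example request"}))
print(ws.recv())

ws.close()
```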
- (Optional) Configure advanced settings.
Table 3 Advanced settings

- Tags: ModelArts can work with Tag Management Service (TMS). When creating resource-consuming tasks in ModelArts, for example, training jobs, configure tags for these tasks so that ModelArts can use tags to manage resources by group. For details about how to use tags, see How Does ModelArts Use Tags to Manage Resources by Group?
  NOTE: You can select a predefined TMS tag from the tag drop-down list or customize a tag. Predefined tags are available to all service resources that support tags. Custom tags are available only to the service resources of the user who created the tags.
- After confirming the entered information, deploy the service as prompted. Deploying a service generally takes several minutes to tens of minutes, depending on the amount of your data and resources.
Once a real-time service is deployed, it will start immediately.
You can go to the real-time service list to check if the deployment is complete. Once the service status changes from Deploying to Running, the service is deployed.
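Deployment can also be done programmatically instead of through the console. The following is a minimal sketch, assuming the ModelArts v1 real-time service deployment REST endpoint, a valid IAM token, and example values for the region, project ID, model ID, and flavor; verify the request schema against the ModelArts API reference before relying on it.

```python
import requests

# Assumed values for illustration only; replace with your own.
REGION = "<region>"          # region hosting your ModelArts resources
PROJECT_ID = "<project_id>"
TOKEN = "<IAM_token>"        # obtained through token-based authentication
MODEL_ID = "<model_id>"      # ID of an AI application version in the Normal state

url = f"https://modelarts.{REGION}.myhuaweicloud.com/v1/{PROJECT_ID}/services"

payload = {
    "service_name": "demo-realtime-service",
    "infer_type": "real-time",                       # deploy as a real-time service
    "config": [
        {
            "model_id": MODEL_ID,
            "specification": "modelarts.vm.cpu.2u",  # flavor; must be available in your environment
            "instance_count": 1,                     # number of compute nodes
            "weight": 100,                           # traffic ratio (%); must sum to 100 across versions
        }
    ],
}

resp = requests.post(url, json=payload,
                     headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"})
resp.raise_for_status()
print(resp.json())  # the new service then appears in the list in the Deploying state
```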
Testing Real-Time Service Prediction
After an AI application is deployed as a real-time service, you can debug code or add files for testing on the Prediction tab. Depending on the input request defined by the AI application, you can test the service in either of two ways: with JSON text or with a file.
- JSON Text Prediction: If your AI application uses JSON text as input, paste the JSON text into the Prediction tab to test the service.
- File Prediction: If your AI application uses files as input, upload images, audio files, or videos on the Prediction tab to test the service.
- The size of an input image must be less than 8 MB.
- The maximum size of a request body for JSON text prediction is 8 MB.
- Due to the limitation of API Gateway, the duration of a single prediction cannot exceed 40s.
- The following image types are supported: png, psd, jpg, jpeg, bmp, gif, webp, svg, and tiff.
- If you use Ascend flavors for service deployment, you cannot predict transparent .png images because Ascend only supports RGB-3 images.
- This feature is used for commissioning. Use API calls for actual production; a token-based sketch follows this list. Depending on the authentication method, see Accessing a Real-Time Service Through Token-based Authentication, Accessing a Real-Time Service Through AK/SK-based Authentication, or Accessing a Real-Time Service Through App Authentication.
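As an example of API calling with token-based authentication, a JSON text prediction request can be sent as sketched below. This is a minimal sketch only: the API address must be taken from the Usage Guides tab of the service, and the request body fields are placeholders that depend on the input defined by your AI application.

```python
import requests

# Placeholders; take the real API address from the Usage Guides tab.
API_URL = "https://<apig_endpoint>/v1/infers/<service_id>"
TOKEN = "<IAM_token>"  # obtained through token-based authentication

# Example JSON request body; the actual fields are defined by your AI application.
payload = {"data": {"req_data": [{"feature_1": 1.0, "feature_2": 2.0}]}}

resp = requests.post(
    API_URL,
    json=payload,
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
    timeout=40,  # a single prediction cannot exceed 40s due to the API Gateway limit
)
resp.raise_for_status()
print(resp.json())
```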
After a service is deployed, obtain the input parameters of the service on the Usage Guides tab of the service details page.
The input parameters in the Usage Guides tab vary depending on the AI application source that you select.
- If your meta model comes from ExeML or a built-in algorithm, the input and output parameters are defined by ModelArts. For details, see the Usage Guides tab. In the Prediction tab, enter the corresponding JSON text or file for service testing.
- If you use a custom meta model and your own inference code and configuration file (see Specifications for Writing the Model Configuration File), the Usage Guides tab will only display your configuration file. The following figure shows the mapping between the input parameters in the Usage Guides tab and the configuration file.
Figure 3 Mapping between the configuration file and Usage Guides
The prediction methods for different input requests are as follows:
- JSON Text Prediction
- Log in to the ModelArts console and choose Model Deployment > Real-Time Services.
- Click the name of the target service to access its details page. Enter the JSON text on the Prediction tab and click Predict to perform prediction.
- File Prediction
- Log in to the ModelArts console and choose Model Deployment > Real-Time Services.
- Click the name of the target service to access its details page. On the Prediction tab, click Upload and select a test file. After the file is uploaded, click Predict to perform a prediction test. As shown in Figure 4, the label, position coordinates, and confidence score are displayed.
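File prediction can likewise be issued through the API for production use. The following is a minimal sketch, assuming token-based authentication and an image input; the form field name (images) is only an assumption and must match the input parameter defined in your model configuration file.

```python
import requests

API_URL = "https://<apig_endpoint>/v1/infers/<service_id>"  # from the Usage Guides tab
TOKEN = "<IAM_token>"

# The form field name ("images") must match the input defined by your model.
with open("test.jpg", "rb") as f:
    resp = requests.post(
        API_URL,
        files={"images": f},            # the input image must be smaller than 8 MB
        headers={"X-Auth-Token": TOKEN},
        timeout=40,
    )
resp.raise_for_status()
print(resp.json())  # typically returns labels, position coordinates, and confidence scores
```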
Using Cloud Shell to Debug a Real-Time Service Instance Container
You can use Cloud Shell provided by the ModelArts console to log in to the instance container of a running real-time service.
Constraints:
- Cloud Shell can only access a container when the associated real-time service is deployed in a dedicated resource pool.
- Cloud Shell can only access a container when the associated real-time service is running.
- Log in to the ModelArts console. In the navigation pane, choose Model Deployment > Real-Time Services.
- On the real-time service list page, click the name or ID of the target service.
- Click the Cloud Shell tab and select the target AI application version and compute node. When the connection status changes to the connected state, you have logged in to the instance container.
If the server disconnects due to an error or remains idle for 10 minutes, you can select Reconnect to regain access to the container instance.
If you encounter a path display issue when logging in to Cloud Shell, press Enter to resolve the problem.
Figure 5 Path display issue