Testing the Deployed Service
After an AI application is deployed as a real-time service, you can test it on the Prediction tab page of the service details page by entering JSON text or uploading files. Based on the input request (JSON text or file) defined by the AI application, the service can be tested in either of the following ways:
- JSON Text Prediction: If the input type of the AI application of the deployed service is JSON text, that is, the input does not contain files, you can enter the JSON text on the Prediction tab page for service testing.
- File Prediction: If the input type of the AI application of the deployed service is file, such as images, audio, or video, you can upload the files on the Prediction tab page for service testing.
- If the input type is image, the size of a single image must be less than 8 MB.
- The maximum size of the request body for JSON text prediction is 8 MB.
- Due to the limitation of API Gateway, the duration of a single prediction cannot exceed 40 seconds.
- The following image types are supported: png, psd, jpg, jpeg, bmp, gif, webp, svg, and tiff.
- If Ascend flavors are used during service deployment, transparent PNG images cannot be predicted because Ascend supports only RGB three-channel images.
- This function is used for commissioning. In actual production, you are advised to call the service APIs instead. Select Access Authenticated Using a Token, Access Authenticated Using an AK/SK, or Access Authenticated Using an Application based on the authentication mode, as in the sketch after this list.
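For reference, the following is a minimal sketch of calling a real-time service with token-based authentication. The endpoint URL, token, and payload schema below are placeholders and assumptions for this example; obtain the actual API URL from the Usage Guides tab page and the token from IAM.

```python
import requests

# Hypothetical values -- replace with your own service endpoint and token.
# The actual API URL is shown on the Usage Guides tab of the service details page.
API_URL = "https://<apig-endpoint>/v1/infers/<service-id>"  # placeholder
TOKEN = "<your-iam-token>"                                  # obtained from IAM

headers = {
    "Content-Type": "application/json",
    "X-Auth-Token": TOKEN,  # token-based authentication
}

# The request body must match the input defined by your AI application;
# this two-feature payload is an assumption for illustration only.
payload = {"data": {"req_data": [{"feature_1": 1.0, "feature_2": 2.0}]}}

# A single prediction must complete within 40 seconds (API Gateway limit).
resp = requests.post(API_URL, headers=headers, json=payload, timeout=40)
resp.raise_for_status()
print(resp.json())
```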
Input Parameters
After a service is deployed, obtain the input parameters of the service on the Usage Guides tab page of the service details page.
The input parameters displayed on the Usage Guides tab page vary depending on the AI application source that you select.
- If your meta model comes from ExeML or a built-in algorithm, the input and output parameters are defined by ModelArts. For details, see the Usage Guides tab page. On the Prediction tab page, enter the corresponding JSON text or file for service testing.
- If you use a custom meta model with inference code and a configuration file that you wrote yourself (see Specifications for Writing the Model Configuration File), ModelArts only renders the parameters you defined on the Usage Guides tab page. The following figure shows the mapping between the input parameters displayed on the Usage Guides tab page and the configuration file; a hedged sample configuration is sketched after this list.
Figure 2 Mapping between the configuration file and Usage Guides
- If your meta model is imported using a model template, the input and output parameters vary with the template. For details, see the description in Introduction to Model Templates.
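For orientation, here is a minimal, hypothetical sketch of the apis section of a model configuration file (config.json) for an image-input model. Field names such as images and the response properties are assumptions for this example; the authoritative schema is described in Specifications for Writing the Model Configuration File. The Usage Guides tab page renders the request and response schemas defined here.

```json
{
  "model_type": "TensorFlow",
  "apis": [
    {
      "url": "/",
      "method": "post",
      "request": {
        "Content-type": "multipart/form-data",
        "data": {
          "type": "object",
          "properties": {
            "images": { "type": "file" }
          }
        }
      },
      "response": {
        "Content-type": "application/json",
        "data": {
          "type": "object",
          "properties": {
            "predicted_label": { "type": "string" },
            "scores": { "type": "array" }
          }
        }
      }
    }
  ]
}
```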
JSON Text Prediction
- Log in to the ModelArts management console and choose Service Deployment > Real-Time Services.
- On the Real-Time Services page, click the name of the target service. The service details page is displayed. On the Prediction tab page, enter the JSON text of the inference request, and click Predict to perform prediction. A sample request body is shown below.
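For example, for a hypothetical tabular model with two numeric features, the JSON text entered on the Prediction tab page might look like the following. The req_data structure is an assumption for illustration; the actual schema is defined by your AI application and shown on the Usage Guides tab page.

```json
{
  "data": {
    "req_data": [
      {
        "feature_1": 1.0,
        "feature_2": 2.0
      }
    ]
  }
}
```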
File Prediction
- Log in to the ModelArts management console and choose Service Deployment > Real-Time Services.
- On the Real-Time Services page, click the name of the target service. The service details page is displayed. On the Prediction tab page, click Upload and select a test file. After the file is uploaded successfully, click Predict to perform a prediction test. As shown in Figure 3, the prediction result displays the label, position coordinates, and confidence score. If you prefer to call the API instead, a sketch of file-based prediction follows these steps.
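The following is a minimal sketch of file-based prediction over the API using multipart/form-data. The endpoint URL and token are placeholders, and the form field name ("images") is an assumption that must match the input name defined in your model configuration file.

```python
import requests

# Hypothetical values -- replace with your own service endpoint and token.
API_URL = "https://<apig-endpoint>/v1/infers/<service-id>"  # placeholder
TOKEN = "<your-iam-token>"

# Content-Type is set automatically by requests for multipart uploads.
headers = {"X-Auth-Token": TOKEN}

# "images" must match the input name in your model configuration file;
# a single image must be smaller than 8 MB.
with open("test.jpg", "rb") as f:
    files = {"images": ("test.jpg", f, "image/jpeg")}
    resp = requests.post(API_URL, headers=headers, files=files, timeout=40)

resp.raise_for_status()
print(resp.json())  # e.g., labels, position coordinates, confidence scores
```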