Updated on 2024-03-21 GMT+08:00

Testing an Inference Service

A real-time service accepts files, images, and JSON data for testing. Deploy a real-time service predictor before running the inference test.

Sample Code

In ModelArts notebook, you do not need to enter authentication parameters for session authentication. For details about session authentication of other development environments, see Session Authentication.

Scenario: Perform an inference test using the predictor in Deploying a Real-Time Service.
from modelarts.session import Session
from modelarts.model import Predictor

# In a ModelArts notebook, Session() needs no authentication parameters.
session = Session()

# Create a predictor bound to the deployed real-time service.
predictor_instance = Predictor(session, service_id="your_service_id")

# data: local file path (or dict for JSON); data_type: files, images, or JSON
data_path = "/home/ma-user/work/test.jpg"
data_type = "images"
predict_result = predictor_instance.predict(data=data_path, data_type=data_type)
print(predict_result)
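For JSON input, the data argument can also be a dict rather than a file path. The snippet below is a sketch using the sample dict from Table 1; it only builds and checks the payload, and the predict() call itself (which requires a deployed service) is shown commented out.

```python
import json

# Example dict payload for JSON inference (values from Table 1).
data = {
    "is_training": "False",
    "observations": [[1, 2, 3, 4]],
    "default_policy/eps:0": "0.0",
}

# Confirm the payload is JSON-serializable before calling predict().
payload = json.dumps(data)
print(payload)

# With a deployed service, the call would look like:
# predict_result = predictor_instance.predict(data=data, data_type="json")
```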

Parameters

Table 1 Parameters

data_type
  Mandatory: Yes
  Type: String
  Description: The following types are supported: files, images, and JSON.

data
  Mandatory: Yes
  Type: String or Dict
  Description:
  • For files or images, this parameter indicates the local path, for example:
    data = "/home/ma-user/work/test.jpg"
  • For JSON data, this parameter indicates the local path, for example:
    data = "/home/ma-user/work/test.json"
    It can also indicate a variable of the dict type, for example:
    data = {
      "is_training": "False",
      "observations": [[1,2,3,4]],
      "default_policy/eps:0": "0.0"
    }

path
  Mandatory: No
  Type: String
  Description: Internal inference path. Defaults to "/".
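Since data_type must match the data argument, it can be convenient to derive one from the other. The helper below is a hypothetical sketch, not part of the ModelArts SDK; it assumes lowercase type strings corresponding to the three types listed in Table 1.

```python
import os

# Hypothetical helper -- not part of the ModelArts SDK.
# Maps a data argument to one of the supported data_type values.
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".bmp"}

def infer_data_type(data):
    if isinstance(data, dict):
        return "json"                      # dict payloads are JSON data
    ext = os.path.splitext(data)[1].lower()
    if ext in IMAGE_EXTENSIONS:
        return "images"                    # image file
    if ext == ".json":
        return "json"                      # JSON stored in a local file
    return "files"                         # any other file
```

For example, infer_data_type("/home/ma-user/work/test.jpg") returns "images", while passing the dict from Table 1 returns "json".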

Table 2 predict response parameters

Response body
  Description: Output parameters and values returned by the model. The platform forwards them as-is and does not parse or interpret them.
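Because the platform forwards the response body without interpreting it, decoding the output is up to the caller. A minimal sketch, assuming the model emits a JSON string (the helper name is illustrative, not an SDK function):

```python
import json

def parse_prediction(response_body):
    # The platform forwards the body as-is; parsing is the caller's job.
    # Assumes the model emits JSON; returns the raw body otherwise.
    if isinstance(response_body, (dict, list)):
        return response_body               # already deserialized
    try:
        return json.loads(response_body)
    except (TypeError, ValueError):
        return response_body               # not JSON: return unchanged
```

For example, parse_prediction('{"predictions": [0.1, 0.9]}') yields the dict {"predictions": [0.1, 0.9]}, while a non-JSON body is returned unchanged.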