Model Evaluation API
Custom inference code can compute inference results and then call the analyse evaluation API according to the rules defined by ModelArts.
API Description (Universal API)
ModelArts provides the analyse API to save the inference result in a specified format. You need to call this API based on the following rules after the inference is complete.
analyse(task_type='',
        pred_list=[],
        label_list=[],
        name_list=[],
        custom_metric='',
        label_map_dict='')
| Parameter | Mandatory | Description |
|---|---|---|
| task_type | Yes | Job type. Available job types are image_classification (image classification) and image_object_detection (object detection). |
| pred_list | Yes | List of model prediction outputs. |
| label_list | Yes | List of all image labels. |
| name_list | Yes | OBS paths of all images. Use absolute paths. |
| custom_metric | No | Custom metric. |
| label_map_dict | No | Mapping from label index to label name, for example, {"0": "dog", "1": "cat", "2": "horse"}. If this parameter is not set, the system displays labels as {"0": "0", "1": "1", "2": "2", ...} by default. |
pred_list, label_list, and name_list must be Python list objects with the same length. The objects in the three lists must be in one-to-one mapping. For example, the first element of pred_list is the prediction result of the first image, the first element of label_list is the label of the first image, and the first element of name_list is the absolute path of the first image.
name_list holds the OBS paths where the images are stored. These paths are used for sensitivity analysis and for viewing inference results, so the list must stay aligned element-wise with pred_list and label_list. Example:
['obs://test/cat/xxx.jpg', ..., 'obs://test/dog/yyy.jpg']
The following is an example of pred_list in the evaluation code for an image classification model. Each element of pred_list is a one-dimensional NumPy ndarray or one-dimensional Python list whose length equals the number of classes, holding the confidence score of the image for each class.
[ [0.87, 0.11, 0.02], [0.1, 0.7, 0.2], [0.03, 0.04, 0.93], [0.25, 0.65, 0.1], [0.3, 0.34, 0.36] ]
The following is an example of label_list. Each element of label_list is an integer indicating the label class of the corresponding image.
[0, 1, 2, 1, 2]
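Putting the pieces together, a minimal sketch of an image-classification evaluation might look like the following. The OBS paths and label names are illustrative placeholders, and the analyse call is shown as a comment because its import path depends on the ModelArts evaluation environment:

```python
# Hedged sketch: assembling the three parallel lists required by analyse
# for an image-classification evaluation. Paths and labels are placeholders.
pred_list = [
    [0.87, 0.11, 0.02],
    [0.10, 0.70, 0.20],
    [0.03, 0.04, 0.93],
]
label_list = [0, 1, 2]
name_list = [
    'obs://test/cat/img_0.jpg',
    'obs://test/dog/img_1.jpg',
    'obs://test/horse/img_2.jpg',
]

# The three lists must be the same length and aligned element-wise.
assert len(pred_list) == len(label_list) == len(name_list)

# Predicted class index per image = position of the highest confidence score.
predicted = [scores.index(max(scores)) for scores in pred_list]

# In the evaluation environment, the call would then be:
# analyse(task_type='image_classification',
#         pred_list=pred_list, label_list=label_list, name_list=name_list,
#         label_map_dict={'0': 'cat', '1': 'dog', '2': 'horse'})
```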
In the evaluation code for image semantic segmentation, each element of pred_list holds the inferred class of every pixel in the image, and its shape matches the image size.
[
  [[0, 2, 1, 1, 0, 1, 2, ...],
   [0, 1, 0, 0, 0, 0, 0, ...],
   ...
   [2, 1, 1, 0, 0, 1, 2, ...]],
  ...
]
The elements of label_list are the label class of each pixel in the image. The shape is the same as the image size.
[
  [[1, 2, 0, 1, 0, 1, 2, ...],
   [0, 0, 0, 0, 0, 1, 0, ...],
   ...
   [2, 2, 1, 0, 0, 1, 2, ...]],
  ...
]
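As a hedged sketch of how such a per-pixel class map can be produced: the (H, W, num_classes) logits layout and the shape values below are assumptions for illustration, not part of the API.

```python
import numpy as np

# Assumed raw model output: per-pixel class scores of shape (H, W, num_classes).
h, w, num_classes = 4, 5, 3
logits = np.random.rand(h, w, num_classes)

# argmax over the class axis yields one class id per pixel,
# giving the (H, W) class map that pred_list expects.
pred_mask = np.argmax(logits, axis=-1)

# The matching label_list element is a ground-truth class map of the same shape.
label_mask = np.zeros((h, w), dtype=int)

pred_list = [pred_mask]
label_list = [label_mask]
```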
The following is an example of pred_list in the evaluation code for an object detection model. Each element is a Python list containing three items: the first is a two-dimensional array or NumPy ndarray of shape num x 4 (ymin, xmin, ymax, xmax), where num is the number of bounding boxes in the image; the second is a one-dimensional array or NumPy ndarray of length num holding the class of each bounding box; the third is a one-dimensional array or NumPy ndarray of length num holding the confidence score of each bounding box's class. That is, each element is [bounding box coordinates, bounding box classes, confidence scores].
[
[
[
[142.26546 , 172.09337 , 182.41393 , 206.43747 ],
[149.60696 , 232.63474 , 185.081 , 262.0958 ],
[151.28708 , 305.58755 , 186.05899 , 335.83026 ]
],
[1, 1, 1],
[0.999926 , 0.9999119 , 0.99985504]
],
[
[
[184.18466 , 100.23248 , 231.96555 , 147.65791 ],
[ 43.406055, 252.89429 , 84.62765 , 290.55862 ]
],
[3, 3],
[0.99985814, 0.99972576]
],
...
]
The following is an example of elements in label_list. Each element is a Python list containing two items: the first is a two-dimensional array or NumPy ndarray of shape num x 4 (ymin, xmin, ymax, xmax), where num is the number of bounding boxes in the image; the second is a one-dimensional array or NumPy ndarray of length num holding the class of each bounding box. That is, each element is [bounding box coordinates, bounding box classes].
[
  [
    [[182., 100., 229., 146.],
     [ 44., 250.,  83., 290.]],
    [3, 3]
  ],
  [
    [[148., 303., 191., 336.],
     [149., 231., 189., 262.],
     [141., 171., 184., 206.],
     [132., 344., 183., 387.],
     [144., 399., 189., 430.]],
    [1., 1., 1., 2., 4.]
  ],
  ...
]
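The structure above can be assembled as follows. This is a sketch with illustrative coordinates and scores, not output from a real model:

```python
import numpy as np

# Hedged sketch: one pred_list element and one label_list element for
# object detection. Boxes follow the (ymin, xmin, ymax, xmax) order.
pred_boxes = np.array([[142.2, 172.1, 182.4, 206.4],
                       [149.6, 232.6, 185.1, 262.1]])
pred_classes = np.array([1, 1])
pred_scores = np.array([0.9999, 0.9998])

# Each prediction is [boxes, classes, scores]; each label is [boxes, classes].
pred_item = [pred_boxes, pred_classes, pred_scores]
label_item = [np.array([[141., 171., 184., 206.],
                        [149., 231., 189., 262.]]),
              np.array([1, 1])]

# Within one image, lengths must agree: one class and one score per box.
assert pred_boxes.shape == (len(pred_classes), 4)
assert len(pred_classes) == len(pred_scores)
```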
API Description (Custom Evaluation Metrics)
To display custom evaluation metrics on the GUI, generate JSON content that follows the rules below and write it to the generated JSON file.
The following provides some display modes and corresponding JSON formats. You can combine multiple styles into a complete JSON format. For details, see JSON format consisting of multiple styles.
- Chart style
Figure 1 Example of the chart style
JSON format

Line chart:

'line_chart': {
    'name': {
        'x_axis_name': str,
        'y_axis_name': str,
        'x_axis_range': [x_min, x_min + x_step, x_min + 2*x_step, ..., x_max],
        'y_axis_range': [y_min, y_min + y_step, y_min + 2*y_step, ..., y_max],
        'curve': {
            'label_1': [(x0, y0), (x1, y1), ...],
            'label_2': [(x0, y0), (x1, y1), ...],
            ...
        }
    },
    ...
}

Pie chart: Values or percentages are displayed using a pie chart.

'pie_chart': {
    'name': {
        'label': [label_name_1, label_name_2, ...],
        'value': [value_1, value_2, ...]
    },
    ...
}

Column chart: Values are displayed using columns.

'column_chart': {
    'name': {
        'x_axis_name': str,
        'y_axis_name': str,
        'x_axis_range': [x_min, x_min + x_step, x_min + 2*x_step, ..., x_max],
        'y_axis_range': [y_min, y_min + y_step, y_min + 2*y_step, ..., y_max],
        'x_value': [value_1, value_2, ..., value_n+1],
        'y_value': [value_1, value_2, ..., value_n]
    },
    ...
}
- Table style

'table': {
    'name': {
        'top_left_cell': 'cell text',
        'row_labels': [name_1, name_2, ..., name_m],
        'col_labels': [name_1, name_2, ..., name_n],
        'cell_value': [[v11, v12, v13, ..., v1n], [...], ..., [vm1, vm2, ..., vmn]]
    },
    ...
}

- Image style
Figure 2 Example of the image style
JSON format
"get_negative_samples_cls": [
    {
        "labels": [
            {
                "name": "Class name",
                "type": 0,
                "property": {}
            }
        ],
        "predict_labels": [
            {
                "name": "Class name",
                "type": 0,
                "property": {}
            }
        ],
        "score": "0.424",
        "data_info": "/data/leedsbutterfly/images/0090180.png"
    }
]
- JSON format consisting of multiple styles
You can combine multiple styles into a complete JSON format. The following example shows the JSON format consisting of a line chart and a table. You need to enter the description and title.
{
    'zh-cn': {
        'op_name_1': {
            'title': '<Chinese title>',
            'description': '<Chinese description>',
            'value': {'key': v1, 'key': v2, 'key': v3, ...},
            'table': {
                'name': {
                    'top_left_cell': 'cell text',
                    'row_labels': [name_1, name_2, ..., name_m],
                    'col_labels': [name_1, name_2, ..., name_n],
                    'cell_value': [[v11, v12, v13, ..., v1n], [...], ..., [vm1, vm2, ..., vmn]]
                },
                ...
            },
            'line_chart': {
                'name': {
                    'x_axis_name': str,
                    'y_axis_name': str,
                    'x_axis_range': [x_min, x_min + x_step, x_min + 2*x_step, ..., x_max],
                    'y_axis_range': [y_min, y_min + y_step, y_min + 2*y_step, ..., y_max],
                    'curve': {
                        'label_1': [(x0, y0), (x1, y1), ...],
                        'label_2': [(x0, y0), (x1, y1), ...],
                        ...
                    }
                },
                ...
            }
        }
    }
}
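As a sketch of producing such a file: the 'en-us' language key below mirrors the 'zh-cn' structure shown above, and the operator name, metric values, and output file name are illustrative placeholders, not fixed by the API.

```python
import json

# Hedged sketch: assembling a multi-style custom-metric result (a table
# plus a line chart) and writing it to a JSON file for display.
result = {
    'en-us': {
        'op_name_1': {
            'title': 'Accuracy per class',
            'description': 'Accuracy of each class on the validation set',
            'table': {
                'accuracy_table': {
                    'top_left_cell': 'class',
                    'row_labels': ['cat', 'dog'],
                    'col_labels': ['accuracy'],
                    'cell_value': [[0.91], [0.88]],
                }
            },
            'line_chart': {
                'pr_curve': {
                    'x_axis_name': 'recall',
                    'y_axis_name': 'precision',
                    'x_axis_range': [0.0, 0.5, 1.0],
                    'y_axis_range': [0.0, 0.5, 1.0],
                    # Each curve is a list of (x, y) points.
                    'curve': {'cat': [(0.0, 1.0), (0.5, 0.9), (1.0, 0.7)]},
                }
            },
        }
    }
}

# Write the combined styles into one JSON file (placeholder file name).
with open('custom_metrics.json', 'w') as f:
    json.dump(result, f, indent=2)
```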