Why Is the Precision in the Evaluation Result Inconsistent with That Printed in the Log?
The possible causes are as follows:
- Some post-processing operations are omitted when the model output is obtained. As a result, the evaluation interface receives incorrect input.
- The prediction label indices do not match the labeled class indices. For example, label 0 in the prediction corresponds to label 1 in label_map_dict.
- Different thresholds, such as the IoU threshold and confidence threshold, are used for evaluation.
- Different models are used.
- Different evaluation data is used.
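A label-index mismatch of the kind described above can be illustrated with a minimal sketch. The names here (label_map_dict, model_class_order) are hypothetical placeholders, not an actual interface; the point is that remapping the model's raw indices to the dataset's label IDs changes the computed accuracy.

```python
# Hypothetical sketch of a label-index mismatch between model output
# and the dataset's label mapping.
label_map_dict = {0: "cat", 1: "dog"}   # assumed dataset label mapping
model_class_order = ["dog", "cat"]      # assumed model output order

# Build a remapping from the model's index to the dataset's label ID.
remap = {i: next(k for k, v in label_map_dict.items() if v == name)
         for i, name in enumerate(model_class_order)}

raw_preds = [0, 1, 0]   # model output indices: dog, cat, dog
truths = [1, 0, 1]      # dataset label IDs:    dog, cat, dog

# Accuracy without remapping is 0.0; with remapping it is 1.0.
acc_wrong = sum(p == t for p, t in zip(raw_preds, truths)) / len(truths)
acc_fixed = sum(remap[p] == t for p, t in zip(raw_preds, truths)) / len(truths)
print(acc_wrong, acc_fixed)  # 0.0 1.0
```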
Solution
Verify that the datasets, models, and processing methods used during training are consistent with those used during evaluation.
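The effect of inconsistent thresholds can be shown with a minimal sketch. The data and the precision_at helper below are hypothetical; they only demonstrate that evaluating the same detections at two different confidence thresholds yields different precision values, which is why thresholds must match across runs.

```python
# Hypothetical detections: (confidence score, whether the detection is correct).
detections = [(0.9, True), (0.7, True), (0.6, False), (0.4, True)]

def precision_at(threshold, dets):
    """Precision over detections whose score meets the confidence threshold."""
    kept = [ok for score, ok in dets if score >= threshold]
    return sum(kept) / len(kept) if kept else 0.0

# The same model and data give different precision at different thresholds.
print(precision_at(0.5, detections))  # 2 correct of 3 kept -> ~0.667
print(precision_at(0.8, detections))  # 1 correct of 1 kept -> 1.0
```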