Updated on 2024-10-29 GMT+08:00

Training an Object Detection Model

After labeling the images, perform auto training to obtain an appropriate model version.

Procedure

  1. On the new-version ExeML page, click the name of the target project. Then, click Instance Details in the labeling phase to label the data.
    Figure 1 Finding unlabeled data
  2. Return to the labeling phase of the new-version ExeML workflow, click Next, and wait until the workflow enters the training phase.
  3. Wait until the training is complete. No manual operation is required. If you close or exit the page, the system continues training until it is complete.
  4. In the object detection phase, wait until the training status changes from Running to Completed.
  5. After the training is complete, click the object detection phase to view the metrics. For details about the evaluation result parameters, see Table 1.
    Table 1 Evaluation result parameters

    Recall: Fraction of correctly predicted samples over all samples that actually belong to a class. It shows the ability of a model to identify positive samples.

    Precision: Fraction of correctly predicted samples over all samples predicted as a class. It shows the ability of a model to distinguish negative samples.

    Accuracy: Fraction of correctly predicted samples over all samples. It shows the general ability of a model to recognize samples.

    F1 Score: Harmonic mean of the precision and recall of a model. It is used to evaluate the quality of a model. A high F1 score indicates a good model.
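The metrics in Table 1 can all be derived from per-class confusion counts. A minimal sketch in Python (the function and variable names below are illustrative, not part of ModelArts):

```python
def evaluate(tp, fp, fn, tn):
    """Compute the Table 1 metrics from confusion counts for one class.

    tp/fp: samples correctly/incorrectly predicted as the class.
    fn/tn: samples incorrectly/correctly predicted as not the class.
    """
    precision = tp / (tp + fp)                   # correct over all predicted as the class
    recall = tp / (tp + fn)                      # correct over all actual members of the class
    accuracy = (tp + tn) / (tp + fp + fn + tn)   # correct over all samples
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean
    return {"precision": precision, "recall": recall,
            "accuracy": accuracy, "f1": f1}

# Example: 8 true positives, 2 false positives, 2 false negatives, 8 true negatives
print(evaluate(tp=8, fp=2, fn=2, tn=8))
```

Because the F1 score is a harmonic mean, it stays high only when precision and recall are both high, which is why it is used as a single-number summary of model quality.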

An ExeML project supports multiple rounds of training, and each round generates an AI application version. For example, the first training version is 0.0.1, and the next version is 0.0.2. The trained models can be managed by training version. After the trained model meets your requirements, deploy the model as a service.
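The version numbering described above (0.0.1 for the first round, 0.0.2 for the next, and so on) amounts to incrementing the last segment per training round. A small illustrative sketch (this helper is an assumption for clarity, not part of the ExeML API):

```python
def next_version(version: str) -> str:
    """Return the next training version by incrementing the last segment."""
    major, minor, patch = version.split(".")
    return f"{major}.{minor}.{int(patch) + 1}"

print(next_version("0.0.1"))  # → 0.0.2
```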