
Using a Built-in Algorithm for Object Detection (Ascend 310)

ModelArts provides the built-in algorithm yolov3_resnet18. With simple parameter configuration, you can train a model and quickly deploy it to Ascend 310 for high-performance inference.

  1. Preparing Data
  2. Creating a Training Job Using a Built-in Algorithm
  3. Converting a Model
  4. Importing a Model
  5. Deploying a Model as a Real-Time Service (Ascend 310)

Preparing Data

  1. Obtain the sample dataset. The dataset contains the train and test directories. The train directory contains image data and labeling information. Upload the data in the train directory to OBS.

    Because the dataset is large, you are advised to use OBS Browser+ to upload the files. To ensure model quality, upload all data in the train directory to OBS for model training.

  2. Log in to the ModelArts management console and choose Data Management > Datasets from the left navigation pane.
  3. On the Datasets page, click Create Dataset to create a dataset of the object detection type.
    Set Input Dataset Path to the OBS path where the data was stored in step 1. Set Output Dataset Path to an empty directory. The directory cannot be a subdirectory of the directory configured for Input Dataset Path.
    Figure 1 Creating a dataset
  4. After setting the parameters, click Create in the lower right corner of the page. The Datasets page is displayed.
  5. On the Datasets page, click the name of the created dataset. On the page that is displayed, click Label in the upper right corner. The dataset labeling page is displayed.
  6. Wait until the data is synchronized to ModelArts. The sample dataset is already labeled, so you do not need to label it again; you only need to wait for the data synchronization to complete.

    If all uploaded data is displayed on the Labeled tab page, the data has been synchronized.

    Figure 2 Data synchronized successfully
  7. Click Back to Dataset Dashboard in the upper left corner to return to the Dashboard tab page. Click Publish.
  8. In the dialog box that is displayed, set Version Name and Format. You can use the default values. In this example, the Training and Validation Ratios parameter is optional and can be left blank. Click OK.
  9. On the Datasets page, wait until the dataset is published. An icon indicating the running status is displayed before the name of the dataset being published. After the dataset is published, the Publish button in the Operation column becomes available.
    Wait until the dataset is successfully published. Then, you can proceed to train a model.
    Figure 3 Dataset being published
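Before creating the dataset, you can sanity-check the uploaded train directory locally. The sketch below assumes Pascal VOC-style labeling, with a .xml label file next to each .jpg image (an assumption; the sample dataset's actual label format may differ). It reports images that lack a matching label file.

```python
from pathlib import Path

def find_unlabeled_images(train_dir):
    """Return names of .jpg images in train_dir with no matching .xml label.

    Assumes Pascal VOC-style labeling: every image foo.jpg is paired with
    a label file foo.xml in the same directory (an assumption -- adjust
    the suffixes to match your dataset's actual layout).
    """
    train = Path(train_dir)
    images = sorted(train.glob("*.jpg"))
    return [p.name for p in images if not p.with_suffix(".xml").exists()]
```

For example, `find_unlabeled_images("train")` returns an empty list when every image has a label file, in which case the data is ready to upload.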

Creating a Training Job Using a Built-in Algorithm

ModelArts provides the built-in algorithm yolov3_resnet18. Models trained using this algorithm can be deployed to Ascend 310. First, you need to use this algorithm to create a training job and obtain a model.

  1. On the ModelArts management console, choose Training Management > Training Jobs.
  2. On the Training Jobs tab page, click Create. On the Create Training Job page that is displayed, set key parameters as follows:

    Algorithm Source: Select Built-in and then select yolov3_resnet18.

    Data Source: Select Dataset and then select the dataset created in Preparing Data and its version from the drop-down list box.

    Training Output Path: Select an empty OBS directory to store the trained model.

    Running Parameter: Retain the default values. For details about running parameters, see Running Parameters.

    Resource Pool: You are advised to select a GPU-based resource pool to create the training job.

    Figure 4 Creating a training job using a built-in algorithm
  3. After setting the parameters, click Next and create a training job as prompted.
  4. Wait until the training job is completed.

    A training job takes some time to run. The duration depends on the value of max_epochs and the dataset size: a larger max_epochs value or a larger dataset prolongs the run.

    Generally, for the sample dataset in this example, if max_epochs is set to 400 and a GPU-based resource is used, the job runs for about 4.5 hours.

    When the status of the training job changes to Successful, the training job is completed. You can click the name of the training job to go to the job details page and learn about the configurations, logs, resource usage, and evaluation result of the training job. You can also obtain the trained model from the OBS directory configured for Training Output Path.

    Figure 5 Training job run successfully

Converting a Model

Models trained using built-in algorithms must be converted into the OM format supported by Ascend chips before deployment.

  1. On the ModelArts management console, choose Model Management > Compression/Conversion.
  2. On the Compression/Conversion page, click Create Task. On the Create Task page that is displayed, set key parameters as follows:
    • Input Framework: Select TensorFlow.
    • Conversion Input Path: Select the <Output path>/V00X/frozen_graph directory under the training output path. Replace <Output path>/V00X with the actual path.
    • Output Framework: Select MindSpore.
    • Conversion Output Path: Select the <Output path>/V00X/om/model directory under the training output path. Replace <Output path>/V00X with the actual path.
    • Conversion Template: Select the TF-FrozenGraph-To-Ascend-C32 template for the model to be deployed to Ascend 310. For details about other conversion templates and their advanced parameters, see Conversion Templates.

      Advanced Settings: In this example, you can use the following parameter values directly. For details about how to modify the parameters, see Conversion Templates.

      In the input_shape value images:1,352,640,3, images indicates the model input node, 1 indicates the batch size, and 3 indicates the number of channels; these three values are fixed and cannot be modified. 352 and 640 are set based on the input_shape parameter of the training job. Separate the values with commas (,); spaces are not allowed. Retain the default values of the other advanced parameters.

      Figure 6 Converting a model
  3. After setting the parameters, click Next. The Compression/Conversion page is displayed.

    When the status of the model conversion task changes to Successful, the model has been converted to the OM format.

    Figure 7 Model converted successfully
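The input_shape value described above follows a simple node:dims pattern. As a sketch, the helper below (a hypothetical name, not part of any ModelArts SDK) checks such a string before you paste it into the conversion task, catching stray spaces and non-numeric dimensions.

```python
import re

def parse_input_shape(shape_str):
    """Parse a conversion input_shape string such as 'images:1,352,640,3'.

    Returns (node_name, [dims]). Raises ValueError on stray spaces or
    malformed values, since the console requires comma-separated numbers
    with no spaces.
    """
    if " " in shape_str:
        raise ValueError("input_shape must not contain spaces")
    m = re.fullmatch(r"([A-Za-z_]\w*):(\d+(?:,\d+)*)", shape_str)
    if not m:
        raise ValueError(f"malformed input_shape: {shape_str!r}")
    node = m.group(1)
    dims = [int(d) for d in m.group(2).split(",")]
    return node, dims

node, dims = parse_input_shape("images:1,352,640,3")
print(node, dims)  # images [1, 352, 640, 3]
```

If you change the training job's input_shape, validate the new string the same way before creating the conversion task.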

Importing a Model

After the model is converted to the OM format, you can import it into ModelArts using a model template.

  1. On the ModelArts management console, choose Model Management > Models.
  2. On the Models page, click Import. On the Import page, set key parameters as follows:
    • Meta Model Source: Select Template.
    • Model Template: In this example, the model needs to be deployed to Ascend 310. Therefore, select the ARM-Ascend template. Set Model Directory to the directory configured for Conversion Output Path in Converting a Model.
    • Input and Output Mode: Select Built-in object detection, because yolov3_resnet18 is an object detection algorithm.
    • Deployment Type: By default, Real-time services, Batch services, and Edge services are selected. Retain the default setting.
      Figure 8 Setting Meta Model Source to Template
  3. After setting the model import parameters, click Next. The Models page is displayed. Wait for the model import result.

    When the model status changes to Normal, the model is successfully imported.

    Figure 9 Model imported successfully

Deploying a Model as a Real-Time Service (Ascend 310)

After the model is imported, you can deploy it as a real-time service using the Ascend 310 resource.

This section describes how to deploy a model as a real-time service. The procedure for deploying a model as a batch service is similar. For details about how to deploy a batch service, see the ModelArts User Guide.

  1. Choose Model Management > Models > My Models, and click Deploy > Real-time Services in the Operation column.
    Figure 10 Deploying the model
  2. On the Deploy page, set key parameters as follows:

    Resource Pool: Select Public resource pools.

    Model and Configuration: Model and Version are set automatically. For Specifications, select the Ascend 310 resource. Currently, ARM: 3 vCPUs | 6 GiB, Ascend: 1 x Ascend 310 is available. Retain the default values of other parameters.

    Figure 11 Deploying the model as a real-time service
  3. After setting the parameters, click Next and deploy the real-time service as prompted.

    Go to the Real-Time Services page and wait until the service deployment is completed. When the service status changes to Running, the service is successfully deployed.

    Figure 12 Service in the Running status
  4. After the real-time service is deployed, click the service name to go to the service details page.
    • Access the service: On the Usage Guides tab, learn about the API usage and obtain the API URL. You can use Postman or run the curl command to send requests to the real-time service.
    • Predict: Click the Prediction tab and upload a test image for prediction.
    For more operations, see the ModelArts User Guide.
    Figure 13 Usage Guides
    Figure 14 Prediction
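To call the deployed service from code rather than Postman, you typically send a POST request carrying the image and an authentication token. The sketch below only assembles the request pieces; the API URL, the token, and the form field name ("images") are placeholders that you must replace with the values shown on the service's Usage Guides tab.

```python
def build_prediction_request(api_url, token, image_name, image_bytes,
                             field_name="images"):
    """Assemble the pieces of a prediction request for a real-time service.

    api_url, token, and field_name are placeholders -- take the real
    values from the service's Usage Guides tab. Returns (url, headers,
    files), suitable for requests.post(url, headers=headers, files=files).
    """
    headers = {"X-Auth-Token": token}
    files = {field_name: (image_name, image_bytes)}
    return api_url, headers, files

# Equivalent curl command (with the same placeholders):
#   curl -X POST -H "X-Auth-Token: $TOKEN" -F "images=@test.jpg" <API URL>
```

The actual field name and response schema depend on the service's input and output mode; check the Usage Guides tab before sending requests.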