Updated on 2022-08-18 GMT+08:00

How Do I Handle a Model Conversion Failure?

You can view the model conversion logs to locate the failure cause and rectify the fault accordingly.

Viewing Model Conversion Logs

  1. Log in to the Huawei HiLens console. In the navigation pane, choose Skill Development > Models. The Models page is displayed.

    If the model fails to be converted, its status is displayed as Conversion failed.

  2. Locate the failed model and click Details in the Operation column. The Model Details page is displayed.

    You can view the Basic Information and Log Information of the model, as shown in Figure 1.

    You can enter a keyword about the failed model in the search box of the Log Information area to quickly locate the failure cause.

    Figure 1 Model details

Resolving Model Conversion Failures

Solutions for common model conversion failures:

  • Check whether the uploaded model file is correct.

    Before importing a custom model, upload it to OBS. For a model that is not in .om format, the package must contain the Caffe model files (.caffemodel and .prototxt) and the configuration file (.cfg), or the TensorFlow model file (.pb) and the configuration file (.cfg). Prepare the .cfg configuration file based on the model file. For a quick pre-upload check of the package layout, see the package-check sketch after this list.

  • Check whether the model to be imported or converted stays within the TensorFlow or Caffe operator boundaries supported by .om models.

    Not all models can be converted successfully. Before importing and converting a model, check whether it uses only operators within the TensorFlow or Caffe operator boundaries supported by .om models. For details, see Caffe Operator Boundaries and TensorFlow Operator Boundaries.

  • Check whether the parameters are correctly set for model conversion.

    For details about the parameters, see Importing (Converting) Models. The parameters that may be incorrectly set are listed below.

    • Input Tensor Shape

      Mandatory. The shape of the input data, in NHWC format, for example, input_name:1,224,224,3. input_name must be the name of an input node in the network model before conversion. This parameter must be set when the model has a dynamic input shape. For example, in input_name1:?,h,w,c, the question mark (?) indicates the batch size, that is, the number of images processed at a time. Setting a fixed value here converts the original model with a dynamic shape into an offline model with a fixed shape. To look up the input node names and shapes of a frozen graph, see the frozen-graph inspection sketch after this list.

      Use commas to separate multiple inputs.

    • Type

      Select the model conversion type that matches the framework and export format of the imported model. A TensorFlow export sketch covering the saved_model and frozen_graph formats follows this list.

      • TF-FrozenGraph-To-Ascend-HiLens

        This template converts TensorFlow frozen_graph models into models that can run on Ascend chips. If the firmware version of your HiLens Kit system is 2.2.200.011 or later, or if you use HiLens Studio for debugging, you are advised to use this template for model conversion.

      • TF-SavedModel-To-Ascend-HiLens

        This template converts TensorFlow saved_model models into models that can run on Ascend chips. If the firmware version of your HiLens Kit system is 2.2.200.011 or later, or if you use HiLens Studio for debugging, you are advised to use this template for model conversion.

      • TF-FrozenGraph-To-Ascend

        This template converts TensorFlow frozen_graph models into models that can run on Ascend chips. If the firmware version of your HiLens Kit system is earlier than 2.2.200.011, you are advised to use this template for model conversion.

      • TF-SavedModel-To-Ascend

        This template converts TensorFlow saved_model models into models that can run on Ascend chips. If the firmware version of your HiLens Kit system is earlier than 2.2.200.011, you are advised to use this template for model conversion.

      • Caffe to Ascend

        This template converts Caffe models into models that can run on Ascend chips.

      • TF-FrozenGraph-To-Ascend-893

        This template converts TensorFlow frozen_graph models into models that can run on Ascend chips. If the firmware version of your HiLens Kit system is earlier than 2.2.200.011, you are advised to use this template for model conversion.

    • Import From

      For Ascend chip-supported models that were developed locally, or developed in ModelArts and converted on HiLens, set this parameter by referring to Model Input Directory Specifications.

      • The model input directory cannot contain multiple models.
      • The directory must contain the model file. Other files are optional.
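
The three sketches below are illustrative only. They assume a local Python 3 environment (with TensorFlow 2.x installed where noted) and use hypothetical paths, directory names, and models that you would replace with your own; they are not part of the Huawei HiLens tooling.

Package-check sketch: a minimal pre-upload check of a non-OM model package, assuming the hypothetical local directory ./my_model_package mirrors the package you will upload to OBS.

  import os

  # Hypothetical local directory that mirrors the OBS model package.
  MODEL_DIR = "./my_model_package"

  def check_model_package(model_dir):
      # Collect the file extensions present in the package directory.
      exts = {os.path.splitext(name)[1].lower() for name in os.listdir(model_dir)}

      has_caffe = ".caffemodel" in exts and ".prototxt" in exts
      has_tf = ".pb" in exts
      has_cfg = ".cfg" in exts

      if has_caffe and has_tf:
          print("Both Caffe and TensorFlow files found; keep only one model per package.")
      elif not (has_caffe or has_tf):
          print("No .caffemodel/.prototxt or .pb model file found.")
      elif not has_cfg:
          print("Model file found, but the .cfg configuration file is missing.")
      else:
          print("Package layout looks complete.")

  check_model_package(MODEL_DIR)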
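
Frozen-graph inspection sketch: to fill in Input Tensor Shape, you need the input node name and its NHWC dimensions. The sketch below assumes TensorFlow 2.x and a hypothetical frozen_graph file model.pb; it prints the placeholder (input) nodes of the graph, where a dimension of -1 indicates a dynamic size that must be given a fixed value during conversion.

  import tensorflow as tf

  # Hypothetical path to the frozen_graph model you plan to convert.
  PB_PATH = "./my_model_package/model.pb"

  graph_def = tf.compat.v1.GraphDef()
  with tf.io.gfile.GFile(PB_PATH, "rb") as f:
      graph_def.ParseFromString(f.read())

  # Placeholder nodes are the graph inputs; their names and shapes give
  # the input_name and NHWC dimensions for Input Tensor Shape.
  for node in graph_def.node:
      if node.op == "Placeholder":
          dims = [d.size for d in node.attr["shape"].shape.dim]
          print("input_name:", node.name, "shape:", dims)

For example, if the script prints input_name: images shape: [-1, 224, 224, 3], you would enter images:1,224,224,3 to fix the batch size at 1.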
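
TensorFlow export sketch: the conversion type must match how the model was exported. The sketch below assumes TensorFlow 2.x and uses MobileNetV2 only as a stand-in for your own Keras model; it writes both a SavedModel (for the TF-SavedModel-To-Ascend templates) and a frozen graph (for the TF-FrozenGraph-To-Ascend templates) under the illustrative ./export directory.

  import tensorflow as tf
  from tensorflow.python.framework.convert_to_constants import convert_variables_to_constants_v2

  # Stand-in Keras model; replace it with the model you actually trained.
  model = tf.keras.applications.MobileNetV2(weights=None, input_shape=(224, 224, 3))

  # SavedModel export, matching the TF-SavedModel-To-Ascend* templates.
  tf.saved_model.save(model, "./export/saved_model")

  # Frozen-graph export (.pb), matching the TF-FrozenGraph-To-Ascend* templates.
  concrete = tf.function(lambda x: model(x)).get_concrete_function(
      tf.TensorSpec(model.inputs[0].shape, model.inputs[0].dtype))
  frozen = convert_variables_to_constants_v2(concrete)
  tf.io.write_graph(frozen.graph.as_graph_def(), "./export", "frozen_graph.pb", as_text=False)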