Updated on 2022-03-13 GMT+08:00

Constraints and Parameters

Restrictions

Before model conversion, pay attention to the following restrictions:

  • Only a Caffe or TensorFlow model can be converted. For a Caffe model, the input data must be of the FLOAT type. For a TensorFlow model, the input data must be of INT32, BOOL, UINT8, or FLOAT type.
  • For a Caffe model, the op names and op types in the model file (.prototxt) and the weight file (.caffemodel) must be consistent (case-sensitive).
  • For a Caffe model, the top names of the layers must be unique, except for in-place layers whose top and bottom names are the same (such as BatchNorm, Scale, and ReLU).
  • For a TensorFlow model, only the FrozenGraphDef format is supported (see the freezing sketch after this list).
  • Inputs with dynamic shapes are not supported, for example, NHWC = [?, ?, ?, 3]. The dimension sizes must be static.
  • The input can be up to 4-dimensional. Operators involving dimension changes (such as Reshape and ExpandDims) cannot output five dimensions.
  • Except for the Const operator, the input and output tensors of the operators at all layers in the model must satisfy dim != 0 (that is, no dimension of size 0).
  • Model conversion does not support models that contain operators used for model training.
  • A UINT8 quantized model cannot be converted.
  • Model operators support only 2D convolution; 3D convolution is not supported.
  • Only the operators listed in Operator List are supported. The defined restrictions on the operators must be met.
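Because only frozen graphs are accepted, a TensorFlow checkpoint must first be frozen into a single FrozenGraphDef (.pb) file. The following is a minimal sketch using the freeze_graph tool bundled with TensorFlow 1.x; the file names and the output node name (logits) are illustrative placeholders:

  python -m tensorflow.python.tools.freeze_graph \
      --input_graph=graph.pbtxt \
      --input_checkpoint=model.ckpt \
      --output_node_names=logits \
      --output_graph=frozen_resnet18.pb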

Parameter Description

For each parameter below, the description is followed by whether the parameter is mandatory (as judged when mode is set to 0 or 3) and by its default value.

--mode

Operating mode

  • 0: Generate an offline model supported by the Ascend AI processor.
  • 1: Convert an offline model or model file to the JSON format.
  • 3: Perform a precheck only, to verify the validity of the model file.

Mandatory: No

Default value: 0
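For reference, assuming the converter binary is named omg (an assumption carried through the examples below; all paths are placeholders), the three modes might be invoked as follows:

  omg --mode=0 --framework=0 --model=resnet18.prototxt --weight=resnet18.caffemodel --output=out/caffe_resnet18
  omg --mode=1 --om=out/caffe_resnet18.om --json=out/caffe_resnet18.json
  omg --mode=3 --framework=0 --model=resnet18.prototxt --check_report=check_result.json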

--model

Path of the source model file

NOTE:

The path can contain uppercase letters, lowercase letters, digits, and underscores (_). The file name can contain uppercase letters, lowercase letters, digits, underscores (_), and periods (.).

Mandatory: Yes

Default value: N/A

--weight

Path of the weight file

This parameter needs to be specified when the source model framework is Caffe.

NOTE:

The path can contain uppercase letters, lowercase letters, digits, and underscores (_). The file name can contain uppercase letters, lowercase letters, digits, underscores (_), and periods (.).

Mandatory: No

Default value: N/A

--framework

Framework of the source model

  • 0: Caffe
  • 3: TensorFlow
    NOTE:
    • This parameter is not mandatory when mode is set to 1. If it is not set, the offline model is converted to the JSON format by default. If it is set, ensure that --om and --framework are consistent, for example:

      --framework=0 --om=/home/username/test/resnet18.prototxt

    • This parameter is mandatory when mode is set to 0 or 3.

Mandatory: Yes

Default value: N/A

--output

Path for storing the converted offline model (including the file name), for example, out/caffe_resnet18.

The converted offline model is automatically suffixed with .om.

NOTE:

The path can contain uppercase letters, lowercase letters, digits, and underscores (_). The file name can contain uppercase letters, lowercase letters, digits, underscores (_), and periods (.).

Mandatory: Yes

Default value: N/A
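Combining the parameters above, a hedged TensorFlow conversion sketch (the omg binary name and all paths are assumptions):

  omg --mode=0 --framework=3 --model=/home/username/test/frozen_resnet18.pb --output=/home/username/test/out/tf_resnet18

This would write /home/username/test/out/tf_resnet18.om.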

--encrypt_mode (reserved)

Encryption mode

  • 0: encrypted
  • -1: not encrypted

Mandatory: No

Default value: -1

--encrypt_key (reserved)

Path of the random number file used for encryption

This parameter is mandatory in encryption mode.

NOTE:
  • The path can contain uppercase letters, lowercase letters, digits, and underscores (_). The file name can contain uppercase letters, lowercase letters, digits, underscores (_), and periods (.).
  • During the test, you can run the openssl rand -out ek_key 32 command to generate a random number file. For commercial use, a random number file can be generated by using other tools as required.

Mandatory: No

Default value: N/A

--hardware_key (reserved)

Path of the encrypted ISV hardware key file

This parameter is mandatory in encryption mode.

NOTE:

The path can contain uppercase letters, lowercase letters, digits, and underscores (_). The file name can contain uppercase letters, lowercase letters, digits, underscores (_), and periods (.).

Mandatory: No

Default value: N/A

--certificate (reserved)

Path of the ISV certificate file used for encryption

This parameter is mandatory in encryption mode.

NOTE:

The path can contain uppercase letters, lowercase letters, digits, and underscores (_). The file name can contain uppercase letters, lowercase letters, digits, underscores (_), and periods (.).

Mandatory: No

Default value: N/A

--private_key (reserved)

Path of the ISV private key file used for encryption

This parameter is mandatory in encryption mode.

NOTE:

The path can contain uppercase letters, lowercase letters, digits, and underscores (_). The file name can contain uppercase letters, lowercase letters, digits, underscores (_), and periods (.).

Mandatory: No

Default value: N/A

--cal_conf

Path of the quantization configuration file

NOTE:
  • The path can contain uppercase letters, lowercase letters, digits, and underscores (_). The file name can contain uppercase letters, lowercase letters, digits, underscores (_), and periods (.).
  • The following is an example of the quantization configuration file:

    device: USE_CPU
    bin: 150
    type: JSD
    quantize_algo: NON_OFFSET
    inference_with_data_quantized: true
    inference_with_weight_quantized: true

  • For details about the quantization configuration file, see Quantization Configuration.

Mandatory: No

Default value: N/A
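Assuming a configuration file like the example above is saved as quant.cfg (a placeholder name), it might be passed during conversion as follows:

  omg --mode=0 --framework=0 --model=resnet18.prototxt --weight=resnet18.caffemodel --output=out/caffe_resnet18_quant --cal_conf=quant.cfg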

--check_report

Path of the precheck result file. If this path is not specified, the precheck result is saved in the current directory when model conversion fails or when mode is set to 3 (precheck only).

NOTE:

The path can contain uppercase letters, lowercase letters, digits, and underscores (_). The file name can contain uppercase letters, lowercase letters, digits, underscores (_), and periods (.).

Mandatory: No

Default value: check_result.json

--h or --help

Displays help information

Mandatory: No

Default value: N/A

--input_format

Input data format, either NCHW or NHWC

  • For TensorFlow, the default value is NHWC. To use the NCHW format, you need to specify NCHW.
  • For Caffe, only NCHW is supported.

Mandatory: No

Default value: N/A
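For example, to declare NCHW input for a TensorFlow model (a sketch; the file names are placeholders):

  omg --mode=0 --framework=3 --model=frozen_resnet18.pb --output=out/tf_resnet18 --input_format=NCHW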

--input_fp16_nodes

Name of the FP16 input node of the second network when networks are cascaded. This parameter is used together with --is_output_fp16.

Example: node_name1;node_name2

For example, if two networks net1 and net2 are cascaded and the output of net1 serves as the input of net2, this parameter specifies the names of the FP16 input nodes of net2.

Mandatory: No

Default value: N/A

--input_shape

Shape of the input data

Example: input_name1:n1,c1,h1,w1;input_name2:n2,c2,h2,w2

input_name must be a node name in the network model before conversion.

If the source model has a dynamic shape, for example, input_name1:?,h,w,c, this parameter is mandatory. Replace ? with the actual batch size; this converts the dynamic-shape source model into an offline model with a fixed shape.

Mandatory: No

Default value: N/A
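For example, to pin the dynamic batch dimension rejected by the restriction NHWC = [?, ?, ?, 3], the shape can be fixed at conversion time (node and file names are placeholders):

  omg --mode=0 --framework=3 --model=frozen_resnet18.pb --output=out/tf_resnet18 --input_shape=input:1,224,224,3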

--is_output_fp16

Whether the output data type of the first network is FP16 in the event of network cascade

Example: false,true,false,true

For example, if two networks net1 and net2 are cascaded and the output of net1 serves as the input of net2, this parameter sets the output data type of net1 to FP16.

Mandatory: No

Default value: false
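A sketch of the cascade described above, converting net1 and net2 separately (all names are placeholders):

  omg --mode=0 --framework=0 --model=net1.prototxt --weight=net1.caffemodel --output=out/net1 --is_output_fp16=true
  omg --mode=0 --framework=0 --model=net2.prototxt --weight=net2.caffemodel --output=out/net2 --input_fp16_nodes=data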

--json

Path of the .json file converted from the offline model

NOTE:

The path can contain uppercase letters, lowercase letters, digits, and underscores (_). The file name can contain uppercase letters, lowercase letters, digits, underscores (_), and periods (.).

Mandatory: No

Default value: N/A

--om

This parameter is mandatory when mode is set to 1.

Path of the offline model or model file to be converted to the JSON format, for example, /home/username/test/out/caffe_resnet18.om or /home/username/test/resnet18.prototxt.

NOTE:

The path can contain uppercase letters, lowercase letters, digits, and underscores (_). The file name can contain uppercase letters, lowercase letters, digits, underscores (_), and periods (.).

Mandatory: No

Default value: N/A
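For example, to dump an offline model to JSON for inspection (paths are placeholders):

  omg --mode=1 --om=/home/username/test/out/caffe_resnet18.om --json=/home/username/test/out/caffe_resnet18.json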

--op_name_map

Path of the operator mapping configuration file. This parameter must be specified when the DetectionOutput algorithm is used on the network.

For example, the DetectionOutput operator plays different roles in different networks. The configuration file can specify the mapping from DetectionOutput (of a Da Vinci model) to any of the following operators:

  • FSRDetectionOutput (of the Faster R-CNN network)
  • SSDDetectionOutput (of the SSD network)
  • RefinedetDetectionOutput (of the RefineDet network)
NOTE:
  • The path can contain uppercase letters, lowercase letters, digits, and underscores (_). The file name can contain uppercase letters, lowercase letters, digits, underscores (_), and periods (.).
  • The following is an example of the operator mapping configuration file:

    DetectionOutput: SSDDetectionOutput

Mandatory: No

Default value: N/A

--out_nodes

Output node

If the output node (operator name) is not specified, the output of the last operator layer serves as the model output by default. To check the parameters of a specific operator layer, specify the operator layer by using this parameter. After the model is converted, you can view the parameter information of the specified operator at the end of the .om model file or the JSON file converted from the .om model file.

Example: node_name1:0;node_name1:1;node_name2:0

node_name must be the node name in the network model before model conversion. The number after the colon indicates the output index. For example, node_name1:0 indicates output 0 of the node named node_name1.

Mandatory: No

Default value: N/A
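For example, to additionally expose two intermediate layers as model outputs (node names and paths are placeholders; the quotes keep the shell from interpreting the semicolon):

  omg --mode=0 --framework=3 --model=frozen_resnet18.pb --output=out/tf_resnet18 --out_nodes="node_name1:0;node_name2:0"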

--plugin_path

Paths of the custom operator plug-ins

Example: /home/a1/b1;/home/a2/b2;/home/a3/b3

NOTE:

Separate multiple paths with semicolons (;). A semicolon (;) must not appear within a path itself; otherwise, path parsing fails.

Mandatory: No

Default value: ./plugin

--target

This parameter can only be set to mini.

mini: In quantization, the eltwise operator supports dual outputs, the roipooling operator supports INT8 output, and the conv operator supports hybrid precision.

Mandatory: No

Default value: mini

--ddk_version

Version of the DDK environment to be matched for running a custom operator

Mandatory: No

Default value: N/A

--net_format

Preferred data format for network operators: ND (N ≤ 4) or 5D. This parameter is valid only when the input data of the operator on the network supports both the ND and 5D formats.

  • ND: The operators in the model are converted into the common format NCHW.
  • 5D: The operators in the model are converted into the Huawei-developed 5D format.

Mandatory: No

Default value: N/A

--insert_op_conf

Configuration file of the preprocessing operator, for example, the AIPP operator

NOTE:
  • The path can contain uppercase letters, lowercase letters, digits, and underscores (_). The file name can contain uppercase letters, lowercase letters, digits, underscores (_), and periods (.).
  • The following is an example of the configuration file:

    aipp_op {
        aipp_mode: static
        input_format: YUV420SP_U8
        csc_switch: true
        var_reci_chn_0: 0.00392157
        var_reci_chn_1: 0.00392157
        var_reci_chn_2: 0.00392157
    }

  • For details about the AIPP configuration file, see AIPP Configuration.

Mandatory: No

Default value: N/A
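Assuming the AIPP example above is saved as aipp.cfg (a placeholder name), it might be attached during conversion as follows. Note that var_reci_chn_* = 0.00392157 is 1/255, which rescales 8-bit pixel values to [0, 1]:

  omg --mode=0 --framework=0 --model=resnet18.prototxt --weight=resnet18.caffemodel --output=out/caffe_resnet18_aipp --insert_op_conf=aipp.cfg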

--fp16_high_prec

Whether to generate a high-accuracy FP16 Da Vinci model.

  • 0 (default): Generate a common FP16 Da Vinci model, which shows better inference performance.
  • 1: Generate a high-accuracy FP16 Da Vinci model, which offers higher inference accuracy.

High-accuracy model generation supports only the following operators: Convolution, Pooling, and FullConnection of Caffe, and tf.nn.conv2d and tf.nn.max_pool of TensorFlow.

Mandatory: No

Default value: 0

--output_type

Network output data type

  • FP32 (default): It is recommended for classification and detection networks.
  • UINT8: It is recommended for image super-resolution networks for better inference performance.

Mandatory: No

Default value: FP32

--enable_l2dynamic

Whether to enable L2 dynamic optimization. This parameter may affect the inference performance of the network model. If the performance does not meet requirements, you can disable this switch to verify its impact.

  • true: L2 dynamic optimization enabled
  • false: L2 dynamic optimization disabled

Mandatory: No

Default value: true