Updated on 2022-03-13 GMT+08:00

Registering an Operator

As the framework manager, Framework provides the REGISTER_CUSTOM_OP macro to register an operator based on the specified operator name.

The code of custom operator registration is as follows:

REGISTER_CUSTOM_OP("test_layer")
    .FrameworkType(CAFFE)
    .OriginOpType("Test")
    .ParseParamsFn(ParseParamsxx)
    .InferShapeAndTypeFn(InferShapeAndTypexx)
    .TEBinBuildFn(BuildTeBinxx)
    .ImplyType(ImplyType::TVM)
    .Formats({DOMI_TENSOR_NC1HWC0}, {DOMI_TENSOR_NC1HWC0})
    .WeightFormats({DOMI_TENSOR_FRACTAL_Z, DOMI_TENSOR_NC1HWC0});

In the preceding information:

  • REGISTER_CUSTOM_OP: Registers a custom operator. Replace test_layer with the operator name used in the offline model file. The name can be chosen freely but must not conflict with any existing operator name.
  • FrameworkType: The operator parameter parsing logic varies depending on the framework. Therefore, models under different frameworks require different plug-ins. The plug-in registration code must specify the model framework. Set this parameter to CAFFE.
  • OriginOpType: Operator type, which must be the same as the operator type defined in the Caffe prototxt file. Otherwise, parsing fails. The preset caffe.proto file is located at /include/inc/custom/proto/caffe/caffe.proto in the DDK installation path.
  • ParseParamsFn: Registers the model parsing function. ParseParamsxx has been implemented in Parsing an Operator. For a plug-in developed for the Caffe framework, this step is mandatory if the custom operator is not supported by the Ascend AI processor; if you are rewriting a built-in operator of the Ascend AI processor, skip this step.
  • InferShapeAndTypeFn: Registers the function for shape and type inference. InferShapeAndTypexx has been implemented in Inferring the Output Tensor Description of an Operator.
  • TEBinBuildFn: Registers the TBE operator building function. BuildTeBinxx has been implemented in Building an Operator.
  • ImplyType: Specifies the operator implementation. ImplyType::TVM indicates that the operator is a TE operator.
  • Formats: Specifies the layout formats of the input data and output data of the operator. The first list is the input data format list, and the second list is the output data format list. If there are multiple inputs, list the layout format of each input data in the first list. For example, if there are two pieces of input data in the NC1HWC0 format, call the Formats function as follows:
    .Formats({DOMI_TENSOR_NC1HWC0, DOMI_TENSOR_NC1HWC0}, {DOMI_TENSOR_NC1HWC0})

    For details, see Formats in Framework API Reference.

  • WeightFormats: Sets the layout formats of the operator weight data. For details about the supported data formats, see WeightFormats in Framework API Reference. For example, the data layout format of the convolution filter is FRACTAL_Z, and that of the bias is NC1HWC0.

    If quantization during model conversion is enabled, the constant formats for Framework processing need to be added to this API. Currently, Framework supports the following quantization operators: Conv, FC, and Depthwise Conv. If quantization is enabled for these operators during model conversion, you need to append six DOMI_TENSOR_NC1HWC0 entries to the end of the parameter list of the WeightFormats API. (During quantization, Framework adds six constants whose data layout format is NC1HWC0.) The following is a code sample of the WeightFormats API for the convolution operator with quantization enabled (filter, bias, and the six quantization constants, eight entries in total):

    .WeightFormats({DOMI_TENSOR_FRACTAL_Z, DOMI_TENSOR_NC1HWC0, DOMI_TENSOR_NC1HWC0, DOMI_TENSOR_NC1HWC0, DOMI_TENSOR_NC1HWC0, DOMI_TENSOR_NC1HWC0, DOMI_TENSOR_NC1HWC0, DOMI_TENSOR_NC1HWC0})
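
The chained calls in the registration code above follow a builder-style pattern: each setter records one property of the registration and returns a reference to the registration object, so calls can be strung together, and the registration itself runs at program load time through a static object. The following self-contained C++ sketch mimics that mechanism with a simplified mock; the names RegisterCustomOp, OpRegistration, and Registry are illustrative only and are not the actual DDK API.

```cpp
#include <map>
#include <string>
#include <utility>
#include <vector>

// Illustrative stand-in for the DDK's tensor format enum.
enum Format { TENSOR_NC1HWC0, TENSOR_FRACTAL_Z };

// Mock registration object: each setter stores a property and
// returns *this so the calls can be chained.
class OpRegistration {
public:
    explicit OpRegistration(std::string name) : name_(std::move(name)) {}

    OpRegistration& FrameworkType(std::string fw) {
        framework_ = std::move(fw);
        return *this;  // returning *this is what enables chaining
    }
    OpRegistration& OriginOpType(std::string type) {
        origin_type_ = std::move(type);
        return *this;
    }
    OpRegistration& WeightFormats(std::vector<Format> fmts) {
        weight_formats_ = std::move(fmts);
        return *this;
    }

    std::string name_, framework_, origin_type_;
    std::vector<Format> weight_formats_;
};

// Registry keyed by operator name; a real framework would consult it
// when parsing a model containing the registered operator.
std::map<std::string, OpRegistration>& Registry() {
    static std::map<std::string, OpRegistration> r;
    return r;
}

OpRegistration& RegisterCustomOp(const std::string& name) {
    return Registry().emplace(name, OpRegistration(name)).first->second;
}

// A REGISTER_CUSTOM_OP-style macro would expand to a static object
// like this one, so registration happens before main() runs.
static OpRegistration& g_test_layer_reg =
    RegisterCustomOp("test_layer")
        .FrameworkType("CAFFE")
        .OriginOpType("Test")
        .WeightFormats({TENSOR_FRACTAL_Z, TENSOR_NC1HWC0});
```

The key design point is that every setter returns `OpRegistration&`, which is why the real macro can be followed by an arbitrary dotted chain of configuration calls in a single statement.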