
Building an Operator

Function Declaration

The operator building function is declared as follows:

Status BuildTeBinxx(const ge::Operator& op, TEBinInfo& te_bin_info)

In the preceding information:

  • BuildTeBinxx: function name, which is user-defined and must be unique
  • op: target operator. It is the operator data structure of the offline model supported by the Ascend AI processor and stores the operator information. For details about the Operator class, see Class Operator in GE API Reference.
  • te_bin_info: operator binary file path, operator description file path, and DDK version information. For details about the TEBinInfo struct, see TEBinBuildFn in Framework API Reference.
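
For orientation, the following is a minimal sketch of how such a build function can be structured. The function name BuildTeBinReduction is illustrative, and the SUCCESS return value is assumed to be the counterpart of the FAILED value used later in this section; each numbered step is expanded in Implementation Procedure below.

    Status BuildTeBinReduction(const ge::Operator& op, TEBinInfo& te_bin_info)
    {
        // 1. Obtain the operator attributes and the input tensor description from op.
        // 2. Specify the operator implementation file, implementation function, and kernel name.
        // 3. Set te_bin_info.json_file_path to the fixed ./kernel_meta path.
        // 4. Call te::BuildTeCustomOp to build the operator kernel.
        return SUCCESS;
    }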

Implementation Procedure

The operator building function is called by OMG during model conversion. Implement it as follows:

  • Obtain the operator tensor description and operator attributes. During model conversion, these must be fixed values so that the operator can be matched.

    For example, during model conversion, match the Reduction operator whose axis is 1 and whose input tensor shape has four dimensions.

    // Variables that receive the parsed attribute values (declared here so that the snippet is self-contained).
    std::string operation;
    int64_t axis = 0;
    float coeff = 0.0f;

    // Parse the operator attribute operation.
    ge::AttrValue operationAttrValue;
    if ((ge::GRAPH_SUCCESS != op.GetAttr("operation", operationAttrValue)) || (ge::GRAPH_SUCCESS != operationAttrValue.GetValue<AttrValue::STR>(operation)))
    {
        printf("GetOpAttr operation failed!\n");
    }

    // Parse the operator attribute axis. Adjust axis so that it points to the actual output dimension of the Softmax operator at the upper layer of the Reduction operator in the MyLeNet network, that is, axis 1.
    ge::AttrValue axisAttrValue;
    if ((ge::GRAPH_SUCCESS != op.GetAttr("axis", axisAttrValue)) || (ge::GRAPH_SUCCESS != axisAttrValue.GetValue<AttrValue::INT>(axis)))
    {
        printf("GetOpAttr axis failed!\n");
    }
    // In the OM model, all shapes are padded to four dimensions. A negative axis therefore has to be adjusted so that it still points to the original 2-D dimension.
    if (axis < 0)
    {
        axis -= 2;
    }

    // Parse the operator attribute coeff.
    ge::AttrValue coeffAttrValue;
    if ((ge::GRAPH_SUCCESS != op.GetAttr("coeff", coeffAttrValue)) || (ge::GRAPH_SUCCESS != coeffAttrValue.GetValue<AttrValue::FLOAT>(coeff)))
    {
        printf("GetOpAttr coeff failed!\n");
    }

    // Obtain the input tensor description of the operator.
    TensorDesc input_desc = op.GetInputDesc(0);

    // Parse the input shape and check whether the operator input has four dimensions.
    if (input_desc.GetShape().GetDimNum() != 4)
    {
        printf("The shape size is %d, which is not 4!\n", (int32_t)input_desc.GetShape().GetDimNum());
        return FAILED;
    }
  • Specify the operator implementation file, the operator implementation function, and the kernel name.
        std::string FilePath   = "project_path/operator/reduction"; // Absolute path of the operator implementation file plus the name of the operator .py file (without the extension)
        std::string FuncName   = "Reduction"; // Name of the operator implementation function in the operator implementation file
        std::string KernelName = "Reduction"; // kernel_name defined in the operator implementation function, that is, the name of the generated binary file
  • Specify the path of the operator description file (*.json) generated during operator compilation. Use the following fixed configuration.
      te_bin_info.json_file_path = "./kernel_meta/" + KernelName + ".json";

    During model conversion, operator information will be obtained from the operator description file in this path.

    When the omg model conversion command is executed, the kernel_meta folder generated during operator building is copied, based on the operator implementation path configured in FilePath, to the directory where the omg command is executed. Therefore, the path of the *.json file relative to the directory where the omg command is executed is always ./kernel_meta.

  • Call the te::BuildTeCustomOp function, which invokes the Python function in the operator implementation file to build the operator.

    Call the te::BuildTeCustomOp function as follows:

    te::BuildTeCustomOp(te_bin_info.ddk_version, op.GetName(), FilePath, FuncName,
                        "(i,i,i,i), s, i, s, f, s",
                        input_desc.GetShape().GetDim(0), input_desc.GetShape().GetDim(1),
                        input_desc.GetShape().GetDim(2), input_desc.GetShape().GetDim(3),
                        "float16", axis, operation.c_str(), coeff, KernelName.c_str());
    In the preceding information:
    • te_bin_info.ddk_version: DDK version information (unconfigurable), which will be automatically filled during model conversion
    • op.GetName(): operator name (unconfigurable)
    • FilePath: relative path of the operator implementation file
    • FuncName: name of the operator implementation function in the operator implementation file
    • (i,i,i,i), s, i, s, f, s: parameter placeholders for the implementation function in the operator implementation file, where i indicates the integer type, s indicates the string type, f indicates the single-precision floating-point type, and o indicates the PyObject* type. The placeholders must be consistent with the sequence and types of the arguments that follow, and with the definition of the operator implementation function in the operator implementation file. Based on these parameters, BuildTeCustomOp calls the operator implementation function and generates the kernel through the TVM mechanism.
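
    Read together with the call above, the placeholders map to the trailing arguments as follows. This mapping is a reading aid for the Reduction example in this section; the descriptions in the last column are inferred from the example and are not part of the interface definition.

      // Placeholder  Argument                                Meaning in this example
      // (i,i,i,i)    GetDim(0) .. GetDim(3) of the input     four integer dimensions of the input tensor shape
      // s            "float16"                               input data type passed to the implementation function
      // i            axis                                    reduction axis
      // s            operation.c_str()                       reduction operation
      // f            coeff                                   scaling coefficient
      // s            KernelName.c_str()                      kernel name, that is, the name of the generated binary file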