Converting an Offline Model
The Ascend 310 chip accelerates inference for models trained under the Caffe and TensorFlow frameworks. During model conversion, operator scheduling tuning, weight data rearrangement, quantization compression, and memory usage tuning can be applied, and model preprocessing can be completed without using the device. After model training is complete, convert the trained model to the offline model file (.om file) supported by the Ascend 310, compile the service code, and call the APIs provided by the Matrix framework to implement the service functions.
The offline model generator (OMG) is a CLI tool stored in the DDK under the <$DDK_HOME>/uihost/bin/omg directory (run it with the -h option to list its parameters). It converts models trained under the Caffe and TensorFlow frameworks into .om files supported by the Ascend 310. For details about how to use the OMG tool, see "Model Conversion Using OMG" in the Model Conversion Guide.
- Caffe model conversion:
#omg --framework 0 --model <model.prototxt> --weight <model.caffemodel> --output <output_name> --insert_op_conf <aipp.cfg>
- TensorFlow model conversion:
#omg --framework 3 --model <model.pb> --input_shape "input_name:1,112,112,3" --output <output_name> --insert_op_conf <aipp.cfg>
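The two invocations differ only in the --framework value and its framework-specific options (--weight for Caffe, --input_shape for TensorFlow). As an illustration only (this helper is not part of the DDK), the argument list could be assembled like this:

```python
# Hypothetical helper (not part of the DDK): assemble the OMG argument list
# for a Caffe (framework=0) or TensorFlow (framework=3) conversion.

def build_omg_command(framework, model, output, weight=None,
                      input_shape=None, insert_op_conf=None):
    """Return the omg command as a list suitable for subprocess.run()."""
    cmd = ["omg", "--framework", str(framework), "--model", model,
           "--output", output]
    if framework == 0 and weight:        # Caffe also needs a weight file
        cmd += ["--weight", weight]
    if framework == 3 and input_shape:   # TensorFlow also needs the input shape
        cmd += ["--input_shape", input_shape]
    if insert_op_conf:                   # optional AIPP configuration file
        cmd += ["--insert_op_conf", insert_op_conf]
    return cmd

# Example: TensorFlow conversion mirroring the command above.
print(" ".join(build_omg_command(
    3, "model.pb", "resnet50",
    input_shape="input_name:1,112,112,3",
    insert_op_conf="aipp.cfg")))
```

The returned list can be passed to subprocess.run() on the host where the DDK is installed.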
Table 1 Parameter description

| Parameter | Description |
| --- | --- |
| framework | Source framework of the model: 0 indicates a Caffe model, 3 indicates a TensorFlow model. |
| model | Specifies the model file. |
| weight | Specifies the weight file (Caffe models only). |
| output | Specifies the name of the output .om file. |
| input_shape | Specifies the name and shape of the input layer. For TensorFlow models the format is input_layer_name:n,h,w,c. |
| insert_op_conf | Specifies an AIPP configuration file. |
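The AIPP configuration file passed via --insert_op_conf is a text file that describes the image preprocessing (format conversion, cropping, normalization) inserted before the model input. A minimal sketch is shown below; the field values are assumptions for illustration and must be adapted to the actual model input (see the Model Conversion Guide for the full field list):

```
aipp_op {
    aipp_mode : static          # preprocessing parameters fixed at conversion time
    input_format : YUV420SP_U8  # raw image format delivered to the model
    src_image_size_w : 112      # source image width  (assumed; match your input)
    src_image_size_h : 112      # source image height (assumed; match your input)
    csc_switch : true           # enable color space conversion (YUV to RGB)
}
```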