Updated on 2022-12-16 GMT+08:00

TensorFlow-based Image Classification Template

Introduction

AI engine: TensorFlow 1.8; environment: Python 2.7. Use this template to import a TensorFlow-based image classification model saved in SavedModel format. The template uses the built-in image processing mode of ModelArts; for details, see Built-in Image Processing Mode. Because the template feeds the model an image under the key images during inference, ensure that your model accepts input whose key is images. When importing a model with this template, select the model directory that contains the model files.
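Since the template always hands the model an input keyed images, the request payload can be pictured as follows. This is an illustrative sketch, not ModelArts code: build_inference_input is a hypothetical helper, and the byte string stands in for an uploaded image file.

```python
import io

def build_inference_input(image_bytes):
    """Wrap raw image bytes under the key "images", the only input
    key the template's image processing mode feeds to the model."""
    return {"images": io.BytesIO(image_bytes)}

# A model imported with this template must accept a dict shaped like this.
payload = build_inference_input(b"raw-jpeg-bytes")
```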

Template Input

The template input is the TensorFlow-based model package stored on OBS. Ensure that the OBS directory you use and ModelArts are in the same region. For details about model package requirements, see Model Package Example.

Input and Output Mode

The built-in image processing mode cannot be overridden; you cannot select another input and output mode when creating the model.

Model Package Specifications

  • The model package must be stored in an OBS folder named model. Model files and the model inference code file are stored in this folder.
  • The model inference code file is mandatory. The file name must be customize_service.py. Only one inference code file can exist in the model folder. For details about how to write model inference code, see Specifications for Writing Model Inference Code.
  • The structure of the model package imported using the template is as follows:
    model/
    ├── Model file                 // (Mandatory) The model file format varies according to the engine. For details, see the model package example.
    ├── Custom Python package      // (Optional) User's Python package, which can be directly referenced in the model inference code
    ├── customize_service.py       // (Mandatory) Model inference code file. The file name must be customize_service.py; otherwise, the code is not considered inference code.
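For orientation, a minimal customize_service.py typically subclasses the TensorFlow serving base service that ModelArts provides and overrides its preprocessing and postprocessing hooks. The sketch below assumes the TfServingBaseService import path and the _preprocess/_postprocess hook names from the ModelArts inference-code specification; the ImportError fallback stub exists only so the file can run outside ModelArts, and the hook bodies are placeholders rather than a complete implementation.

```python
try:
    # Import available inside the ModelArts inference environment.
    from model_service.tfserving_model_service import TfServingBaseService
except ImportError:
    # Fallback stub so the sketch can be exercised outside ModelArts.
    class TfServingBaseService(object):
        def __init__(self, model_name=None, model_path=None):
            self.model_name = model_name
            self.model_path = model_path


class ImageClassificationService(TfServingBaseService):
    def _preprocess(self, data):
        # "data" maps each input key to the uploaded files; this template
        # delivers the image under the key "images". Real code would
        # decode and resize each file to the shape the model expects.
        preprocessed = {}
        for key, files in data.items():
            if isinstance(files, dict):
                preprocessed[key] = list(files.values())
            else:
                preprocessed[key] = files
        return preprocessed

    def _postprocess(self, data):
        # Placeholder: return the model output unchanged. Real code
        # would map class indices to human-readable labels here.
        return data
```

Inside ModelArts, the real base class loads the SavedModel and runs the TensorFlow session; only the two hooks need customizing.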

Model Package Example

Structure of the TensorFlow-based model package

When publishing the model, you only need to specify the model directory.

OBS bucket/directory name
|── model                                     (Mandatory) The folder must be named model and stores model-related files.
    ├── <<Custom Python package>>             (Optional) User's Python package, which can be directly referenced in the model inference code
    ├── saved_model.pb                        (Mandatory) Protocol buffer file, which contains the graph definition of the model
    ├── variables                             (Mandatory) The folder must be named variables and stores the weights and biases of the model; it is required by the *.pb main file.
        ├── variables.index                   (Mandatory)
        ├── variables.data-00000-of-00001     (Mandatory)
    ├── customize_service.py                  (Mandatory) Model inference code file. The file must be named customize_service.py, and only one inference code file is allowed. The .py files that customize_service.py depends on can be placed directly in the model directory.
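Before uploading, the mandatory entries in the example layout can be checked with a short script. The helper below is not part of ModelArts; it is an illustrative sketch that simply verifies the files the example above marks as mandatory.

```python
import os

# Mandatory files from the example layout, relative to the model directory.
REQUIRED = [
    "saved_model.pb",
    "customize_service.py",
    os.path.join("variables", "variables.index"),
    os.path.join("variables", "variables.data-00000-of-00001"),
]

def missing_package_files(model_dir):
    """Return the mandatory entries missing from a model directory."""
    return [p for p in REQUIRED
            if not os.path.isfile(os.path.join(model_dir, p))]
```

An empty return value means the layout contains every mandatory file shown in the example.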