Example: Importing a Model Using a Custom Image

For AI engines that are not supported by ModelArts, you can package your models into custom images and import them to ModelArts. This section describes how to use a custom image to import a model.

Building an Image Locally

A Linux x86_64 host is used here. You can purchase an ECS with the same specifications or use an existing local host to create the custom image.

  1. Install Docker. For details, see Docker Documentation. You can install Docker as follows:
    curl -fsSL get.docker.com -o get-docker.sh
    sh get-docker.sh
  2. Obtain the base image. Ubuntu 18.04 is used in this example.
    docker pull ubuntu:18.04
  3. Create a folder named self-define-images, and in it write the Dockerfile and the application service code test_app.py for the custom image. In this sample, the application service code is based on the Flask framework.
    The file structure is as follows:
    self-define-images/
        --Dockerfile
        --test_app.py
    • Dockerfile
      FROM ubuntu:18.04
      # Change the apt source to the HUAWEI CLOUD source and install Python, Python3-PIP, and Flask.
      RUN cp -a /etc/apt/sources.list /etc/apt/sources.list.bak && \
        sed -i "s@http://.*security.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list && \
        sed -i "s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list && \
        apt-get update && \
        apt-get install -y python3 python3-pip && \
        pip3 install --trusted-host repo.huaweicloud.com -i https://repo.huaweicloud.com/repository/pypi/simple Flask
      
      # Copy the application service code to the image.
      COPY test_app.py /opt/test_app.py
      
      # Specify the boot command of the image.
      CMD python3  /opt/test_app.py
    • test_app.py
      from flask import Flask, request
      import json 
      app = Flask(__name__)
      
      @app.route('/greet', methods=['POST'])
      def say_hello_func():
          print("----------- in hello func ----------")
          data = json.loads(request.get_data(as_text=True))
          print(data)
          username = data['name']
          rsp_msg = 'Hello, {}!'.format(username)
          return json.dumps({"response":rsp_msg}, indent=4)
      
      @app.route('/goodbye', methods=['GET'])
      def say_goodbye_func():
          print("----------- in goodbye func ----------")
          return '\nGoodbye!\n'
      
      
      @app.route('/', methods=['POST'])
      def default_func():
          print("----------- in default func ----------")
          data = json.loads(request.get_data(as_text=True))
          return '\n called default func !\n {} \n'.format(str(data))
      
      # host must be "0.0.0.0", port must be 8080
      if __name__ == '__main__':
          app.run(host="0.0.0.0", port=8080)

      ModelArts forwards requests to port 8080 of the service started from the custom image. Therefore, the service in the container must listen on port 8080, as shown in test_app.py.

  4. Go to the self-define-images folder and run the following command to create custom image test:v1:
    docker build -t test:v1 .
  5. Run docker images to view the custom image you have created.

Verifying the Image on the Local Host and Uploading the Image to SWR

  1. Run the following command in the local environment to start the custom image:
    docker run -it -p 8080:8080 test:v1
    Figure 1 Starting a custom image
  2. Open another terminal and run the following commands to verify the functions of the three APIs of the custom image:
    curl -X POST -H "Content-Type: application/json" --data '{"name":"Tom"}'  127.0.0.1:8080/
    curl -X POST -H "Content-Type: application/json" --data '{"name":"Tom"}' 127.0.0.1:8080/greet
    curl -X GET 127.0.0.1:8080/goodbye

    If information similar to the following is displayed, the function verification is successful.

    Figure 2 API function verification
  3. Upload the custom image to SWR. For details, see Software Repository for Container User Guide.
  4. After the custom image is uploaded, you can view the uploaded image on the My Images > Private Images page of the SWR console.
    Figure 3 List of uploaded images
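The curl checks in step 2 can also be reproduced in Python. The sketch below uses only the standard library and stands in a local HTTP server for the container (with the same three routes as test_app.py), so it runs even without Docker; to verify the real image, point the requests at the container started in step 1 instead.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Stand-in for the container: the same three routes as test_app.py,
# implemented with the standard library so the check runs without Docker.
class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        data = json.loads(self.rfile.read(length))
        if self.path == "/greet":
            body = json.dumps({"response": "Hello, {}!".format(data["name"])}, indent=4)
        else:  # default route "/"
            body = "\n called default func !\n {} \n".format(data)
        self._reply(body)

    def do_GET(self):  # route "/goodbye"
        self._reply("\nGoodbye!\n")

    def _reply(self, body):
        payload = body.encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), Handler)  # port 0: pick any free port
threading.Thread(target=server.serve_forever, daemon=True).start()
base = "http://127.0.0.1:{}".format(server.server_address[1])

# Same request bodies and headers as the curl commands above.
req = urllib.request.Request(base + "/greet",
                             data=json.dumps({"name": "Tom"}).encode("utf-8"),
                             headers={"Content-Type": "application/json"})
greet_resp = json.loads(urllib.request.urlopen(req).read())["response"]
bye_resp = urllib.request.urlopen(base + "/goodbye").read().decode().strip()
print(greet_resp)  # Hello, Tom!
print(bye_resp)    # Goodbye!
server.shutdown()
```

To check the actual container, replace `base` with `"http://127.0.0.1:8080"` and drop the stand-in server.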

Importing a Model from the Container Image

For details, see Importing a Meta Model from a Container Image. Note the following parameters:
  • Meta Model Source: Select Container image.
  • Container Image Path: Select the created private image.
    Figure 4 Selecting the created private image
  • Configuration File: Select Edit Online. For details about the configuration file requirements, see Specifications for Compiling the Model Configuration File. Click Save.

    The configuration file is as follows:

    {
        "model_algorithm": "test_001",
        "model_type": "Image",
        "apis": [{
            "protocol": "http",
            "url": "/",
            "method": "post",
            "request": {
                "Content-type": "application/json"
            },
            "response": {
                "Content-type": "application/json"
            }
        },
        {
            "protocol": "http",
            "url": "/greet",
            "method": "post",
            "request": {
                "Content-type": "application/json"
            },
            "response": {
                "Content-type": "application/json"
            }
        },
        {
            "protocol": "http",
            "url": "/goodbye",
            "method": "get",
            "request": {
                "Content-type": "application/json"
            },
            "response": {
                "Content-type": "application/json"
            }
        }]
    }
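Before pasting the configuration into Edit Online, it can help to parse it locally as a quick syntax check and to confirm that the "apis" entries match the routes defined in test_app.py. A stdlib-only sketch (the request/response blocks are abbreviated to single lines, which is equivalent JSON):

```python
import json

# Parse the configuration file; json.loads raises on stray commas or quoting errors.
config = json.loads("""
{
    "model_algorithm": "test_001",
    "model_type": "Image",
    "apis": [{
        "protocol": "http",
        "url": "/",
        "method": "post",
        "request": {"Content-type": "application/json"},
        "response": {"Content-type": "application/json"}
    }, {
        "protocol": "http",
        "url": "/greet",
        "method": "post",
        "request": {"Content-type": "application/json"},
        "response": {"Content-type": "application/json"}
    }, {
        "protocol": "http",
        "url": "/goodbye",
        "method": "get",
        "request": {"Content-type": "application/json"},
        "response": {"Content-type": "application/json"}
    }]
}
""")

# Every route served by test_app.py should appear in "apis".
flask_routes = {("/", "post"), ("/greet", "post"), ("/goodbye", "get")}
declared = {(api["url"], api["method"]) for api in config["apis"]}
assert declared == flask_routes, "apis do not match the Flask routes"
print("model_type:", config["model_type"])  # model_type: Image
```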

Deploying the Model as a Real-Time Service

  1. Deploy the model as a real-time service. For details, see Deploying a Model as a Real-Time Service.
  2. View the details about the real-time service.
    Figure 5 Usage Guides
  3. Access the real-time service on the Predictions tab page.
    Figure 6 Accessing a real-time service
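Besides the Predictions tab, the real-time service can be called from code. The sketch below only builds the request; the endpoint URL and token are placeholders (take the real API URL from the service's Usage Guides tab), and it assumes token-based authentication via the X-Auth-Token header as used by Huawei Cloud APIs.

```python
import json
import urllib.request

# Placeholders: replace with the API URL shown on the Usage Guides tab
# and a valid IAM token for your account.
ENDPOINT = "https://<modelarts-endpoint>/v1/infers/<service-id>"
TOKEN = "<your-iam-token>"

def build_inference_request(url, token, payload):
    """Build the POST request the service expects on its /greet API."""
    return urllib.request.Request(
        url + "/greet",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json", "X-Auth-Token": token},
        method="POST",
    )

req = build_inference_request(ENDPOINT, TOKEN, {"name": "Tom"})
print(req.get_method(), req.full_url)
# Sending is omitted here because it needs a live service:
# response = urllib.request.urlopen(req)
```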