Example: Importing a Model Using a Custom Image
For AI engines that are not supported by ModelArts, you can package your own models in custom images and import them to ModelArts. This section describes how to import a model using a custom image.
Building an Image Locally
A Linux x86_64 host is used here. You can purchase an ECS with the same specifications or use an existing local host to build the custom image.
- Install Docker. For details, see the Docker documentation. For example, you can install Docker as follows:
curl -fsSL get.docker.com -o get-docker.sh
sh get-docker.sh
- Obtain the base image. Ubuntu 18.04 is used in this example.
docker pull ubuntu:18.04
- Create the self-define-images folder, and in it write the Dockerfile and the test_app.py application service code for the custom image. In this sample, the application service code uses the Flask framework.
The file structure is as follows:
self-define-images/
  --Dockerfile
  --test_app.py
- Dockerfile
FROM ubuntu:18.04
# Change the apt source to the HUAWEI CLOUD source and install Python, Python3-PIP, and Flask.
RUN cp -a /etc/apt/sources.list /etc/apt/sources.list.bak && \
    sed -i "s@http://.*security.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list && \
    sed -i "s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list && \
    apt-get update && \
    apt-get install -y python3 python3-pip && \
    pip3 install --trusted-host https://repo.huaweicloud.com -i https://repo.huaweicloud.com/repository/pypi/simple Flask
# Copy the application service code to the image.
COPY test_app.py /opt/test_app.py
# Specify the boot command of the image.
CMD python3 /opt/test_app.py
- test_app.py
from flask import Flask, request
import json

app = Flask(__name__)

@app.route('/greet', methods=['POST'])
def say_hello_func():
    print("----------- in hello func ----------")
    data = json.loads(request.get_data(as_text=True))
    print(data)
    username = data['name']
    rsp_msg = 'Hello, {}!'.format(username)
    return json.dumps({"response": rsp_msg}, indent=4)

@app.route('/goodbye', methods=['GET'])
def say_goodbye_func():
    print("----------- in goodbye func ----------")
    return '\nGoodbye!\n'

@app.route('/', methods=['POST'])
def default_func():
    print("----------- in default func ----------")
    data = json.loads(request.get_data(as_text=True))
    return '\n called default func !\n {} \n'.format(str(data))

# The host must be "0.0.0.0" and the port must be 8080.
if __name__ == '__main__':
    app.run(host="0.0.0.0", port=8080)
ModelArts forwards requests to port 8080 of the service started from the custom image. Therefore, the service listening port in the container must be port 8080. See the test_app.py file.
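Because the service must listen on port 8080, it can be useful to confirm locally that a server is actually accepting connections on that port before moving on. The helper below is a minimal sketch using only the Python standard library; the function name is_listening is our own, not part of any ModelArts or Docker tooling:

```python
import socket

def is_listening(host, port, timeout=1.0):
    """Return True if a TCP server accepts connections on host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers both "connection refused" and timeouts.
        return False

# Usage, after starting the container with `docker run -p 8080:8080 test:v1`:
#   is_listening("127.0.0.1", 8080)   # expected to return True
```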
- Go to the self-define-images folder and run the following command to create custom image test:v1:
docker build -t test:v1 .
- You can run docker images to view the custom image you have created.
Verifying the Image on the Local Host and Uploading the Image to SWR
- Run the following command in the local environment to start the custom image:
docker run -it -p 8080:8080 test:v1
Figure 1 Starting a custom image
- Open another terminal and run the following commands to verify the functions of the three APIs of the custom image:
curl -X POST -H "Content-Type: application/json" --data '{"name":"Tom"}' 127.0.0.1:8080/
curl -X POST -H "Content-Type: application/json" --data '{"name":"Tom"}' 127.0.0.1:8080/greet
curl -X GET 127.0.0.1:8080/goodbye
If information similar to the following is displayed, the function verification is successful.
Figure 2 API function verification
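If you prefer Python over curl, the same three checks can be scripted with the standard library. The sketch below builds the same requests the curl commands send; the helper names build_request and call are our own, and the call function only works while the container from the previous step is running on 127.0.0.1:8080:

```python
import json
import urllib.request

BASE = "http://127.0.0.1:8080"  # the port published by `docker run -p 8080:8080 test:v1`

def build_request(path, payload=None):
    """Build the same HTTP request that the curl commands above send."""
    if payload is None:
        return urllib.request.Request(BASE + path, method="GET")
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        BASE + path,
        data=data,
        method="POST",
        headers={"Content-Type": "application/json"},
    )

def call(path, payload=None):
    """Send the request; requires the container to be running."""
    with urllib.request.urlopen(build_request(path, payload)) as resp:
        return resp.read().decode("utf-8")

# Usage, once the container is up:
#   call("/", {"name": "Tom"})
#   call("/greet", {"name": "Tom"})   # greets the user by name
#   call("/goodbye")
```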
- Upload the custom image to SWR. For details, see the Software Repository for Container User Guide.
- After the custom image is uploaded, you can view the uploaded image on the My Images > Private Images page of the SWR console.
Figure 3 List of uploaded images
Importing a Model from the Container Image
- Meta Model Source: Select Container image.
- Container Image Path: Select the created private image.
Figure 4 Selecting the created private image
- Configuration File: Select Edit Online. For details about the configuration file requirements, see Specifications for Compiling the Model Configuration File. Click Save.
The configuration file is as follows:
{
    "model_algorithm": "test_001",
    "model_type": "Image",
    "apis": [{
            "protocol": "http",
            "url": "/",
            "method": "post",
            "request": {
                "Content-type": "application/json"
            },
            "response": {
                "Content-type": "application/json"
            }
        },
        {
            "protocol": "http",
            "url": "/greet",
            "method": "post",
            "request": {
                "Content-type": "application/json"
            },
            "response": {
                "Content-type": "application/json"
            }
        },
        {
            "protocol": "http",
            "url": "/goodbye",
            "method": "get",
            "request": {
                "Content-type": "application/json"
            },
            "response": {
                "Content-type": "application/json"
            }
        }
    ]
}
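As a quick sanity check before pasting the configuration into the online editor, you can verify its structure with a short script. This is a sketch using our own helper check_config, not a ModelArts tool, and it only checks the fields used in the configuration above:

```python
import json

# The model configuration from above, in compact form.
CONFIG = '''{"model_algorithm": "test_001", "model_type": "Image", "apis": [
 {"protocol": "http", "url": "/", "method": "post",
  "request": {"Content-type": "application/json"},
  "response": {"Content-type": "application/json"}},
 {"protocol": "http", "url": "/greet", "method": "post",
  "request": {"Content-type": "application/json"},
  "response": {"Content-type": "application/json"}},
 {"protocol": "http", "url": "/goodbye", "method": "get",
  "request": {"Content-type": "application/json"},
  "response": {"Content-type": "application/json"}}]}'''

def check_config(text):
    """Parse the model configuration and verify each API entry."""
    cfg = json.loads(text)
    assert cfg["model_type"] == "Image", "custom-image models use model_type 'Image'"
    for api in cfg["apis"]:
        for field in ("protocol", "url", "method"):
            assert field in api, "missing field: " + field
        assert api["method"] in ("get", "post"), "unexpected method"
    return [api["url"] for api in cfg["apis"]]

print(check_config(CONFIG))  # -> ['/', '/greet', '/goodbye']
```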
Deploying the Model as a Real-Time Service
- Deploy the model as a real-time service. For details, see Deploying a Model as a Real-Time Service.
- View the details about the real-time service.
Figure 5 Usage Guides
- Access the real-time service on the Predictions tab page.
Figure 6 Accessing a real-time service