
Creating a Custom Image in a Notebook Instance Using the Image Saving Function

Scenario

This section describes how to import a local model package into a ModelArts notebook instance, debug the model, save the instance as a custom image, and then deploy that image for inference.

Procedure:

  1. Step 1: Copying a Model Package in a Notebook Instance
  2. Step 2: Debugging a Model in a Notebook Instance
  3. Step 3: Saving an Image in a Notebook Instance
  4. Step 4: Using the Saved Image for Inference Deployment

Step 1: Copying a Model Package in a Notebook Instance

  1. Log in to the ModelArts management console. In the navigation pane on the left, choose Development Workspace > Notebook.
  2. Click Create Notebook in the upper right corner. Configure the parameters on the displayed page.
    1. Configure basic information of the notebook instance, including its name, description, and automatic stop setting.
    2. Select an image and configure resource specifications for the instance.
      • Image: Select the pytorch1.8-cuda10.2-cudnn7-ubuntu18.04 image. For details about the image, see Engine Version 1: pytorch_1.8.0-cuda_10.2-py_3.7-ubuntu_18.04-x86_64.
      • Resource Type: Select a public resource pool or a dedicated resource pool. A public resource pool is used as an example.
      • Type: GPU is recommended.
      • Flavor: GP Tnt004 is recommended.
  3. Click Next. Confirm the information and click Submit.

    Switch to the notebook instance list. The notebook instance is being created. It will take several minutes.

  4. Wait until the notebook status changes to Running. Then, locate the notebook in the list and click Open in the Operation column. The JupyterLab Launcher page is displayed.
    Figure 1 JupyterLab Launcher
  5. Click the upload icon to upload the model package file to the notebook instance. The default working directory of the instance is /home/ma-user/work/. For details about preparing the model package file, see Sample Model Package File.
    Figure 2 Uploading a model package
  6. Open a Terminal, decompress model.zip, and delete the ZIP file.
    # Decompress the ZIP file.
    unzip model.zip
    # Delete the ZIP file after decompression.
    rm model.zip
    Figure 3 Decompressing model.zip on the Terminal
  7. In the Terminal tab, run the following command to copy the model files to the inference directory:
    cp -rf model/* /home/ma-user/infer/model/1

Check whether the model files have been copied:

    cd /home/ma-user/infer/model/1
    ll
    Figure 4 Model files copied

Sample Model Package File

You must prepare the model files in the model package file model.zip yourself. The following uses a handwritten digit recognition model as an example.

The inference script file customize_service.py must be available in the model directory for model pre-processing and post-processing.

Figure 5 Model directory of the inference model
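
For reference, a SavedModel-based package like this one typically contains the following files (a sketch; everything except customize_service.py depends on your model format):

    model/
    ├── customize_service.py               # Inference script for pre- and post-processing (required)
    ├── saved_model.pb                     # TensorFlow SavedModel graph
    └── variables/                         # Model weights
        ├── variables.data-00000-of-00001
        └── variables.index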

For details about the inference script customize_service.py, see Specifications for Writing a Model Inference Code File customize_service.py.

The content of the customize_service.py file used in this case is as follows:

import logging
import threading

import numpy as np
import tensorflow as tf
from PIL import Image

from model_service.tfserving_model_service import TfServingBaseService


class mnist_service(TfServingBaseService):

    def __init__(self, model_name, model_path):
        self.model_name = model_name
        self.model_path = model_path
        self.model = None
        self.predict = None

        # Load the model in saved_model format in non-blocking mode to prevent blocking timeout.
        thread = threading.Thread(target=self.load_model)
        thread.start()

    def load_model(self):
        # Load the model in saved_model format.
        self.model = tf.saved_model.load(self.model_path)

        signature_defs = self.model.signatures.keys()

        signature = []
        # only one signature allowed
        for signature_def in signature_defs:
            signature.append(signature_def)

        if len(signature) == 1:
            model_signature = signature[0]
        else:
            logging.warning("signatures more than one, use serving_default signature from %s", signature)
            model_signature = tf.saved_model.DEFAULT_SERVING_SIGNATURE_DEF_KEY

        self.predict = self.model.signatures[model_signature]

    def _preprocess(self, data):
        # 'data' maps each request field to its uploaded files:
        # {field_name: {file_name: file_content}}
        images = []
        for k, v in data.items():
            for file_name, file_content in v.items():
                image1 = Image.open(file_content)
                image1 = np.array(image1, dtype=np.float32)
                # Reshape in place to the 28 x 28 x 1 input expected by the model.
                image1.resize((28, 28, 1))
                images.append(image1)

        # Batch the images into a single float32 tensor.
        images = tf.convert_to_tensor(images, dtype=tf.dtypes.float32)
        preprocessed_data = images

        return preprocessed_data

    def _inference(self, data):
        # Run the loaded signature function on the preprocessed batch.
        return self.predict(data)

    def _postprocess(self, data):
        # Return the index of the highest score as the predicted digit.
        return {
            "result": int(data["output"].numpy()[0].argmax())
        }
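
For context, the serving framework calls the three hooks above in order for every request. The following is a simplified, hypothetical walk-through of that pipeline for local experimentation; in production, TfServingBaseService drives the class itself, and the paths below are examples only:

# Hypothetical local walk-through of the request pipeline.
service = mnist_service("mnist", "/home/ma-user/infer/model/1")
service.load_model()  # Load synchronously here instead of waiting for the background thread.

with open("/home/ma-user/work/test.png", "rb") as f:
    request_data = {"images": {"test.png": f}}
    inputs = service._preprocess(request_data)
    outputs = service._inference(inputs)
    print(service._postprocess(outputs))  # For example: {"result": 3}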

Step 2: Debugging a Model in a Notebook Instance

  1. In a new Terminal tab, go to the /home/ma-user/infer/ directory and run the run.sh script to start the model service for prediction. In a base image, run.sh is used as the boot script by default. The start command is as follows:
    sh run.sh
    Figure 6 Running the boot script
  2. Upload an image with a handwritten digit to the notebook instance for prediction.
    Figure 7 Handwritten digit
    Figure 8 Uploading an image for prediction
  3. Open a new Terminal and run the following command for prediction:
    curl -kv -F 'images=@/home/ma-user/work/test.png' -X POST http://127.0.0.1:8080/
    Figure 9 Prediction
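
    If you prefer Python over curl, the same prediction request can be sent with the requests library (a minimal sketch; the service listens on port 8080, as started by run.sh):

    import requests

    # Send the handwritten-digit image as multipart/form-data,
    # matching the curl command above.
    with open("/home/ma-user/work/test.png", "rb") as f:
        resp = requests.post("http://127.0.0.1:8080/", files={"images": f})
    print(resp.status_code, resp.text)  # Expected output like: {"result": 3}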

    If the model file or the inference script is modified during debugging, restart run.sh. To do so, stop all Nginx processes and then execute the script:

    # Obtain the Nginx process.
    ps -ef | grep nginx
    # Stop all Nginx-related processes.
    kill -9 {Process ID}
    # Execute run.sh.
    sh run.sh

    You can also run the pkill nginx command to stop all Nginx processes.

    # Stop all Nginx processes.
    pkill nginx
    # Execute run.sh.
    sh run.sh
    Figure 10 Restarting run.sh

Step 3: Saving an Image in a Notebook Instance

A running notebook instance must be available.

  1. Locate the target notebook instance in the list and choose More > Save Image in the Operation column.
  2. In the displayed Save Image dialog box, configure the parameters. Then, click OK.

    Choose an organization from the Organization drop-down list. If no organization is available, click Create on the right to create one.

    Users in an organization can share all images in the organization.

  3. The image will be saved as a snapshot, which takes about 5 minutes. During this time, do not perform any operations on the instance. (You can still work on the already-opened JupyterLab page and in a local IDE.)

    The time required for saving an image as a snapshot will be counted in the instance running duration. If the instance running duration expires before the snapshot is saved, saving the image will fail.

  4. After the image is saved, the instance status changes to Running. View the image on the Image Management page.
  5. Click the image name to view its details.

Step 4: Using the Saved Image for Inference Deployment

Import the custom image debugged in Step 2: Debugging a Model in a Notebook Instance to AI applications and deploy it as a real-time service.

  1. Log in to the ModelArts console. In the navigation pane on the left, choose AI Applications. Click Create Applications on the displayed page.
  2. Configure the parameters for the AI application.
    • Meta Model Source: Select Container image.
    • Container Image Path: Click the icon to select the image. For the image path, see the SWR address on the image details page opened in step 5 of Step 3: Saving an Image in a Notebook Instance.
    • Container API: Select HTTPS.
    • Port: Set to 8443.
    • Deployment Type: Select Real-Time Services.
  3. Enter the boot command.
    sh /home/ma-user/infer/run.sh
  4. Enable APIs, edit the API, and click Save. Specify a file as the input. The following shows a code example.
    Figure 11 API definition

    The API definition is as follows:

    [{
    	"url": "/",
    	"method": "post",
    	"request": {
    		"Content-type": "multipart/form-data",
    		"data": {
    			"type": "object",
    			"properties": {
    				"images": {
    					"type": "file"
    				}
    			}
    		}
    	},
    	"response": {
    		"Content-type": "applicaton/json",
    		"data": {
    			"type": "object",
    			"properties": {
    				"result": {
    					"type": "integer"
    				}
    			}
    		}
    	}
    }]

    After enabling this function, you can edit RESTful APIs to define the AI application input and output formats.

    • If you edit APIs when creating an AI application, the system will automatically identify the prediction type after the created AI application is deployed.
    • If you do not edit APIs when creating an AI application, you will be required to select a request type for prediction after the created AI application is deployed. The request type can be application/json or multipart/form-data. Select a proper type based on the meta model.
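
    For example, the two request types correspond to different client calls. The following is a hedged sketch; the endpoint URL and token are placeholders, and the exact JSON schema depends on your meta model:

    import requests

    # Placeholders: the service's API URL (from the service details page)
    # and a valid IAM token for authentication.
    url = "https://<your-service-endpoint>/"
    headers = {"X-Auth-Token": "<your-iam-token>"}

    # multipart/form-data: the image travels as a form file field
    # (this matches the API defined above).
    with open("test.png", "rb") as f:
        resp = requests.post(url, headers=headers, files={"images": f})
    print(resp.json())  # Expected output like: {"result": 3}

    # application/json: the input would travel in a JSON body instead,
    # for example as a Base64-encoded string (hypothetical schema):
    # requests.post(url, headers=headers, json={"images": "<base64-encoded image>"})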
  5. After the APIs are configured, click Create now. Wait until the AI application runs properly.
  6. Locate the created AI application in the list and click Deploy in the Operation column. In the displayed version list, locate the target version and click Real-Time Services in the Operation column.
  7. On the Deploy page, configure the key parameters as follows:

    Name: Enter a custom real-time service name or use the default name.

    Resource Pool: Select a public resource pool.

    AI Application Source and AI Application and Version: The AI application and version will be automatically selected.

    Specifications: Choose CPU: 2 vCPUs 8GB.

    Retain default settings for other parameters.

  8. After configuring the parameters, click Next, confirm parameter settings, and click Submit.
  9. In the navigation pane on the left, choose Service Deployment > Real-Time Services. When the service status changes to Running, the service is deployed. Click Predict in the Operation column. The Prediction page on the service details page is displayed. Upload an image for prediction.