Importing a Meta Model from OBS
If you develop and train a model using a mainstream framework, you can import the model to ModelArts and use it to create an AI application for unified management.
Constraints
- The model, inference code, and configuration file imported to create an AI application must comply with ModelArts requirements. For details, see Introduction to Model Package Specifications, Specifications for Editing a Model Configuration File, and Specifications for Writing Model Inference Code.
- If the meta model comes from a container image, ensure that its size complies with Restrictions on the Size of an Image for Importing an AI Application.
Prerequisites
- The model has been developed and trained, and the type and version of the AI engine used by the model are supported by ModelArts. For details, see Supported AI Engines for ModelArts Inference.
- The trained model package, inference code, and configuration file have been uploaded to OBS.
- The OBS directory you use and ModelArts are in the same region.
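For reference, the OBS directory uploaded above typically follows the model package specifications: a model folder that holds the model file, the inference code, and the configuration file. The sketch below is illustrative; the model file name is an example, and the exact layout is defined in Introduction to Model Package Specifications.

```
<OBS bucket>/<path>/
└── model/                      Fixed folder name required by the specifications
    ├── <model file>            e.g. saved_model.pb (name depends on the AI engine)
    ├── config.json             Model configuration file
    └── customize_service.py    Model inference code
```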
Creating an AI Application
- Log in to the ModelArts management console, and choose AI Application Management > AI Applications in the left navigation pane. The AI Applications page is displayed.
- Click Create in the upper left corner.
- On the displayed page, set the parameters.
- Set basic information about the AI application. For details about the parameters, see Table 1.
Table 1 Parameters of basic AI application information

- Name: application name. The value can contain 1 to 64 visible characters. Only letters, digits, hyphens (-), and underscores (_) are allowed.
- Version: version of the AI application to be created. For the first import, the default value is 0.0.1.
  NOTE: After an AI application is created, you can create new versions using different meta models for optimization.
- Description: brief description of the AI application.
- Select the meta model source and set related parameters. Set Meta Model Source to OBS. For details about the parameters, see Table 2.
For a meta model imported from OBS, edit the inference code and configuration file by following the model package specifications and place them in the model folder that stores the meta model. If the selected directory does not comply with the model package specifications, the AI application cannot be created.
Table 2 Parameters of the meta model source

- Meta Model: OBS path for storing the meta model. The path cannot contain spaces; otherwise, the AI application fails to be created.
- AI Engine: the AI engine is automatically associated based on the meta model storage path you select. If you set AI Engine to Custom, set the following parameters:
  - Container API: protocol and port number for starting the model. The request protocol is HTTPS, and the port number is 8080.
  - Health Check: checks the health status of the model. This parameter is configurable only if the health check API is configured in the custom image; otherwise, the AI application deployment will fail.
    - Check Mode: select HTTP request or Command.
    - Health Check URL: displayed when Check Mode is set to HTTP request. Enter the health check URL. The default value is /health.
    - Health Check Command: displayed when Check Mode is set to Command. Enter the health check command.
    - Health Check Period: enter an integer from 1 to 2147483647, in seconds.
    - Delay (seconds): delay before the health check starts after the instance is started. Enter an integer from 0 to 2147483647.
    - Maximum Failures: enter an integer from 1 to 2147483647. If consecutive health check failures reach this value during service startup, the service becomes abnormal; if they reach this value during service running, the service enters the alarm status.

  NOTE: To use a custom engine to create an AI application, ensure that the custom engine complies with the specifications for custom engines. For details, see Creating an AI Application Using a Custom Engine.
  If a health check is configured for an AI application, services deployed using this AI application will stop 3 minutes after receiving the stop instruction.
- AI Application Description: descriptions that help other AI application developers understand and use your application. Click Add AI Application Description and set the document name and URL. You can add up to three descriptions.
- Configuration File: by default, the system associates the configuration file stored in OBS. After enabling this function, you can view and edit the model configuration file.
  NOTE: This function will be taken offline. After that, you can modify the model configuration by setting AI Engine, Runtime Dependency, and Apis.
- Deployment Type: service types that the application can be deployed as. When deploying a service, only the service types selected here are available. For example, if you select only Real-time services here, you can deploy the AI application only as a real-time service after it is created.
- API Configuration: after enabling this function, you can edit RESTful APIs to define the input and output formats of the AI application. The model APIs must comply with ModelArts specifications. For details, see Specifications for Editing a Model Configuration File; for a code example, see Code Example of apis Parameters.
- Check the information and click Create now. The AI application is created.
In the AI application list, you can view the created AI application and its version. When the status changes to Normal, the AI application is successfully created. On this page, you can perform such operations as creating new versions and quickly deploying services.
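To make the HTTP request check mode described above concrete, here is a minimal, hypothetical sketch of the kind of /health endpoint a custom image could expose. In a real custom image the model server would listen on port 8080 as required by the Container API setting; the sketch binds an ephemeral port so it can run anywhere.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class HealthHandler(BaseHTTPRequestHandler):
    """Minimal handler exposing a /health endpoint, as a custom image might."""

    def do_GET(self):
        if self.path == "/health":
            # A 200 response tells the probe the model instance is healthy.
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the sketch quiet

# A real custom image would listen on port 8080; port 0 picks a free port
# so this sketch runs anywhere.
server = HTTPServer(("127.0.0.1", 0), HealthHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# Simulate one probe, as the platform would do every Health Check Period seconds.
port = server.server_address[1]
with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
    status = resp.status
server.shutdown()
print(status)  # 200
```

With Maximum Failures set to, say, 3, only three consecutive non-200 responses (or timeouts) would mark the instance abnormal; a single transient failure is tolerated.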
Follow-Up Procedure
Deploying an AI Application as a Service: In the AI application list, click the radio button to the left of the AI application name to display the version list at the bottom of the page. In the row containing the target version, click Deploy in the Operation column to deploy the AI application as one of the service types selected during AI application creation.
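For the API Configuration option mentioned during creation, the model's RESTful interface is described by the apis array in the model configuration file. The fragment below is an illustrative sketch only; the field values (model_algorithm, model_type, the request and response schemas) are examples, and the authoritative format is given in Specifications for Editing a Model Configuration File and Code Example of apis Parameters.

```json
{
  "model_algorithm": "image_classification",
  "model_type": "TensorFlow",
  "apis": [
    {
      "url": "/",
      "method": "post",
      "request": {
        "Content-type": "multipart/form-data",
        "data": {
          "type": "object",
          "properties": {
            "images": { "type": "file" }
          }
        }
      },
      "response": {
        "Content-type": "application/json",
        "data": {
          "type": "object",
          "properties": {
            "predicted_label": { "type": "string" },
            "scores": { "type": "array" }
          }
        }
      }
    }
  ]
}
```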