Viewing ModelArts Model Details
Viewing the Model List
You can view all created models on the model list page. The model list page displays the following information.
| Parameter | Description |
| --- | --- |
| Model Name | Model name. |
| Latest Version | Latest version of a model. |
| Status | Model status. |
| Deployment Type | Types of the services that a model can be deployed as. |
| Versions | Number of model versions. |
| Request Mode | Request mode of real-time services. |
| Created | Model creation time. |
| Description | Model description. |
| Operation | |
Click the number in Versions to view the version list.
The version list displays the following information.
| Parameter | Description |
| --- | --- |
| Version | Current version of a model. |
| Status | Model status. |
| Deployment Type | Types of the services that a model can be deployed as. |
| Model Size | Model size. |
| Model Source | Model source. |
| Created | Model creation time. |
| Description | Model description. |
| Operation | |
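If you prefer to retrieve the same list programmatically rather than through the console, the following Python sketch shows one way to do it. It assumes the ModelArts model management REST endpoint `GET /v1/{project_id}/models`, IAM token authentication, and the paging and response field names shown in the comments; verify all of these against the ModelArts API reference for your region.

```python
# Minimal sketch: list models via the ModelArts REST API.
# Assumptions (check the ModelArts API reference): endpoint GET /v1/{project_id}/models,
# IAM token auth via the X-Auth-Token header, and the response field names below.
import requests

ENDPOINT = "https://modelarts.example-region.myhuaweicloud.com"  # replace with your regional endpoint
PROJECT_ID = "your-project-id"
TOKEN = "your-iam-token"  # obtained from the IAM token API

resp = requests.get(
    f"{ENDPOINT}/v1/{PROJECT_ID}/models",
    headers={"X-Auth-Token": TOKEN},
    params={"limit": 50, "offset": 0},  # paging parameter names are assumptions
)
resp.raise_for_status()

for model in resp.json().get("models", []):
    # Field names such as "model_name", "model_version", and "model_status" are
    # assumptions based on the columns described above; confirm them in the response schema.
    print(model.get("model_name"), model.get("model_version"), model.get("model_status"))
```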
Viewing Model Details
- Log in to the ModelArts console, and choose Model Management from the navigation pane.
- Click the name of the target model to access its details page.
On the model details page, you can view the basic information and precision of the model, and switch tab pages to view more information.
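The details shown on this page can also be fetched over the API. The sketch below assumes the endpoint `GET /v1/{project_id}/models/{model_id}` with IAM token authentication, where the model ID is the value shown in the ID field of Table 3; the response field names used in the loop are assumptions to be checked against the ModelArts API reference.

```python
# Minimal sketch: query the details of a single model.
# Assumptions: endpoint GET /v1/{project_id}/models/{model_id}, IAM token auth,
# and the response field names printed below.
import requests

ENDPOINT = "https://modelarts.example-region.myhuaweicloud.com"  # regional endpoint
PROJECT_ID = "your-project-id"
MODEL_ID = "your-model-id"  # the ID shown in Table 3
TOKEN = "your-iam-token"

resp = requests.get(
    f"{ENDPOINT}/v1/{PROJECT_ID}/models/{MODEL_ID}",
    headers={"X-Auth-Token": TOKEN},
)
resp.raise_for_status()
detail = resp.json()

# Print a few of the basic fields described in Table 3 (field names are assumptions).
for key in ("model_name", "model_version", "model_status", "model_size", "model_type"):
    print(key, "=", detail.get(key))
```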
Table 3 Basic model information

| Parameter | Description |
| --- | --- |
| Name | Model name. |
| Status | Model status. |
| Version | Current version of a model. |
| ID | Model ID. |
| Description | Click the edit button to add a description of the model. |
| Deployment Type | Types of the services that a model can be deployed as. |
| Meta Model Source | Source of the meta model, which can be a training job, OBS, or a container image. |
| Training Name | Associated training job if the meta model comes from a training job. Click the training job name to go to its details page. |
| Training Version | Training job version if the meta model comes from an old-version training job. |
| Storage path of the meta model | Path to the meta model if the meta model comes from OBS. |
| Container Image Storage Path | Path to the container image if the meta model comes from a container image. |
| AI Engine | AI engine if the meta model comes from a training job or OBS. |
| Engine Package Address | Engine package address if the meta model comes from OBS and AI Engine is Custom. |
| Runtime Environment | Runtime environment on which the meta model depends if the meta model comes from a training job or OBS and a preset AI engine is used. |
| Container API | Protocol and port number for starting the model if the meta model comes from OBS (AI Engine is Custom) or a container image. |
| Inference Code | Path to the inference code if the meta model comes from an old-version training job. |
| Image Replication | Image replication status for meta models from a container image. |
| Dynamic loading | Dynamic loading status if the meta model comes from a training job or OBS. |
| Size | Model size. |
| Health Check | Health check status if the meta model comes from OBS or a container image. When health check is enabled, the probe settings described after this table are displayed. |
| Model Description | Description document added during the creation of a model. |
| Instruction Set Architecture | System architecture. |
| Inference Accelerator | Type of inference accelerator cards. |

When health check is enabled, the following probes are used:

- Startup Probe: checks whether the application instance has started. If a startup probe is provided, all other probes are disabled until it succeeds. If the startup probe fails, the instance is restarted. If no startup probe is provided, the default status is Success.
- Readiness Probe: verifies whether the application instance is ready to handle traffic. If the readiness probe fails (the instance is not ready), the instance is removed from the service load balancing pool, and traffic is not routed to it until the probe succeeds.
- Liveness Probe: monitors the application health status. If the liveness probe fails (the application is unhealthy), the instance is automatically restarted.

The probe parameters include Check Mode, Health Check URL (displayed when Check Mode is set to HTTP request), Health Check Command (displayed when Check Mode is set to Command), Health Check Period, Delay, Timeout, and Maximum Failures.
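For reference, the following fragment illustrates how the probe parameters listed above fit together. The field names (`startup_probe`, `check_mode`, `period`, and so on) are hypothetical and do not necessarily match the ModelArts request schema; they simply mirror the parameters displayed on the details page.

```python
# Illustrative only: a Python dict mirroring the health check parameters shown on
# the model details page. Field names are hypothetical, not the ModelArts schema.
health_check = {
    "startup_probe": {                 # runs first; other probes wait until it succeeds
        "check_mode": "HTTP request",
        "url": "/health",              # Health Check URL (shown when Check Mode is HTTP request)
        "period": 10,                  # Health Check Period, in seconds
        "delay": 5,                    # Delay before the first check
        "timeout": 3,                  # Timeout for a single check
        "max_failures": 3,             # Maximum Failures before the instance is restarted
    },
    "readiness_probe": {               # failing removes the instance from load balancing
        "check_mode": "Command",
        "command": "cat /tmp/ready",   # Health Check Command (shown when Check Mode is Command)
        "period": 10,
        "delay": 5,
        "timeout": 3,
        "max_failures": 3,
    },
    "liveness_probe": {                # failing restarts the instance
        "check_mode": "HTTP request",
        "url": "/health",
        "period": 30,
        "delay": 10,
        "timeout": 3,
        "max_failures": 3,
    },
}
```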
Table 4 Model details tabs

| Tab | Description |
| --- | --- |
| Model Precision | Model recall, precision, accuracy, and F1 score of a model. |
| Parameter Configuration | API configuration, input parameters, and output parameters of a model. |
| Runtime Dependency | Model dependencies on the environment. If model creation failed, edit the runtime dependency; after the modification is saved, the system automatically uses the original image to create the model again. See the dependency sketch after this table. |
| Events | Progress of key operations during model creation. Events are stored for three months and are then automatically cleared. For details about how to view the events of a model, see Viewing ModelArts Model Events. |
| Constraint | Constraints on service deployment, such as the request mode, boot command, and model encryption, based on the settings during model creation. For models in asynchronous request mode, parameters including the input mode, output mode, service startup parameters, and job configuration parameters are displayed. |
| Associated Services | List of services to which the model has been deployed. Click a service name to go to the service details page. |
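As an illustration of what an entry on the Runtime Dependency tab can look like, the sketch below declares a pip installer with pinned packages. The field names follow the dependency structure commonly used in ModelArts model configuration files as far as known, but treat them as assumptions and confirm them against the model configuration file specification.

```python
# Hedged sketch of a runtime dependency declaration for a model: one pip installer
# with a list of packages. Field names ("installer", "packages", "package_name",
# "package_version", "restraint") are assumptions; verify against the ModelArts
# model configuration file specification.
dependencies = [
    {
        "installer": "pip",
        "packages": [
            {"package_name": "numpy", "package_version": "1.17.0", "restraint": "EXACT"},
            {"package_name": "pandas", "package_version": "1.0.0", "restraint": "ATLEAST"},
        ],
    }
]
```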