Obtaining Service Monitoring Information
You can use the API to obtain the monitoring information about a service.
Sample Code
In a ModelArts notebook, you do not need to enter authentication parameters for session authentication. For details about session authentication in other development environments, see Session Authentication.
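Outside a notebook, the session is typically created with explicit credentials. The following is a minimal sketch that assumes the Session constructor accepts access_key, secret_key, project_id, and region_name keyword arguments; see Session Authentication for the authoritative parameter names and values.

```python
from modelarts.session import Session

# Sketch of AK/SK-based session authentication outside a ModelArts notebook.
# The parameter names below are assumptions; refer to Session Authentication.
session = Session(access_key="your_ak",
                  secret_key="your_sk",
                  project_id="your_project_id",
                  region_name="your_region")
```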
- Method 1: Obtain the monitoring information of a service based on the service object created in Deploying a Real-Time Service.
```python
from modelarts.session import Session
from modelarts.model import Predictor

session = Session()
# Create the service object from an existing service ID.
predictor_instance = Predictor(session, service_id="your_service_id")
# Query the monitoring information of the service.
predictor_monitor = predictor_instance.get_service_monitor()
print(predictor_monitor)
```
- Method 2: Obtain the monitoring information of a service based on the service object returned in Obtaining Service Objects.
```python
from modelarts.session import Session
from modelarts.model import Predictor

session = Session()
# List the service objects visible to the current account.
predictor_object_list = Predictor.get_service_object_list(session)
# Query the monitoring information of the first service in the list.
predictor_instance = predictor_object_list[0]
predictor_monitor = predictor_instance.get_service_monitor()
print(predictor_monitor)
```
Parameters
| Parameter | Type | Description |
|---|---|---|
| service_id | String | Service ID |
| service_name | String | Service name |
| monitors | Array of monitor objects (the structure depends on the infer_type of the service) | Monitoring details. See the following table. |
Each element of the monitors array contains the following parameters:

| Parameter | Type | Description |
|---|---|---|
| model_id | String | Model ID |
| model_name | String | Model name |
| model_version | String | Model version |
| invocation_times | Number | Total number of model instance calls |
| failed_times | Number | Number of failed model instance calls |
| cpu_core_usage | Float | Number of CPU cores in use |
| cpu_core_total | Float | Total number of CPU cores |
| cpu_memory_usage | Integer | Used memory, in MB |
| cpu_memory_total | Integer | Total memory, in MB |
| gpu_usage | Float | Number of GPUs in use |
| gpu_total | Float | Total number of GPUs |