Common Concepts of ModelArts
Inference
Inference is the process of deriving a new judgment from a known judgment according to a certain strategy. In AI, machines simulate human intelligence and perform inference based on neural networks.
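At its core, neural-network inference is a forward pass: a trained model maps known inputs to a prediction. The following is a minimal, hypothetical sketch of a single neuron's forward pass; the weights and bias values are illustrative only and do not come from any real model.

```python
def relu(x):
    # Rectified linear activation, a common neural-network nonlinearity.
    return max(0.0, x)

def infer(features, weights, bias):
    # One neuron's forward pass: weighted sum of inputs plus bias,
    # followed by an activation function.
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return relu(z)

# Deriving a new judgment (the score) from known inputs (the features).
score = infer([1.0, 2.0], [0.5, -0.25], 0.1)
```

A real deployed model chains many such layers, but the principle is the same: fixed, trained parameters transform each input into an output.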
Real-Time Inference
Real-time inference is a web service that returns an inference result synchronously for each request.
Batch Inference
Batch inference is a batch job that runs inference on a batch of data.
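The difference between the two modes can be sketched as follows. This is a hypothetical illustration of the request patterns, not the ModelArts API; the function names and the stand-in model are assumptions.

```python
def predict(x):
    # Stand-in for a deployed model's forward pass.
    return x * 2

def handle_request(x):
    # Real-time inference: one synchronous result per incoming request.
    return {"result": predict(x)}

def run_batch_job(dataset):
    # Batch inference: a job sweeps an entire dataset and collects
    # all results, typically writing them to storage when done.
    return [predict(x) for x in dataset]
```

Real-time services stay running and are sized for request latency, whereas batch jobs start, process the full dataset, and terminate, so they are sized for throughput.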
Resource Pool
ModelArts provides large-scale computing clusters for model development, training, and deployment. Both public and dedicated resource pools are available. ModelArts provides public resource pools by default; dedicated resource pools are created separately and used exclusively by their owner.