What Is ModelArts?

ModelArts is a one-stop development platform for AI developers. With data preprocessing, semi-automated data labeling, distributed training, automated model building, and on-demand model deployment on the device, edge, and cloud, ModelArts helps AI developers build models quickly and manage the AI development lifecycle.

The one-stop ModelArts platform covers all stages of AI development, including data processing, algorithm development, and model training and deployment. The underlying technologies of ModelArts support various heterogeneous computing resources, allowing developers to flexibly select and use resources. In addition to supporting popular open-source AI development frameworks such as TensorFlow and MXNet, ModelArts lets developers use self-developed algorithm frameworks to match their usage habits.

ModelArts aims to simplify AI development.

ModelArts is suitable for AI developers with varying levels of development experience. Service developers can use ExeML to quickly build AI applications without coding. Beginners can directly use built-in algorithms to build AI applications. AI engineers can use multiple development environments to compile code for quick modeling and application development.

Product Architecture

ModelArts is a one-stop AI development platform that supports the entire development process, including data processing, model training, management, and deployment, and provides an AI market for sharing models.

ModelArts supports various AI application scenarios, such as image classification, object detection, video analysis, speech recognition, product recommendation, and anomaly detection.

Figure 1 ModelArts architecture

Product Advantages

  • One-stop

    This out-of-the-box, full-lifecycle AI development platform provides one-stop data processing, model development, training, management, and deployment.

  • Easy to use
    • Multiple built-in models and free use of open-source models
    • Automatic optimization of hyperparameters
    • Code-free development and simplified operations
    • One-click deployment of models to the cloud, edge, and devices
  • High performance
    • The self-developed MoXing deep learning framework accelerates algorithm development and training.
    • Optimized GPU utilization accelerates real-time inference.
    • Models running on Ascend AI chips achieve more efficient inference.
  • Flexible
    • Popular open source frameworks available, such as TensorFlow and Spark_MLlib
    • Popular GPUs and self-developed Ascend chips available
    • Exclusive use of dedicated resources
    • Custom images for custom frameworks and operators
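The automatic hyperparameter optimization listed above can be illustrated with a minimal conceptual sketch. The following is not ModelArts code; the objective function and parameter ranges are invented for illustration, and the random-search strategy is just one of several approaches such platforms may use.

```python
import random

# Conceptual sketch of automatic hyperparameter search (NOT ModelArts
# internals). The objective function stands in for a validation metric
# returned by a training run; its optimum is placed at lr=0.01, batch=64.

def objective(learning_rate, batch_size):
    """Pretend validation score: higher is better, peaks at (0.01, 64)."""
    return -((learning_rate - 0.01) ** 2) - ((batch_size - 64) / 64) ** 2

def random_search(trials=50, seed=0):
    """Try random hyperparameter combinations and keep the best one."""
    rng = random.Random(seed)
    best_score, best_params = float("-inf"), None
    for _ in range(trials):
        params = {
            "learning_rate": rng.uniform(0.001, 0.1),
            "batch_size": rng.choice([16, 32, 64, 128]),
        }
        score = objective(**params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

best_params, best_score = random_search()
print(best_params)
```

In practice, a platform would replace the stand-in objective with real training runs and typically use smarter strategies (for example, Bayesian optimization) to reduce the number of trials.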

Using ModelArts for the First Time

If you are a first-time user, get familiar with the following information:

  • Basic concepts

    Basic Knowledge describes the concepts of ModelArts, including the general process and concepts of AI development and the specific concepts and functions of ModelArts.

  • Getting started

    The Getting Started document provides detailed operation guides to walk you through model building on ModelArts.

  • Best practices

    ModelArts supports multiple open source engines and provides extensive use cases based on the engines and functions. You can build and deploy models by referring to Best Practices.

  • Other functions and operation guides
    • If you are a service developer, you can use ExeML to quickly build models without coding. For details, see User Guide (ExeML).
    • If you are a beginner, you can use common AI algorithms to quickly build models without coding. ModelArts provides built-in algorithms based on common AI engines. For details, see User Guide (Senior AI Engineers).
    • If you are an AI engineer, you can manage the AI development lifecycle, including data management and model development, training, management, and deployment. For details, see User Guide (Senior AI Engineers).
    • If you are a developer and want to use ModelArts APIs or SDKs for AI development, refer to API Reference or SDK Reference.
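As a rough illustration of what API-based development looks like, the sketch below assembles a JSON payload for a training-job request. The field names, values, and structure here are assumptions for illustration only; consult the API Reference for the actual request schema, endpoint, and authentication.

```python
import json

# Hypothetical sketch of preparing a training-job request body for an
# AI-platform REST API. All field names below are assumptions, not the
# documented ModelArts schema.

def build_training_job(name, engine, code_dir, boot_file, instance_count=1):
    """Assemble a JSON-serializable payload describing a training job."""
    return {
        "job_name": name,                       # assumed field name
        "config": {
            "engine": engine,                    # e.g. an AI engine ID
            "app_url": code_dir,                 # path to training code
            "boot_file_url": boot_file,          # training entry script
            "worker_server_num": instance_count, # number of compute nodes
        },
    }

payload = build_training_job(
    name="demo-job",
    engine="tensorflow",
    code_dir="/bucket/code/",
    boot_file="/bucket/code/train.py",
)
print(json.dumps(payload, indent=2))
```

Sending the request would additionally require the service endpoint and an authentication token, which the SDK handles for you; the SDK Reference describes the equivalent client-side calls.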