
Overview

ModelArts provides the following capabilities:

  • Extensive built-in images that meet common development requirements
  • Custom development environments built from the built-in images
  • Extensive tutorials that help you quickly understand distributed training
  • Distributed training debugging in development tools such as PyCharm, VS Code, and JupyterLab

Constraints

  • If you change the instance flavor, you can only perform single-node debugging; you cannot perform distributed debugging or submit remote training jobs.
  • Only the PyTorch and MindSpore AI frameworks support multi-node distributed debugging. If you use MindSpore, each node must be equipped with eight cards.
  • Replace the OBS paths in the debugging code with your own OBS paths.
  • The debugging code in this document is written in PyTorch. The process is the same for other AI frameworks; only some parameters need to be modified.

Advantages and Disadvantages of Single-Node Multi-Card Training Using DataParallel

  • Straightforward coding: Only one line of code needs to be modified; see the sketch after this list.
  • Communication bottleneck: The master GPU is used to update and distribute parameters, which incurs high communication costs.
  • Unbalanced GPU load: The master GPU gathers the outputs, calculates the loss, and updates the weights, so its memory usage and utilization are higher than those of the other GPUs.
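The snippet below is a minimal sketch of that one-line change. The toy model, tensor shapes, and batch size are illustrative assumptions, not part of ModelArts or this document's sample code.

```python
import torch
import torch.nn as nn

# Toy model for illustration; replace it with your own network.
model = nn.Linear(128, 10)

# The single modified line: DataParallel replicates the model on
# every visible GPU and scatters each input batch across them.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.cuda()

# Outputs from all replicas are gathered back on GPU 0 (the master
# GPU), which is why its memory usage and utilization run higher.
inputs = torch.randn(64, 128).cuda()
outputs = model(inputs)
```

Because gathering outputs and computing the loss both land on GPU 0, the communication bottleneck and load imbalance described above follow directly from this design.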

Advantages of Multi-Node Multi-Card Training Using DistributedDataParallel

  • Fast communication: Gradients are synchronized across processes with all-reduce operations rather than funneled through a single master GPU; see the sketch after this list.
  • Balanced load: Each GPU holds a full model replica and computes its own loss, so no single GPU has to gather the outputs of all the others.
  • Fast running speed: Each GPU is driven by its own process, avoiding the single-process, multi-thread overhead of DataParallel.
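The snippet below is a minimal sketch of multi-card training with DistributedDataParallel, assuming a launch via torchrun; the toy model and tensor shapes are illustrative, not ModelArts-specific.

```python
import os

import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP


def main():
    # torchrun sets MASTER_ADDR, MASTER_PORT, RANK, LOCAL_RANK, and
    # WORLD_SIZE for every process it spawns.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Toy model for illustration; replace it with your own network.
    model = nn.Linear(128, 10).cuda(local_rank)

    # Each process holds a full replica; gradients are synchronized
    # by all-reduce during backward(), so no master GPU becomes a
    # communication or memory bottleneck.
    model = DDP(model, device_ids=[local_rank])

    inputs = torch.randn(64, 128).cuda(local_rank)
    loss = model(inputs).sum()
    loss.backward()

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Launch one process per GPU on each node, for example torchrun --nproc_per_node=8 --nnodes=2 --node_rank=0 --master_addr=&lt;node-0 IP&gt; train.py on the first node (train.py is a placeholder name). Each process computes the loss on its own shard of the data, which is what keeps the load balanced.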
