Updated on 2024-08-14 GMT+08:00

Example: Creating a Custom Image for Training (Horovod-PyTorch and GPUs)

This section describes how to create an image and use it for training on ModelArts. The AI engine used in the image is Horovod 0.22.1 + PyTorch 1.8.1, and the resources used for training are GPUs.

This section applies only to training jobs of the new version.

Scenario

In this example, write a Dockerfile to create a custom image on a Linux x86_64 server running Ubuntu 18.04.

Objective: Build a container image that packages the following software and use the image with GPUs for training on ModelArts.

  • ubuntu-18.04
  • cuda-11.1
  • python-3.7.13
  • mlnx ofed-5.4
  • pytorch-1.8.1
  • horovod-0.22.1

Prerequisites

You have registered a Huawei Cloud account. The account is not in arrears or frozen.

Step 1 Creating an OBS Bucket and Folder

Create a bucket and folders in OBS for storing the sample dataset and training code. Table 1 lists the folders to be created. Replace the bucket name and folder names in the example with actual names.

For details about how to create an OBS bucket and folder, see Creating a Bucket and Creating a Folder.

Ensure that the OBS directory you use and ModelArts are in the same region.

Table 1 Folders to create

  Name                                       Description
  obs://test-modelarts/pytorch/demo-code/    Stores the training script.
  obs://test-modelarts/pytorch/log/          Stores training log files.
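
If you prefer the command line, the bucket and folders can also be created with obsutil. This is an optional sketch; it assumes obsutil has already been installed and configured with your access keys and OBS endpoint:

# Replace the bucket name and region with your actual values.
./obsutil mb obs://test-modelarts
./obsutil mkdir obs://test-modelarts/pytorch/demo-code/
./obsutil mkdir obs://test-modelarts/pytorch/log/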

Step 2 Preparing the Training Scripts and Uploading Them to OBS

Obtain the training scripts pytorch_synthetic_benchmark.py and run_mpi.sh and upload them to obs://test-modelarts/pytorch/demo-code/ in the OBS bucket.
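
If you use obsutil, the scripts can be uploaded from the command line as well (again an optional sketch, assuming obsutil is installed and configured):

./obsutil cp pytorch_synthetic_benchmark.py obs://test-modelarts/pytorch/demo-code/
./obsutil cp run_mpi.sh obs://test-modelarts/pytorch/demo-code/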

pytorch_synthetic_benchmark.py is as follows:

import argparse
import torch.backends.cudnn as cudnn
import torch.nn.functional as F
import torch.optim as optim
import torch.utils.data.distributed
from torchvision import models
import horovod.torch as hvd
import timeit
import numpy as np

# Benchmark settings
parser = argparse.ArgumentParser(description='PyTorch Synthetic Benchmark',
                                 formatter_class=argparse.ArgumentDefaultsHelpFormatter)
parser.add_argument('--fp16-allreduce', action='store_true', default=False,
                    help='use fp16 compression during allreduce')

parser.add_argument('--model', type=str, default='resnet50',
                    help='model to benchmark')
parser.add_argument('--batch-size', type=int, default=32,
                    help='input batch size')

parser.add_argument('--num-warmup-batches', type=int, default=10,
                    help='number of warm-up batches that don\'t count towards benchmark')
parser.add_argument('--num-batches-per-iter', type=int, default=10,
                    help='number of batches per benchmark iteration')
parser.add_argument('--num-iters', type=int, default=10,
                    help='number of benchmark iterations')

parser.add_argument('--no-cuda', action='store_true', default=False,
                    help='disables CUDA training')

parser.add_argument('--use-adasum', action='store_true', default=False,
                    help='use adasum algorithm to do reduction')

args = parser.parse_args()
args.cuda = not args.no_cuda and torch.cuda.is_available()

hvd.init()

if args.cuda:
    # Horovod: pin GPU to local rank.
    torch.cuda.set_device(hvd.local_rank())

cudnn.benchmark = True

# Set up standard model.
model = getattr(models, args.model)()

# By default, Adasum doesn't need scaling up learning rate.
lr_scaler = hvd.size() if not args.use_adasum else 1

if args.cuda:
    # Move model to GPU.
    model.cuda()
    # If using GPU Adasum allreduce, scale learning rate by local_size.
    if args.use_adasum and hvd.nccl_built():
        lr_scaler = hvd.local_size()

optimizer = optim.SGD(model.parameters(), lr=0.01 * lr_scaler)

# Horovod: (optional) compression algorithm.
compression = hvd.Compression.fp16 if args.fp16_allreduce else hvd.Compression.none

# Horovod: wrap optimizer with DistributedOptimizer.
optimizer = hvd.DistributedOptimizer(optimizer,
                                     named_parameters=model.named_parameters(),
                                     compression=compression,
                                     op=hvd.Adasum if args.use_adasum else hvd.Average)

# Horovod: broadcast parameters & optimizer state.
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

# Set up fixed fake data
data = torch.randn(args.batch_size, 3, 224, 224)
target = torch.LongTensor(args.batch_size).random_() % 1000
if args.cuda:
    data, target = data.cuda(), target.cuda()


def benchmark_step():
    optimizer.zero_grad()
    output = model(data)
    loss = F.cross_entropy(output, target)
    loss.backward()
    optimizer.step()


def log(s, nl=True):
    if hvd.rank() != 0:
        return
    print(s, end='\n' if nl else '')


log('Model: %s' % args.model)
log('Batch size: %d' % args.batch_size)
device = 'GPU' if args.cuda else 'CPU'
log('Number of %ss: %d' % (device, hvd.size()))

# Warm-up
log('Running warmup...')
timeit.timeit(benchmark_step, number=args.num_warmup_batches)

# Benchmark
log('Running benchmark...')
img_secs = []
for x in range(args.num_iters):
    time = timeit.timeit(benchmark_step, number=args.num_batches_per_iter)
    img_sec = args.batch_size * args.num_batches_per_iter / time
    log('Iter #%d: %.1f img/sec per %s' % (x, img_sec, device))
    img_secs.append(img_sec)

# Results
img_sec_mean = np.mean(img_secs)
img_sec_conf = 1.96 * np.std(img_secs)
log('Img/sec per %s: %.1f +-%.1f' % (device, img_sec_mean, img_sec_conf))
log('Total img/sec on %d %s(s): %.1f +-%.1f' %
    (hvd.size(), device, hvd.size() * img_sec_mean, hvd.size() * img_sec_conf))
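
Before packaging the script into an image, you can optionally smoke-test it on any machine where PyTorch and Horovod are already installed. This is only a sketch: horovodrun ships with Horovod, and the process count of 2 is an arbitrary assumption.

# Run the benchmark on 2 local GPUs; add --no-cuda on a CPU-only machine.
horovodrun -np 2 -H localhost:2 python pytorch_synthetic_benchmark.py --batch-size 32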

run_mpi.sh is as follows:

#!/bin/bash
MY_HOME=/home/ma-user

MY_SSHD_PORT=${MY_SSHD_PORT:-"36666"}

MY_MPI_BTL_TCP_IF=${MY_MPI_BTL_TCP_IF:-"eth0,bond0"}

MY_TASK_INDEX=${MA_TASK_INDEX:-${VC_TASK_INDEX:-${VK_TASK_INDEX}}}

MY_MPI_SLOTS=${MY_MPI_SLOTS:-"${MA_NUM_GPUS}"}

MY_MPI_TUNE_FILE="${MY_HOME}/env_for_user_process"

if [ -z "${MY_MPI_SLOTS}" ]; then
    echo "[run_mpi] MY_MPI_SLOTS is empty, set it to 1"
    MY_MPI_SLOTS="1"
fi

printf "MY_HOME: ${MY_HOME}\nMY_SSHD_PORT: ${MY_SSHD_PORT}\nMY_MPI_BTL_TCP_IF: ${MY_MPI_BTL_TCP_IF}\nMY_TASK_INDEX: ${MY_TASK_INDEX}\nMY_MPI_SLOTS: ${MY_MPI_SLOTS}\n"

env | grep -E '^MA_|SHARED_|^S3_|^PATH|^VC_WORKER_|^SCC|^CRED' | grep -v '=$' > ${MY_MPI_TUNE_FILE}
# add -x to each line
sed -i 's/^/-x /' ${MY_MPI_TUNE_FILE}

sed -i "s|{{MY_SSHD_PORT}}|${MY_SSHD_PORT}|g" ${MY_HOME}/etc/ssh/sshd_config

# start sshd service
bash -c "$(which sshd) -f ${MY_HOME}/etc/ssh/sshd_config"

# confirm the sshd is up
netstat -anp | grep LIS | grep ${MY_SSHD_PORT}

if [ $MY_TASK_INDEX -eq 0 ]; then
    # generate the hostfile of mpi
    for ((i=0; i<$MA_NUM_HOSTS; i++))
    do
        eval hostname=${MA_VJ_NAME}-${MA_TASK_NAME}-${i}.${MA_VJ_NAME}
        echo "[run_mpi] hostname: ${hostname}"

        ip=""
        while [ -z "$ip" ]; do
            ip=$(ping -c 1 ${hostname} | grep "PING" | sed -E 's/PING .* .([0-9.]+). .*/\1/g')
            sleep 1
        done
        echo "[run_mpi] resolved ip: ${ip}"

        # test the sshd is up
        while :
        do
            if cat < /dev/null >/dev/tcp/${ip}/${MY_SSHD_PORT}; then
                break
            fi
            sleep 1
        done

        echo "[run_mpi] the sshd of ip ${ip} is up"

        echo "${ip} slots=$MY_MPI_SLOTS" >> ${MY_HOME}/hostfile
    done

    printf "[run_mpi] hostfile:\n`cat ${MY_HOME}/hostfile`\n"
fi

RET_CODE=0

if [ $MY_TASK_INDEX -eq 0 ]; then

    echo "[run_mpi] start exec command time: "$(date +"%Y-%m-%d-%H:%M:%S")

    np=$(( ${MA_NUM_HOSTS} * ${MY_MPI_SLOTS} ))

    echo "[run_mpi] command: mpirun -np ${np} -hostfile ${MY_HOME}/hostfile -mca plm_rsh_args \"-p ${MY_SSHD_PORT}\" -tune ${MY_MPI_TUNE_FILE} ... $@"

    # execute mpirun at worker-0
    # mpirun
    mpirun \
        -np ${np} \
        -hostfile ${MY_HOME}/hostfile \
        -mca plm_rsh_args "-p ${MY_SSHD_PORT}" \
        -tune ${MY_MPI_TUNE_FILE} \
        -bind-to none -map-by slot \
        -x NCCL_DEBUG=INFO -x NCCL_SOCKET_IFNAME=${MY_MPI_BTL_TCP_IF} -x NCCL_SOCKET_FAMILY=AF_INET \
        -x HOROVOD_MPI_THREADS_DISABLE=1 \
        -x LD_LIBRARY_PATH \
        -mca pml ob1 -mca btl ^openib -mca plm_rsh_no_tree_spawn true \
        "$@"

    RET_CODE=$?

    if [ $RET_CODE -ne 0 ]; then
        echo "[run_mpi] exec command failed, exited with $RET_CODE"
    else
        echo "[run_mpi] exec command successfully, exited with $RET_CODE"
    fi

    # stop 1...N worker by killing the sleep proc
    sed -i '1d' ${MY_HOME}/hostfile
    if [ `cat ${MY_HOME}/hostfile | wc -l` -ne 0 ]; then
        echo "[run_mpi] stop 1 to (N - 1) worker by killing the sleep proc"

        sed -i "s/slots=${MY_MPI_SLOTS}/slots=1/g" ${MY_HOME}/hostfile
        printf "[run_mpi] hostfile:\n`cat ${MY_HOME}/hostfile`\n"

        mpirun \
        --hostfile ${MY_HOME}/hostfile \
        --mca btl_tcp_if_include ${MY_MPI_BTL_TCP_IF} \
        --mca plm_rsh_args "-p ${MY_SSHD_PORT}" \
        -x PATH -x LD_LIBRARY_PATH \
        pkill sleep \
        > /dev/null 2>&1
    fi

    echo "[run_mpi] exit time: "$(date +"%Y-%m-%d-%H:%M:%S")
else
    echo "[run_mpi] the training log is in worker-0"
    sleep 365d
    echo "[run_mpi] exit time: "$(date +"%Y-%m-%d-%H:%M:%S")
fi

exit $RET_CODE
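
For reference, the hostfile that worker-0 generates contains one line per node in the form "<ip> slots=<GPUs per node>". With two nodes of eight GPUs each, it would look like the following (the IP addresses are hypothetical):

192.168.0.10 slots=8
192.168.0.11 slots=8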

Step 3 Preparing a Server

Obtain a Linux x86_64 server running Ubuntu 18.04. Either an ECS or your local PC will do.

For details about how to purchase an ECS, see Purchasing and Logging In to a Linux ECS. Set CPU Architecture to x86 and Image to Public image. Ubuntu 18.04 images are recommended.

Step 4 Creating a Custom Image

Create a container image with the following configurations and use the image to create a training job on ModelArts:

  • ubuntu-18.04
  • cuda-11.1
  • python-3.7.13
  • mlnx ofed-5.4
  • pytorch-1.8.1
  • horovod-0.22.1

The following describes how to create a custom image by writing a Dockerfile.

  1. Install Docker.

    The following uses a Linux x86_64 host as an example to describe how to obtain the Docker installation package. For details about how to install Docker, see the official Docker documentation. Run the following commands to install Docker:

    curl -fsSL get.docker.com -o get-docker.sh
    sh get-docker.sh

    If the docker images command runs successfully, Docker is already installed. In that case, skip this step.

  2. Check the Docker Engine version. Run the following command:
    docker version | grep -A 1 Engine
    Information similar to the following is displayed:
     Engine:
      Version:          18.09.0

    Use Docker Engine of this version or later to create the custom image.

  3. Create a folder named context.
    mkdir -p context
  4. Obtain the pip.conf file. In this example, the pip source provided by Huawei Mirrors is used, which is as follows:
    [global]
    index-url = https://repo.huaweicloud.com/repository/pypi/simple
    trusted-host = repo.huaweicloud.com
    timeout = 120

    To obtain pip.conf, go to Huawei Mirrors at https://mirrors.huaweicloud.com/home and search for pypi.
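
    For convenience, the file can be written directly into the context folder created in the previous step (a shell sketch; any text editor works equally well):

    # Write pip.conf into the context folder (equivalent to creating it with an editor).
    printf '[global]\nindex-url = https://repo.huaweicloud.com/repository/pypi/simple\ntrusted-host = repo.huaweicloud.com\ntimeout = 120\n' > context/pip.conf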

  5. Download the source Horovod code file.

    Download horovod-0.22.1.tar.gz from https://pypi.org/project/horovod/0.22.1/#files.

  6. Download the torch*.whl files.

    Download the following .whl files from https://download.pytorch.org/whl/torch_stable.html:

    • torch-1.8.1+cu111-cp37-cp37m-linux_x86_64.whl
    • torchaudio-0.8.1-cp37-cp37m-linux_x86_64.whl
    • torchvision-0.9.1+cu111-cp37-cp37m-linux_x86_64.whl

    The URL-encoded form of the plus sign (+) is %2B. When searching for files on the preceding website, replace the plus sign (+) in the file name with %2B, for example, torch-1.8.1%2Bcu111-cp37-cp37m-linux_x86_64.whl.

  7. Download the Miniconda3 installation file.

    Download the Miniconda3 py37 4.12.0 installation file (Python 3.7.13) from https://repo.anaconda.com/miniconda/Miniconda3-py37_4.12.0-Linux-x86_64.sh.
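
    The files in steps 5 to 7 can also be fetched from the command line into the context folder. This is a sketch; the cu111 wheel URLs are inferred from the links on torch_stable.html and should be verified before use:

    # Miniconda3 installation file (URL from this step)
    wget -P context https://repo.anaconda.com/miniconda/Miniconda3-py37_4.12.0-Linux-x86_64.sh
    # Horovod source tarball; pip resolves the exact PyPI download URL
    pip download horovod==0.22.1 --no-deps --no-binary :all: -d context
    # torch wheels; note the %2B encoding of the plus sign
    wget -P context https://download.pytorch.org/whl/cu111/torch-1.8.1%2Bcu111-cp37-cp37m-linux_x86_64.whl
    wget -P context https://download.pytorch.org/whl/cu111/torchvision-0.9.1%2Bcu111-cp37-cp37m-linux_x86_64.whl
    wget -P context https://download.pytorch.org/whl/torchaudio-0.8.1-cp37-cp37m-linux_x86_64.whl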

  8. Write the container image Dockerfile.
    Create an empty file named Dockerfile in the context folder and copy the following content to the file:
    # The server on which the container image is created must access the Internet.
    
    # Base container image at https://github.com/NVIDIA/nvidia-docker/wiki/CUDA
    #
    # https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
    # require Docker Engine >= 17.05
    #
    # builder stage
    FROM nvidia/cuda:11.1.1-devel-ubuntu18.04 AS builder
    
    # Install CMake obtained from Huawei Mirrors.
    RUN cp -a /etc/apt/sources.list /etc/apt/sources.list.bak && \
        sed -i "s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list && \
        sed -i "s@http://.*security.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list && \
        echo > /etc/apt/apt.conf.d/00skip-verify-peer.conf "Acquire { https::Verify-Peer false }" && \
        apt-get update && \
        apt-get install -y build-essential cmake g++-7 && \
        apt-get clean && \
        mv /etc/apt/sources.list.bak /etc/apt/sources.list && \
        rm /etc/apt/apt.conf.d/00skip-verify-peer.conf
    
    # The default user of the base container image is root.
    # USER root
    
    # Use the PyPI configuration obtained from Huawei Mirrors.
    RUN mkdir -p /root/.pip/
    COPY pip.conf /root/.pip/pip.conf
    
    # Copy the installation files to the /tmp directory in the base container image.
    COPY Miniconda3-py37_4.12.0-Linux-x86_64.sh /tmp
    COPY torch-1.8.1+cu111-cp37-cp37m-linux_x86_64.whl /tmp
    COPY torchvision-0.9.1+cu111-cp37-cp37m-linux_x86_64.whl /tmp
    COPY torchaudio-0.8.1-cp37-cp37m-linux_x86_64.whl /tmp
    COPY openmpi-3.0.0-bin.tar.gz /tmp
    COPY horovod-0.22.1.tar.gz /tmp
    
    # https://conda.io/projects/conda/en/latest/user-guide/install/linux.html#installing-on-linux
    # Install Miniconda3 in the /home/ma-user/miniconda3 directory of the base container image.
    RUN bash /tmp/Miniconda3-py37_4.12.0-Linux-x86_64.sh -b -p /home/ma-user/miniconda3
    
    # Install the Open MPI 3.0.0 file obtained from Horovod v0.22.1.
    # https://github.com/horovod/horovod/blob/v0.22.1/docker/horovod/Dockerfile
    # https://github.com/horovod/horovod/files/1596799/openmpi-3.0.0-bin.tar.gz
    RUN cd /usr/local && \
        tar -zxf /tmp/openmpi-3.0.0-bin.tar.gz && \
        ldconfig && \
        mpirun --version
    
    # Environment variables required for building Horovod with PyTorch
    ENV HOROVOD_NCCL_INCLUDE=/usr/include \
        HOROVOD_NCCL_LIB=/usr/lib/x86_64-linux-gnu \
        HOROVOD_MPICXX_SHOW="/usr/local/openmpi/bin/mpicxx -show" \
        HOROVOD_GPU_OPERATIONS=NCCL \
        HOROVOD_WITH_PYTORCH=1
    
    # Install the .whl files using the default Miniconda3 Python environment /home/ma-user/miniconda3/bin/pip.
    RUN cd /tmp && \
        /home/ma-user/miniconda3/bin/pip install --no-cache-dir \
        /tmp/torch-1.8.1+cu111-cp37-cp37m-linux_x86_64.whl \
        /tmp/torchvision-0.9.1+cu111-cp37-cp37m-linux_x86_64.whl \
        /tmp/torchaudio-0.8.1-cp37-cp37m-linux_x86_64.whl
    
    # Build and install horovod-0.22.1.tar.gz using the default Miniconda3 Python environment /home/ma-user/miniconda3/bin/pip.
    RUN cd /tmp && \
        /home/ma-user/miniconda3/bin/pip install --no-cache-dir \
        /tmp/horovod-0.22.1.tar.gz
    
    # Create the container image.
    FROM nvidia/cuda:11.1.1-runtime-ubuntu18.04
    
    COPY MLNX_OFED_LINUX-5.4-3.5.8.0-ubuntu18.04-x86_64.tgz /tmp
    
    # Install the vim, cURL, net-tools, MLNX_OFED, and SSH tools obtained from Huawei Mirrors.
    RUN cp -a /etc/apt/sources.list /etc/apt/sources.list.bak && \
        sed -i "s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list && \
        sed -i "s@http://.*security.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list && \
        echo > /etc/apt/apt.conf.d/00skip-verify-peer.conf "Acquire { https::Verify-Peer false }" && \
        apt-get update && \
        apt-get install -y vim curl net-tools iputils-ping libfile-find-rule-perl-perl \
        openssh-client openssh-server && \
        ssh -V && \
        mkdir -p /run/sshd && \
        # mlnx ofed
        apt-get install -y python libfuse2 dpatch libnl-3-dev autoconf libnl-route-3-dev pciutils libnuma1 libpci3 m4 libelf1 debhelper automake graphviz bison lsof kmod libusb-1.0-0 swig libmnl0 autotools-dev flex chrpath libltdl-dev && \
        cd /tmp && \
        tar -xvf MLNX_OFED_LINUX-5.4-3.5.8.0-ubuntu18.04-x86_64.tgz && \
        MLNX_OFED_LINUX-5.4-3.5.8.0-ubuntu18.04-x86_64/mlnxofedinstall --user-space-only --basic --without-fw-update -q && \
        cd - && \
        rm -rf /tmp/* && \
        apt-get clean && \
        mv /etc/apt/sources.list.bak /etc/apt/sources.list && \
        rm /etc/apt/apt.conf.d/00skip-verify-peer.conf
    
    # Install the Open MPI 3.0.0 file obtained from Horovod v0.22.1.
    # https://github.com/horovod/horovod/blob/v0.22.1/docker/horovod/Dockerfile
    # https://github.com/horovod/horovod/files/1596799/openmpi-3.0.0-bin.tar.gz
    COPY openmpi-3.0.0-bin.tar.gz /tmp
    RUN cd /usr/local && \
        tar -zxf /tmp/openmpi-3.0.0-bin.tar.gz && \
        ldconfig && \
        mpirun --version
    
    # Add user ma-user (UID = 1000, GID = 100).
    # A user group with GID 100 already exists in the base container image, so user ma-user can be added directly with the following command:
    RUN useradd -m -d /home/ma-user -s /bin/bash -g 100 -u 1000 ma-user
    
    # Copy the /home/ma-user/miniconda3 directory from the builder stage to the directory with the same name in the current container image.
    COPY --chown=ma-user:100 --from=builder /home/ma-user/miniconda3 /home/ma-user/miniconda3
    
    # Configure the default user and working directory of the container image.
    USER ma-user
    WORKDIR /home/ma-user
    
    # Configure sshd to support SSH password-free login.
    RUN MA_HOME=/home/ma-user && \
        # setup sshd dir
        mkdir -p ${MA_HOME}/etc && \
        ssh-keygen -f ${MA_HOME}/etc/ssh_host_rsa_key -N '' -t rsa  && \
        mkdir -p ${MA_HOME}/etc/ssh ${MA_HOME}/var/run  && \
        # setup sshd config (listen at {{MY_SSHD_PORT}} port)
        echo "Port {{MY_SSHD_PORT}}\n\
    HostKey ${MA_HOME}/etc/ssh_host_rsa_key\n\
    AuthorizedKeysFile ${MA_HOME}/.ssh/authorized_keys\n\
    PidFile ${MA_HOME}/var/run/sshd.pid\n\
    StrictModes no\n\
    UsePAM no" > ${MA_HOME}/etc/ssh/sshd_config && \
        # generate ssh key
        ssh-keygen -t rsa -f ${MA_HOME}/.ssh/id_rsa -P '' && \
        cat ${MA_HOME}/.ssh/id_rsa.pub >> ${MA_HOME}/.ssh/authorized_keys && \
        # disable ssh host key checking for all hosts
        echo "Host *\n\
      StrictHostKeyChecking no" > ${MA_HOME}/.ssh/config
    
    # Configure the preset environment variables of the container image.
    # Set PYTHONUNBUFFERED to 1 to prevent log loss.
    ENV PATH=/home/ma-user/miniconda3/bin:$PATH \
        PYTHONUNBUFFERED=1

    For details about how to write a Dockerfile, see the official Docker documentation.

  9. Download the MLNX_OFED installation package.

    Go to Linux Drivers. On the Download tab, installation packages are available under Current Versions and Archive Versions. In this example, choose Archive Versions, set Version to 5.4-3.5.8.0-LTS, OS Distribution to Ubuntu, OS Distribution Version to Ubuntu 18.04, and Architecture to x86_64, and then download the MLNX_OFED_LINUX-5.4-3.5.8.0-ubuntu18.04-x86_64.tgz installation package.

  10. Download openmpi-3.0.0-bin.tar.gz.

    Download openmpi-3.0.0-bin.tar.gz from https://github.com/horovod/horovod/files/1596799/openmpi-3.0.0-bin.tar.gz.

  11. Store the Dockerfile and all the files obtained in the preceding steps (pip.conf, the torch*.whl files, and the Miniconda3, MLNX_OFED, Open MPI, and Horovod packages) in the context folder, which looks as follows:
    context
    ├── Dockerfile
    ├── MLNX_OFED_LINUX-5.4-3.5.8.0-ubuntu18.04-x86_64.tgz
    ├── Miniconda3-py37_4.12.0-Linux-x86_64.sh
    ├── horovod-0.22.1.tar.gz
    ├── openmpi-3.0.0-bin.tar.gz
    ├── pip.conf
    ├── torch-1.8.1+cu111-cp37-cp37m-linux_x86_64.whl
    ├── torchaudio-0.8.1-cp37-cp37m-linux_x86_64.whl
    └── torchvision-0.9.1+cu111-cp37-cp37m-linux_x86_64.whl
  12. Create the container image. Run the following command in the directory where the Dockerfile is stored to build the container image horovod-pytorch:0.22.1-1.8.1-ofed-cuda11.1:
    docker build . -t horovod-pytorch:0.22.1-1.8.1-ofed-cuda11.1
    
    The following log shows that the image has been created.
    Successfully tagged horovod-pytorch:0.22.1-1.8.1-ofed-cuda11.1
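
    Optionally, verify the new image before uploading it. This check is a sketch; it assumes the build host has an NVIDIA GPU and the NVIDIA Container Toolkit installed. On a CPU-only host, drop --gpus all (torch.cuda.is_available() will then report False):

    docker run --rm --gpus all horovod-pytorch:0.22.1-1.8.1-ofed-cuda11.1 \
        /home/ma-user/miniconda3/bin/python -c \
        "import torch, horovod.torch; print(torch.__version__, horovod.__version__, torch.cuda.is_available())"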

Step 5 Uploading the Image to SWR

  1. Log in to the SWR console and select a region. It must be in the same region as ModelArts; otherwise, the image cannot be selected for training.
  2. Click Create Organization in the upper right corner and enter an organization name to create one. The name is customizable; replace the organization name deep-learning in subsequent commands with your actual organization name.
  3. Click Generate Login Command in the upper right corner to obtain the login command. In this example, the temporary login command is copied.
  4. Log in to the local environment as user root and enter the copied temporary login command.
  5. Upload the image to SWR.
    1. Tag the uploaded image.
      # Replace the region, domain, and organization name deep-learning with the actual values.
      sudo docker tag horovod-pytorch:0.22.1-1.8.1-ofed-cuda11.1 swr.{region-id}.{domain}/deep-learning/horovod-pytorch:0.22.1-1.8.1-ofed-cuda11.1
    2. Run the following command to upload the image:
      # Replace the region, domain, and organization name deep-learning with the actual values.
      sudo docker push swr.{region-id}.{domain}/deep-learning/horovod-pytorch:0.22.1-1.8.1-ofed-cuda11.1
  6. After the image is uploaded, choose My Images in the navigation pane on the left of the SWR console to view the uploaded custom image.

Step 6 Creating a Training Job on ModelArts

  1. Log in to the ModelArts management console and check whether access authorization has been configured for your account. For details, see Configuring Agency Authorization. If you were authorized using access keys, clear that authorization and configure agency authorization instead.
  2. In the navigation pane on the left, choose Model Training > Training Jobs. The training job list is displayed by default.
  3. Click Create Training Job. On the page that is displayed, configure parameters and click Next.
    • Created By: Custom algorithms
    • Boot Mode: Custom images
    • Image Path: the image uploaded in Step 5 Uploading the Image to SWR.
    • Code Directory: directory where the boot script file is stored in OBS, for example, obs://test-modelarts/pytorch/demo-code/. The training code is automatically downloaded to the ${MA_JOB_DIR}/demo-code directory of the training container. demo-code (customizable) is the last-level directory of the OBS path.
    • Boot Command: bash ${MA_JOB_DIR}/demo-code/run_mpi.sh python ${MA_JOB_DIR}/demo-code/pytorch_synthetic_benchmark.py. demo-code (customizable) is the last-level directory of the OBS path.
    • Environment Variable: Click Add Environment Variable and add the environment variable MY_SSHD_PORT=38888.
    • Resource Pool: Select Public resource pools.
    • Resource Type: Select GPU.
    • Compute Nodes: 1 or 2
    • Persistent Log Saving: enabled
    • Job Log Path: OBS path for storing training logs, for example, obs://test-modelarts/pytorch/log/
  4. Confirm the configurations of the training job and click Submit.
  5. Wait until the training job is created.

    After you submit the job creation request, the system automatically performs backend operations, such as downloading the container image and code directory and running the boot command. A training job takes some time to run, from tens of minutes to several hours, depending on the service logic and the selected resources. After the training job completes, logs similar to the following are output.

    Figure 1 Run logs of training jobs with GPU specifications (one compute node)
    Figure 2 Run logs of training jobs with GPU specifications (two compute nodes)
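
    The figures are not reproduced here. Based on the log statements in pytorch_synthetic_benchmark.py, the tail of the worker-0 log has the following shape (the throughput values are illustrative placeholders, not measured results):

    Iter #9: 355.2 img/sec per GPU
    Img/sec per GPU: 356.0 +-3.2
    Total img/sec on 8 GPU(s): 2848.1 +-25.9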