Updated on 2024-03-29 GMT+08:00

Example: Creating a Custom Image for Training (MPI + CPU/GPU)

This section describes how to create an image and use the image for training on the ModelArts platform. The AI engine used for training is MPI, and the resources are CPUs or GPUs.

This section applies only to training jobs of the new version.

Scenarios

This example creates a custom image by writing a Dockerfile on a Linux x86_64 host running the Ubuntu 18.04 operating system.

Objective: Build a container image containing the following software and use it with CPUs or GPUs for training on ModelArts.

  • ubuntu-18.04
  • cuda-11.1
  • python-3.7.13
  • openmpi-3.0.0

Procedure

Before using a custom image to create a training job, be familiar with Docker and have container development experience. The detailed procedure is as follows:

  1. Prerequisites
  2. Step 1 Creating an OBS Bucket and Folder
  3. Step 2 Preparing Script Files and Uploading Them to OBS
  4. Step 3 Preparing an Image Server
  5. Step 4 Creating a Custom Image
  6. Step 5 Uploading an Image to SWR
  7. Step 6 Creating a Training Job on ModelArts

Prerequisites

You have registered a Huawei ID and enabled Huawei Cloud services, and the account is not in arrears or frozen.

Step 1 Creating an OBS Bucket and Folder

Create a bucket and folders in OBS for storing the sample dataset and training code. Table 1 lists the folders to be created. Replace the bucket name and folder names in the example with actual names.

For details about how to create an OBS bucket and folder, see Creating a Bucket and Creating a Folder.

Ensure that the OBS directory you use and ModelArts are in the same region.

Table 1 Folders to create

Name                                  Description
obs://test-modelarts/mpi/demo-code/   Stores the MPI boot script and training script file.
obs://test-modelarts/mpi/log/         Stores training log files.

Step 2 Preparing Script Files and Uploading Them to OBS

Prepare the MPI boot script run_mpi.sh and training script mpi-verification.py and upload them to the obs://test-modelarts/mpi/demo-code/ folder of the OBS bucket.

  • The content of the MPI boot script run_mpi.sh is as follows:
    #!/bin/bash
    MY_HOME=/home/ma-user
    
    MY_SSHD_PORT=${MY_SSHD_PORT:-"38888"}
    
    MY_TASK_INDEX=${MA_TASK_INDEX:-${VC_TASK_INDEX:-${VK_TASK_INDEX}}}
    
    MY_MPI_SLOTS=${MY_MPI_SLOTS:-"${MA_NUM_GPUS}"}
    
    MY_MPI_TUNE_FILE="${MY_HOME}/env_for_user_process"
    
    if [ -z "${MY_MPI_SLOTS}" ]; then
        echo "[run_mpi] MY_MPI_SLOTS is empty, set it to 1"
        MY_MPI_SLOTS="1"
    fi
    
    printf "MY_HOME: ${MY_HOME}\nMY_SSHD_PORT: ${MY_SSHD_PORT}\nMY_MPI_BTL_TCP_IF: ${MY_MPI_BTL_TCP_IF}\nMY_TASK_INDEX: ${MY_TASK_INDEX}\nMY_MPI_SLOTS: ${MY_MPI_SLOTS}\n"
    
    env | grep -E '^MA_|SHARED_|^S3_|^PATH|^VC_WORKER_|^SCC|^CRED' | grep -v '=$' > ${MY_MPI_TUNE_FILE}
    # add -x to each line
    sed -i 's/^/-x /' ${MY_MPI_TUNE_FILE}
    
    sed -i "s|{{MY_SSHD_PORT}}|${MY_SSHD_PORT}|g" ${MY_HOME}/etc/ssh/sshd_config
    
    # start sshd service
    bash -c "$(which sshd) -f ${MY_HOME}/etc/ssh/sshd_config"
    
    # confirm the sshd is up
    netstat -anp | grep LIS | grep ${MY_SSHD_PORT}
    
    if [ $MY_TASK_INDEX -eq 0 ]; then
        # generate the hostfile of mpi
        for ((i=0; i<$MA_NUM_HOSTS; i++))
        do
            eval hostname=${MA_VJ_NAME}-${MA_TASK_NAME}-${i}.${MA_VJ_NAME}
            echo "[run_mpi] hostname: ${hostname}"
    
            ip=""
            while [ -z "$ip" ]; do
                ip=$(ping -c 1 ${hostname} | grep "PING" | sed -E 's/PING .* .([0-9.]+). .*/\1/g')
                sleep 1
            done
            echo "[run_mpi] resolved ip: ${ip}"
    
            # test the sshd is up
            while :
            do
                if cat < /dev/null > "/dev/tcp/${ip}/${MY_SSHD_PORT}" 2>/dev/null; then
                    break
                fi
                sleep 1
            done
    
            echo "[run_mpi] the sshd of ip ${ip} is up"
    
            echo "${ip} slots=$MY_MPI_SLOTS" >> ${MY_HOME}/hostfile
        done
    
        printf "[run_mpi] hostfile:\n`cat ${MY_HOME}/hostfile`\n"
    fi
    
    RET_CODE=0
    
    if [ $MY_TASK_INDEX -eq 0 ]; then
    
        echo "[run_mpi] start exec command time: "$(date +"%Y-%m-%d-%H:%M:%S")
    
        np=$(( ${MA_NUM_HOSTS} * ${MY_MPI_SLOTS} ))
    
        echo "[run_mpi] command: mpirun -np ${np} -hostfile ${MY_HOME}/hostfile -mca plm_rsh_args \"-p ${MY_SSHD_PORT}\" -tune ${MY_MPI_TUNE_FILE} ... $@"
    
        # execute mpirun at worker-0
        # mpirun
        mpirun \
            -np ${np} \
            -hostfile ${MY_HOME}/hostfile \
            -mca plm_rsh_args "-p ${MY_SSHD_PORT}" \
            -tune ${MY_MPI_TUNE_FILE} \
            -bind-to none -map-by slot \
            -x NCCL_DEBUG -x NCCL_SOCKET_IFNAME -x NCCL_IB_HCA -x NCCL_IB_TIMEOUT -x NCCL_IB_GID_INDEX -x NCCL_IB_TC \
            -x HOROVOD_MPI_THREADS_DISABLE=1 \
            -x PATH -x LD_LIBRARY_PATH \
            -mca pml ob1 -mca btl ^openib -mca plm_rsh_no_tree_spawn true \
            "$@"
    
        RET_CODE=$?
    
        if [ $RET_CODE -ne 0 ]; then
            echo "[run_mpi] exec command failed, exited with $RET_CODE"
        else
            echo "[run_mpi] exec command succeeded, exited with $RET_CODE"
        fi
    
        # stop 1...N worker by killing the sleep proc
        sed -i '1d' ${MY_HOME}/hostfile
        if [ `cat ${MY_HOME}/hostfile | wc -l` -ne 0 ]; then
            echo "[run_mpi] stop 1 to (N - 1) worker by killing the sleep proc"
    
            sed -i "s/slots=${MY_MPI_SLOTS}/slots=1/g" ${MY_HOME}/hostfile
            printf "[run_mpi] hostfile:\n`cat ${MY_HOME}/hostfile`\n"
    
            mpirun \
            --hostfile ${MY_HOME}/hostfile \
            --mca plm_rsh_args "-p ${MY_SSHD_PORT}" \
            -x PATH -x LD_LIBRARY_PATH \
            pkill sleep \
            > /dev/null 2>&1
        fi
    
        echo "[run_mpi] exit time: "$(date +"%Y-%m-%d-%H:%M:%S")
    else
        echo "[run_mpi] the training log is in worker-0"
        sleep 365d
        echo "[run_mpi] exit time: "$(date +"%Y-%m-%d-%H:%M:%S")
    fi
    
    exit $RET_CODE

    The script run_mpi.sh must use LF line endings. If CRLF line endings are used, the training job fails and the error "$'\r': command not found" is displayed in the logs.

  • The content of the training script mpi-verification.py is as follows:
    import os
    import socket
    
    if __name__ == '__main__':
        print(socket.gethostname())
    
        # https://www.open-mpi.org/faq/?category=running#mpi-environmental-variables
        print('OMPI_COMM_WORLD_SIZE: ' + os.environ['OMPI_COMM_WORLD_SIZE'])
        print('OMPI_COMM_WORLD_RANK: ' + os.environ['OMPI_COMM_WORLD_RANK'])
        print('OMPI_COMM_WORLD_LOCAL_RANK: ' + os.environ['OMPI_COMM_WORLD_LOCAL_RANK'])
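As noted above, both scripts must use LF line endings before being uploaded to OBS. A quick way to detect and fix CRLF endings is sketched below (a generic illustration; the demo.sh file name is a stand-in for your actual script):

```shell
# Create a demo script with Windows (CRLF) line endings to illustrate the fix.
printf '#!/bin/bash\r\necho hello\r\n' > demo.sh

# Detect a carriage return anywhere in the file, then strip the trailing
# \r from every line (same effect as dos2unix).
if grep -q $'\r' demo.sh; then
    sed -i 's/\r$//' demo.sh
    echo "converted to LF"
fi
```

After conversion, `bash demo.sh` runs without the "$'\r': command not found" error.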

Step 3 Preparing an Image Server

Obtain a Linux x86_64 server running Ubuntu 18.04. Either an ECS or your local PC will do.

For details about how to purchase an ECS, see Purchasing and Logging In to a Linux ECS. Select a public image. An Ubuntu 18.04 image is recommended.
Figure 1 Creating an ECS using a public image (x86)

Step 4 Creating a Custom Image

Objective: Build a container image containing the following software and run it on the ModelArts training service.

  • ubuntu-18.04
  • cuda-11.1
  • python-3.7.13
  • openmpi-3.0.0

The following describes how to create a custom image by writing a Dockerfile.

  1. Install Docker.

    The following uses a Linux x86_64 host as an example to describe how to install Docker. For more details, see the official Docker documentation. Run the following commands to install Docker:

    curl -fsSL get.docker.com -o get-docker.sh
    sh get-docker.sh

    If the docker images command already runs successfully, Docker is installed. In this case, skip this step.

  2. Check the Docker engine version. Run the following command:
    docker version | grep -A 1 Engine
    The following information is displayed:
     Engine:
      Version:          18.09.0

    You are advised to use Docker Engine of this version or later to create a custom image.

  3. Create a folder named context.
    mkdir -p context
  4. Download the Miniconda3 installation file.

    Download the Miniconda3 py37 4.12.0 installation file (Python 3.7.13) from https://repo.anaconda.com/miniconda/Miniconda3-py37_4.12.0-Linux-x86_64.sh.

  5. Download the openmpi 3.0.0 installation file.

    Download the Open MPI 3.0.0 archive built by Horovod v0.22.1 from https://github.com/horovod/horovod/files/1596799/openmpi-3.0.0-bin.tar.gz.

  6. Store the Miniconda3 and openmpi 3.0.0 files in the context folder. The following shows the context folder:
    context
    ├── Miniconda3-py37_4.12.0-Linux-x86_64.sh
    └── openmpi-3.0.0-bin.tar.gz
  7. Write the Dockerfile of the container image.
    Create an empty file named Dockerfile in the context folder and write the following content to the file:
    # The host must be connected to the public network for creating a container image.
    
    # Basic container image at https://github.com/NVIDIA/nvidia-docker/wiki/CUDA
    #
    # https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
    # require Docker Engine >= 17.05
    #
    # builder stage
    FROM nvidia/cuda:11.1.1-runtime-ubuntu18.04 AS builder
    
    # The default user of the basic container image is root.
    # USER root
    
    # Copy the Miniconda3 (Python 3.7.13) installation files to the /tmp directory of the basic container image.
    COPY Miniconda3-py37_4.12.0-Linux-x86_64.sh /tmp
    
    # Install Miniconda3 to the /home/ma-user/miniconda3 directory of the basic container image.
    # https://conda.io/projects/conda/en/latest/user-guide/install/linux.html#installing-on-linux
    RUN bash /tmp/Miniconda3-py37_4.12.0-Linux-x86_64.sh -b -p /home/ma-user/miniconda3
    
    # Create the final container image.
    FROM nvidia/cuda:11.1.1-runtime-ubuntu18.04
    
    # Install vim, cURL, net-tools, and the SSH client and server, using Huawei Mirrors.
    RUN cp -a /etc/apt/sources.list /etc/apt/sources.list.bak && \
        sed -i "s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list && \
        sed -i "s@http://.*security.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list && \
        echo > /etc/apt/apt.conf.d/00skip-verify-peer.conf "Acquire { https::Verify-Peer false }" && \
        apt-get update && \
        apt-get install -y vim curl net-tools iputils-ping \
        openssh-client openssh-server && \
        ssh -V && \
        mkdir -p /run/sshd && \
        apt-get clean && \
        mv /etc/apt/sources.list.bak /etc/apt/sources.list && \
        rm /etc/apt/apt.conf.d/00skip-verify-peer.conf
    
    # Install the Open MPI 3.0.0 archive built by Horovod v0.22.1.
    # https://github.com/horovod/horovod/blob/v0.22.1/docker/horovod/Dockerfile
    # https://github.com/horovod/horovod/files/1596799/openmpi-3.0.0-bin.tar.gz
    COPY openmpi-3.0.0-bin.tar.gz /tmp
    RUN cd /usr/local && \
        tar -zxf /tmp/openmpi-3.0.0-bin.tar.gz && \
        ldconfig && \
        mpirun --version
    
    # Add user ma-user (UID = 1000, GID = 100).
    # A user group with GID 100 already exists in the basic container image, so user ma-user can use it directly.
    RUN useradd -m -d /home/ma-user -s /bin/bash -g 100 -u 1000 ma-user
    
    # Copy the /home/ma-user/miniconda3 directory from the builder stage to the directory with the same name in the current container image.
    COPY --chown=ma-user:100 --from=builder /home/ma-user/miniconda3 /home/ma-user/miniconda3
    
    # Configure the preset environment variables of the container image.
    # Set PYTHONUNBUFFERED to 1 to avoid log loss.
    ENV PATH=$PATH:/home/ma-user/miniconda3/bin \
        PYTHONUNBUFFERED=1
    
    # Set the default user and working directory of the container image.
    USER ma-user
    WORKDIR /home/ma-user
    
    # Configure sshd to support SSH password-free login.
    RUN MA_HOME=/home/ma-user && \
        # setup sshd dir
        mkdir -p ${MA_HOME}/etc && \
        ssh-keygen -f ${MA_HOME}/etc/ssh_host_rsa_key -N '' -t rsa  && \
        mkdir -p ${MA_HOME}/etc/ssh ${MA_HOME}/var/run  && \
        # setup sshd config (listen at {{MY_SSHD_PORT}} port)
        echo "Port {{MY_SSHD_PORT}}\n\
    HostKey ${MA_HOME}/etc/ssh_host_rsa_key\n\
    AuthorizedKeysFile ${MA_HOME}/.ssh/authorized_keys\n\
    PidFile ${MA_HOME}/var/run/sshd.pid\n\
    StrictModes no\n\
    UsePAM no" > ${MA_HOME}/etc/ssh/sshd_config && \
        # generate ssh key
        ssh-keygen -t rsa -f ${MA_HOME}/.ssh/id_rsa -P '' && \
        cat ${MA_HOME}/.ssh/id_rsa.pub >> ${MA_HOME}/.ssh/authorized_keys && \
        # disable ssh host key checking for all hosts
        echo "Host *\n\
      StrictHostKeyChecking no" > ${MA_HOME}/.ssh/config

    For details about how to write a Dockerfile, see the official Docker documentation.

  8. Verify that the Dockerfile has been created. The following shows the context folder:
    context
    ├── Dockerfile
    ├── Miniconda3-py37_4.12.0-Linux-x86_64.sh
    └── openmpi-3.0.0-bin.tar.gz
  9. Create the container image. Run the following command in the directory where the Dockerfile is stored to build the container image mpi:3.0.0-cuda11.1:
    docker build . -t mpi:3.0.0-cuda11.1
    
    The following log information displayed during image creation indicates that the image has been created.
    naming to docker.io/library/mpi:3.0.0-cuda11.1
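The sshd_config written by the Dockerfile above deliberately contains a {{MY_SSHD_PORT}} placeholder; run_mpi.sh fills it in from the MY_SSHD_PORT environment variable when the job starts. The substitution boils down to the following standalone sketch (the /tmp/sshd_config_demo path is illustrative, not the path used in the image):

```shell
# Write a minimal config containing the same placeholder the Dockerfile bakes in.
cfg=/tmp/sshd_config_demo
printf 'Port {{MY_SSHD_PORT}}\nStrictModes no\nUsePAM no\n' > "$cfg"

# run_mpi.sh defaults the port to 38888 if the variable is unset.
MY_SSHD_PORT=${MY_SSHD_PORT:-38888}

# The same sed command run_mpi.sh uses to fill in the placeholder.
sed -i "s|{{MY_SSHD_PORT}}|${MY_SSHD_PORT}|g" "$cfg"

head -n 1 "$cfg"   # → Port 38888
```

This two-phase design lets the same image listen on any port the training job configures, without rebuilding.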

Step 5 Uploading an Image to SWR

  1. Log in to the SWR console and select the target region.
    Figure 2 SWR console
  2. Click Create Organization in the upper right corner and enter an organization name to create one. Use a custom name, and replace the organization name deep-learning in subsequent commands with your actual organization name.
    Figure 3 Creating an organization
  3. Click Generate Login Command in the upper right corner to obtain a login command.
    Figure 4 Login Command
  4. Log in to the local environment as the root user and enter the login command.
  5. Upload the image to SWR.
    1. Run the following command to tag the image:
      # Replace the region and domain information with the actual values, and replace the organization name deep-learning with your custom value.
      sudo docker tag mpi:3.0.0-cuda11.1 swr.cn-north-4.myhuaweicloud.com/deep-learning/mpi:3.0.0-cuda11.1
    2. Run the following command to upload the image:
      # Replace the region and domain information with the actual values, and replace the organization name deep-learning with your custom value.
      sudo docker push swr.cn-north-4.myhuaweicloud.com/deep-learning/mpi:3.0.0-cuda11.1
  6. After the image is uploaded, choose My Images in the left navigation pane of the SWR console to view the uploaded custom image.

    swr.cn-north-4.myhuaweicloud.com/deep-learning/mpi:3.0.0-cuda11.1 is the SWR URL of the custom image.
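The SWR URL follows the pattern <endpoint>/<organization>/<repository>:<tag>; when adapting the tag and push commands, only the region-specific endpoint and the organization change. A small parse with shell parameter expansion illustrates the structure:

```shell
# Split an SWR image URL into its components.
url="swr.cn-north-4.myhuaweicloud.com/deep-learning/mpi:3.0.0-cuda11.1"

endpoint=${url%%/*}   # region-specific SWR endpoint
rest=${url#*/}
org=${rest%%/*}       # organization created in SWR
image=${rest#*/}      # repository:tag

echo "endpoint=${endpoint}"
echo "org=${org}"
echo "image=${image}"
```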

Step 6 Creating a Training Job on ModelArts

  1. Log in to the ModelArts management console and check whether access authorization has been configured for your account. For details, see Configuring Agency Authorization. If you have been authorized using access keys, clear that authorization and then configure agency authorization.
  2. Log in to the ModelArts management console. In the left navigation pane, choose Training Management > Training Jobs (New).
  3. On the Create Training Job page, configure parameters and click Submit.
    • Created By: Custom algorithms
    • Boot Mode: Custom images
    • Image path: swr.cn-north-4.myhuaweicloud.com/deep-learning/mpi:3.0.0-cuda11.1
    • Code Directory: OBS path to the boot script, for example, obs://test-modelarts/mpi/demo-code/.
    • Boot Command: bash ${MA_JOB_DIR}/demo-code/run_mpi.sh python ${MA_JOB_DIR}/demo-code/mpi-verification.py
    • Environment Variable: Add MY_SSHD_PORT = 38888.
      Figure 5 Adding an environment variable
    • Resource Pool: Public resource pools
    • Resource Type: Select GPU.
    • Compute Nodes: Enter 1 or 2.
    • Persistent Log Saving: enabled
    • Job Log Path: Set this parameter to the OBS path for storing training logs, for example, obs://test-modelarts/mpi/log/.
  4. Check the parameters of the training job and click Submit.
  5. Wait until the training job is completed.

    After a training job is created, operations such as container image download, code directory download, and boot command execution are automatically performed in the backend. Generally, training takes from tens of minutes to several hours, depending on the training procedure and the selected resources. After the training job is executed, logs similar to the following are output.

    Figure 6 Run logs of worker-0 with one compute node and GPU specifications

    Set Compute Nodes to 2 and run the training job. Figure 7 and Figure 8 show the log information.

    Figure 7 Run logs of worker-0 with two compute nodes and GPU specifications
    Figure 8 Run logs of worker-1 with two compute nodes and GPU specifications
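The boot command configured above works because run_mpi.sh treats everything after its own name as the training command: those arguments arrive as "$@" and are appended unchanged to mpirun on worker-0. The argument flow can be sketched with a hypothetical stand-in for mpirun (fake_mpirun and run_mpi_sketch exist only for this illustration):

```shell
# Hypothetical stand-in for mpirun, used only to show the argument flow.
fake_mpirun() {
    echo "mpirun -np \${np} -hostfile \${hostfile} ... $*"
}

# Mimics run_mpi.sh: forward the training command ("$@") to mpirun unchanged.
run_mpi_sketch() {
    fake_mpirun "$@"
}

run_mpi_sketch python mpi-verification.py
# → mpirun -np ${np} -hostfile ${hostfile} ... python mpi-verification.py
```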