Creating a Custom Training Image (Tensorflow + GPU)
This section describes how to create an image and use it for training on ModelArts. The AI engine used in the image is TensorFlow, and the resources used for training are GPUs.
This section applies only to training jobs of the new version.
Scenario
This example shows how to write a Dockerfile to create a custom image on a Linux x86_64 server running Ubuntu 18.04.
Create a container image with the following configurations and use the image to create a GPU-powered training job on ModelArts:
- ubuntu-18.04
- cuda-11.2
- python-3.7.13
- mlnx ofed-5.4
- tensorflow gpu-2.10.0
Procedure
Before using a custom image to create a training job, you need to be familiar with Docker and have development experience.
- Prerequisites
- Step 1 Creating an OBS Bucket and Folder
- Step 2 Creating a Dataset and Uploading It to OBS
- Step 3 Preparing the Training Script and Uploading It to OBS
- Step 4 Preparing a Server
- Step 5 Creating a Custom Image
- Step 6 Uploading the Image to SWR
- Step 7 Creating a Training Job on ModelArts
Step 1 Creating an OBS Bucket and Folder
Create a bucket and folders in OBS for storing the sample dataset and training code. Table 1 lists the folders to be created. Replace the bucket name and folder names in the example with actual names.
For details about how to create an OBS bucket and folder, see Creating a Bucket and Creating a Folder.
Ensure that the OBS directory you use and ModelArts are in the same region.
Step 2 Creating a Dataset and Uploading It to OBS
Download mnist.npz from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz, and upload it to obs://test-modelarts/tensorflow/data/ in the OBS bucket.
Step 3 Preparing the Training Script and Uploading It to OBS
Obtain the training script mnist.py and upload it to obs://test-modelarts/tensorflow/code/ in the OBS bucket.
mnist.py is as follows:
import argparse
import tensorflow as tf

parser = argparse.ArgumentParser(description='TensorFlow quick start')
parser.add_argument('--data_url', type=str, default="./Data", help='path where the dataset is saved')
args = parser.parse_args()

mnist = tf.keras.datasets.mnist

(x_train, y_train), (x_test, y_test) = mnist.load_data(args.data_url)
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(10)
])

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
model.compile(optimizer='adam', loss=loss_fn, metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
Step 4 Preparing a Server
Obtain a Linux x86_64 server running Ubuntu 18.04. Either an ECS or your local PC will do.
For details about how to purchase an ECS, see Purchasing and Logging In to a Linux ECS. Set CPU Architecture to x86 and Image to Public image. Ubuntu 18.04 images are recommended.
Step 5 Creating a Custom Image
Create a container image with the following configurations and use the image to create a training job on ModelArts:
- ubuntu-18.04
- cuda-11.2
- python-3.7.13
- mlnx ofed-5.4
- tensorflow gpu-2.10.0
The following describes how to create a custom image by writing a Dockerfile.
- Install Docker.
The following uses the Linux x86_64 OS as an example to describe how to obtain the Docker installation package. For details about how to install Docker, see the official Docker documentation. Run the following commands to install Docker:
curl -fsSL get.docker.com -o get-docker.sh
sh get-docker.sh
If the docker images command runs successfully, Docker is already installed. In that case, skip this step.
- Check the Docker engine version. Run the following command:
docker version | grep -A 1 Engine
The following information is displayed:
Engine:
  Version: 18.09.0
Use the Docker engine of the preceding version or later to create a custom image.
- Create a folder named context.
mkdir -p context
- Obtain the pip.conf file. In this example, the pip source provided by Huawei Mirrors is used, which is as follows:
[global]
index-url = https://repo.huaweicloud.com/repository/pypi/simple
trusted-host = repo.huaweicloud.com
timeout = 120
To obtain pip.conf, go to Huawei Mirrors at https://mirrors.huaweicloud.com/home and search for pypi.
- Download tensorflow_gpu-2.10.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.
Download tensorflow_gpu-2.10.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl from https://pypi.org/project/tensorflow-gpu/2.10.0/#files.
- Download the Miniconda3 installation file.
Download the Miniconda3 py37 4.12.0 installation file (Python 3.7.13) from https://repo.anaconda.com/miniconda/Miniconda3-py37_4.12.0-Linux-x86_64.sh.
- Write the container image Dockerfile.
Create an empty file named Dockerfile in the context folder and copy the following content to the file:
# The server on which the container image is created must access the Internet.

# Base container image at https://github.com/NVIDIA/nvidia-docker/wiki/CUDA
#
# https://docs.docker.com/develop/develop-images/multistage-build/#use-multi-stage-builds
# require Docker Engine >= 17.05
#
# builder stage
FROM nvidia/cuda:11.2.2-cudnn8-runtime-ubuntu18.04 AS builder

# The default user of the base container image is root.
# USER root

# Use the PyPI configuration obtained from Huawei Mirrors.
RUN mkdir -p /root/.pip/
COPY pip.conf /root/.pip/pip.conf

# Copy the installation files to the /tmp directory in the base container image.
COPY Miniconda3-py37_4.12.0-Linux-x86_64.sh /tmp
COPY tensorflow_gpu-2.10.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl /tmp

# https://conda.io/projects/conda/en/latest/user-guide/install/linux.html#installing-on-linux
# Install Miniconda3 in the /home/ma-user/miniconda3 directory of the base container image.
RUN bash /tmp/Miniconda3-py37_4.12.0-Linux-x86_64.sh -b -p /home/ma-user/miniconda3

# Install the TensorFlow .whl file using the default Miniconda3 Python environment /home/ma-user/miniconda3/bin/pip.
RUN cd /tmp && \
    /home/ma-user/miniconda3/bin/pip install --no-cache-dir \
    /tmp/tensorflow_gpu-2.10.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl

RUN cd /tmp && \
    /home/ma-user/miniconda3/bin/pip install --no-cache-dir keras==2.10.0

# Create the container image.
FROM nvidia/cuda:11.2.2-cudnn8-runtime-ubuntu18.04

COPY MLNX_OFED_LINUX-5.4-3.5.8.0-ubuntu18.04-x86_64.tgz /tmp

# Install the vim, cURL, net-tools, and MLNX_OFED tools obtained from Huawei Mirrors.
RUN cp -a /etc/apt/sources.list /etc/apt/sources.list.bak && \
    sed -i "s@http://.*archive.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list && \
    sed -i "s@http://.*security.ubuntu.com@http://repo.huaweicloud.com@g" /etc/apt/sources.list && \
    echo > /etc/apt/apt.conf.d/00skip-verify-peer.conf "Acquire { https::Verify-Peer false }" && \
    apt-get update && \
    apt-get install -y vim curl net-tools iputils-ping && \
    # mlnx ofed
    apt-get install -y python libfuse2 dpatch libnl-3-dev autoconf libnl-route-3-dev pciutils libnuma1 libpci3 m4 libelf1 debhelper automake graphviz bison lsof kmod libusb-1.0-0 swig libmnl0 autotools-dev flex chrpath libltdl-dev && \
    cd /tmp && \
    tar -xvf MLNX_OFED_LINUX-5.4-3.5.8.0-ubuntu18.04-x86_64.tgz && \
    MLNX_OFED_LINUX-5.4-3.5.8.0-ubuntu18.04-x86_64/mlnxofedinstall --user-space-only --basic --without-fw-update -q && \
    cd - && \
    rm -rf /tmp/* && \
    apt-get clean && \
    mv /etc/apt/sources.list.bak /etc/apt/sources.list && \
    rm /etc/apt/apt.conf.d/00skip-verify-peer.conf

# Add user ma-user (UID = 1000, GID = 100).
# A user group whose GID is 100 exists in the base container image, so user ma-user can be created directly:
RUN useradd -m -d /home/ma-user -s /bin/bash -g 100 -u 1000 ma-user

# Copy the /home/ma-user/miniconda3 directory from the builder stage to the directory with the same name in the current container image.
COPY --chown=ma-user:100 --from=builder /home/ma-user/miniconda3 /home/ma-user/miniconda3

# Configure the default user and working directory of the container image.
USER ma-user
WORKDIR /home/ma-user

# Configure the preset environment variables of the container image.
# Set PYTHONUNBUFFERED to 1 to prevent log loss.
ENV PATH=/home/ma-user/miniconda3/bin:$PATH \
    LD_LIBRARY_PATH=/usr/local/cuda/lib64:/usr/lib/x86_64-linux-gnu:$LD_LIBRARY_PATH \
    PYTHONUNBUFFERED=1
For details about how to write a Dockerfile, see the official Docker documentation.
- Download MLNX_OFED_LINUX-5.4-3.5.8.0-ubuntu18.04-x86_64.tgz.
Go to Linux Drivers. In the Download tab, set Version to 5.4-3.5.8.0-LTS, OS Distribution Version to Ubuntu 18.04, Architecture to x86_64, and download MLNX_OFED_LINUX-5.4-3.5.8.0-ubuntu18.04-x86_64.tgz.
- Store the Dockerfile and the downloaded files in the context folder, whose contents are as follows:
context
├── Dockerfile
├── MLNX_OFED_LINUX-5.4-3.5.8.0-ubuntu18.04-x86_64.tgz
├── Miniconda3-py37_4.12.0-Linux-x86_64.sh
├── pip.conf
└── tensorflow_gpu-2.10.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
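Before running the build, it can help to confirm that every file the Dockerfile copies is actually present in the context folder, since docker build fails partway through otherwise. A minimal sketch, assuming the file names listed above; the check_context helper is illustrative and not part of the official procedure:

```shell
# Illustrative pre-build check: verify every file referenced by the Dockerfile
# exists in the given context directory before running docker build.
check_context() {
  dir="$1"
  rc=0
  for f in Dockerfile \
           MLNX_OFED_LINUX-5.4-3.5.8.0-ubuntu18.04-x86_64.tgz \
           Miniconda3-py37_4.12.0-Linux-x86_64.sh \
           pip.conf \
           tensorflow_gpu-2.10.0-cp37-cp37m-manylinux_2_17_x86_64.manylinux2014_x86_64.whl; do
    if [ ! -f "$dir/$f" ]; then
      echo "missing: $f"
      rc=1
    fi
  done
  return $rc
}

check_context context || echo "context folder is incomplete"
```

If anything is reported missing, repeat the corresponding download step before building.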
- Create the container image. Run the following command in the directory where the Dockerfile is stored to build the container image tensorflow:2.10.0-ofed-cuda11.2:
docker build . -t tensorflow:2.10.0-ofed-cuda11.2
The following log shows that the image has been created.
Successfully tagged tensorflow:2.10.0-ofed-cuda11.2
Step 6 Uploading the Image to SWR
- Log in to the SWR console and select a region. It must be the same region as ModelArts; otherwise, the image cannot be selected.
- Click Create Organization in the upper right corner and enter an organization name. The name is customizable; replace the organization name deep-learning in subsequent commands with your actual organization name.
- Click Generate Login Command in the upper right corner to obtain the login command. In this example, the temporary login command is copied.
- Log in to the local environment as user root and enter the copied temporary login command.
- Upload the image to SWR.
- Tag the image to be uploaded.
# Replace the region, domain, and the organization name deep-learning with the actual values.
sudo docker tag tensorflow:2.10.0-ofed-cuda11.2 swr.{region-id}.{domain}/deep-learning/tensorflow:2.10.0-ofed-cuda11.2
- Run the following command to upload the image:
# Replace the region, domain, and the organization name deep-learning with the actual values.
sudo docker push swr.{region-id}.{domain}/deep-learning/tensorflow:2.10.0-ofed-cuda11.2
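The target image reference used by the tag and push commands always has the same structure: swr.{region-id}.{domain}/{organization}/{image}:{tag}. A small sketch composing it from variables so the two commands stay consistent; the REGION_ID and DOMAIN values below are illustrative placeholders, not values from this document:

```shell
# Compose the SWR image reference from its parts.
# REGION_ID and DOMAIN are placeholder examples; use your actual region and domain.
REGION_ID="cn-north-4"
DOMAIN="myhuaweicloud.com"
ORG="deep-learning"
IMAGE="tensorflow:2.10.0-ofed-cuda11.2"

TARGET="swr.${REGION_ID}.${DOMAIN}/${ORG}/${IMAGE}"
echo "$TARGET"

# The tag and push commands then become:
# sudo docker tag "$IMAGE" "$TARGET"
# sudo docker push "$TARGET"
```

Keeping the reference in one variable avoids the common mistake of tagging with one name and pushing another.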
- After the image is uploaded, choose My Images in the navigation pane on the left of the SWR console to view the uploaded custom image.
Step 7 Creating a Training Job on ModelArts
- Log in to the ModelArts management console and check whether access authorization has been configured for your account. For details, see Configuring Agency Authorization for ModelArts with One Click. If you have been authorized using access keys, clear that authorization and then configure agency authorization.
- In the navigation pane on the left, choose Model Training > Training Jobs. The training job list is displayed by default.
- Click Create Training Job. On the page that is displayed, configure parameters and click Next.
- Created By: Custom algorithms
- Boot Mode: Custom images
- Image path: image created in Step 5 Creating a Custom Image.
- Code Directory: directory where the boot script file is stored in OBS, for example, obs://test-modelarts/tensorflow/code/. The training code is automatically downloaded to the ${MA_JOB_DIR}/code directory of the training container. code (customizable) is the last-level directory of the OBS path.
- Boot Command: python ${MA_JOB_DIR}/code/mnist.py. code (customizable) is the last-level directory of the OBS path.
- Training Input: Click Add Training Input. Set the name to data_url (it must match the --data_url argument that mnist.py parses), select the OBS path to mnist.npz, for example, obs://test-modelarts/tensorflow/data/mnist.npz, and set Obtained from to Hyperparameters.
- Resource Pool: Select Public resource pools.
- Resource Type: Select GPU.
- Compute Nodes: Enter 1.
- Persistent Log Saving: enabled
- Job Log Path: OBS path for storing training logs, for example, obs://test-modelarts/tensorflow/log/
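Because Obtained from is set to Hyperparameters, ModelArts passes the training input to the boot script as a command-line flag named after the training input, pointing at the local path to which the OBS file was downloaded inside the container. That flag name must match the argument the script declares. A minimal sketch of the parsing side; the container path below is illustrative, not a guaranteed location:

```python
# Sketch: how the boot script receives the training input. ModelArts builds a
# command line like `python mnist.py --data_url=<local copy of the OBS file>`;
# the path used here is illustrative only.
import argparse

parser = argparse.ArgumentParser(description='TensorFlow quick start')
parser.add_argument('--data_url', type=str, default="./Data",
                    help='path where the dataset is saved')

# Simulate the command line the platform would construct.
args = parser.parse_args(['--data_url', '/home/ma-user/input/data/mnist.npz'])
print(args.data_url)  # → /home/ma-user/input/data/mnist.npz
```

If the training input name does not match the declared argument, argparse rejects the unknown flag and the job fails at startup.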
- Confirm the configurations of the training job and click Submit.
- Wait until the training job is created.
After you submit the job creation request, the system automatically performs backend operations, such as downloading the container image and code directory and running the boot command. A training job takes a while to run, from tens of minutes to several hours depending on the service logic and the selected resources. After the training job is executed, logs similar to the following are output.
Figure 1 Run logs of training jobs with GPU specifications