
Built-in OpenMPI of the IB Driver

Scenarios

This section describes how to install and use the OpenMPI that is bundled with the IB driver (version 3.0.0rc6 is used as an example).

Prerequisites

You have configured password-free login for the ECS.
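
A quick way to verify this prerequisite is to open an SSH connection in batch mode (192.168.0.2 below is only a placeholder for the peer ECS IP address):

# ssh -o BatchMode=yes 192.168.0.2 hostname   # replace 192.168.0.2 with the peer ECS IP

If the remote hostname is printed without a password prompt, password-free login is working.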

Procedure

  1. Check whether the IB driver has been installed.

    1. Use PuTTY and a key pair to log in to the ECS.
    2. Run the following command to switch to user root:

      $ sudo su

    3. Run the following command to disable user logout upon system timeout:

      # TMOUT=0

    4. Run the following commands to check whether the IB driver has been installed:

      # rpm -qa | grep mlnx-ofa

      # ls /usr/mpi/gcc/openmpi-3.0.0rc6/bin/mpirun

      Figure 1 Command output indicating that the IB driver has been installed
      • If the output of the preceding two commands matches that shown in Figure 1, the IB driver has been installed. Go to 3.
      • If the output differs from that shown in Figure 1, the IB driver is not installed. Go to 2.

      A scripted version of this check is sketched below.
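
      The following is a minimal sketch that combines the two checks above, assuming the mlnx-ofa package name and the OpenMPI 3.0.0rc6 path used in this section:

      #!/bin/bash
      # Minimal sketch: check for the Mellanox OFED packages and the bundled OpenMPI.
      MPIRUN=/usr/mpi/gcc/openmpi-3.0.0rc6/bin/mpirun   # adjust if your driver ships another OpenMPI version

      if rpm -qa | grep -q mlnx-ofa && [ -x "$MPIRUN" ]; then
          echo "IB driver and built-in OpenMPI found. Continue with 3."
      else
          echo "IB driver not found. Install it as described in 2."
      fi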

  2. Download and install the IB driver.

    Download the required version of the InfiniBand NIC driver from the Mellanox official website (https://network.nvidia.com/products/infiniband-drivers/linux/mlnx_ofed/) and install it by following the instructions provided.

    For example, for an ECS running CentOS 7.3, download the MLNX_OFED_LINUX-4.2-1.2.0.0-rhel7.3-x86_64.tgz installation package and run the following commands to install the IB driver (a scripted version of these commands is sketched at the end of this step):

    # yum install tk tcl

    # tar -xvf MLNX_OFED_LINUX-4.2-1.2.0.0-rhel7.3-x86_64.tgz

    # cd MLNX_OFED_LINUX-4.2-1.2.0.0-rhel7.3-x86_64/

    # ./mlnxofedinstall
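
    The installation above can also be scripted. The following is a minimal sketch, assuming the CentOS 7.3 package from this example has already been downloaded to the current directory; only the package name changes for other OS or driver versions:

    #!/bin/bash
    # Minimal sketch: install the Mellanox OFED (IB) driver from the downloaded archive.
    set -e
    PKG=MLNX_OFED_LINUX-4.2-1.2.0.0-rhel7.3-x86_64   # change to match the package you downloaded

    yum install -y tk tcl        # dependencies of the installation script
    tar -xvf "${PKG}.tgz"        # unpack the downloaded archive
    cd "${PKG}/"
    ./mlnxofedinstall            # run the Mellanox installation script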

  3. Configure environment variables.

    1. Use the vim editor to open the ~/.bashrc file and add the following content:

      export PATH=$PATH:/usr/mpi/gcc/openmpi-3.0.0rc6/bin

      export LD_LIBRARY_PATH=/usr/mpi/gcc/openmpi-3.0.0rc6/lib64

    2. Run the following command to import MPI environment variables:

      # source ~/.bashrc

    3. Run the following command to check whether the MPI environment variables are correct:

      # which mpirun

      Figure 2 Checking MPI environment variables

      If the path to mpirun shown in Figure 2 is displayed, the environment variables are configured correctly. A scripted alternative to editing ~/.bashrc manually is sketched below.
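
      If you prefer not to edit ~/.bashrc interactively, the same configuration can be applied with a short script. A minimal sketch, assuming the OpenMPI 3.0.0rc6 path used above:

      #!/bin/bash
      # Minimal sketch: append the MPI environment variables to ~/.bashrc and reload it.
      echo 'export PATH=$PATH:/usr/mpi/gcc/openmpi-3.0.0rc6/bin' >> ~/.bashrc
      echo 'export LD_LIBRARY_PATH=/usr/mpi/gcc/openmpi-3.0.0rc6/lib64' >> ~/.bashrc

      source ~/.bashrc
      which mpirun   # should print /usr/mpi/gcc/openmpi-3.0.0rc6/bin/mpirun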

  4. Run the following command to run the Intel MPI Benchmarks (IMB) PingPong test on the ECS:

    # mpirun --allow-run-as-root -np 2 /usr/mpi/gcc/openmpi-3.0.0rc6/tests/imb/IMB-MPI1 PingPong

    Information similar to the following is displayed (a script that picks out the peak-bandwidth line from this output is sketched after the sample output):

    #------------------------------------------------------------
    #    Intel (R) MPI Benchmarks 4.1, MPI-1 part
    #------------------------------------------------------------
    # Date                  : Mon Jul 16 10:11:14 2018
    # Machine               : x86_64
    # System                : Linux
    # Release               : 3.10.0-514.10.2.el7.x86_64
    # Version               : #1 SMP Fri Mar 3 00:04:05 UTC 2017
    # MPI Version           : 3.1
    # MPI Thread Environment:
    
    # New default behavior from Version 3.2 on:
    
    # the number of iterations per message size is cut down
    # dynamically when a certain run time (per message size sample)
    # is expected to be exceeded. Time limit is defined by variable
    # "SECS_PER_SAMPLE" (=> IMB_settings.h)
    # or through the flag => -time
    
    
    # Calling sequence was:
    
    # /usr/mpi/gcc/openmpi-3.0.0rc6/tests/imb/IMB-MPI1 PingPong
    
    # Minimum message length in bytes:   0
    # Maximum message length in bytes:   4194304
    #
    # MPI_Datatype                   :   MPI_BYTE
    # MPI_Datatype for reductions    :   MPI_FLOAT
    # MPI_Op                         :   MPI_SUM
    #
    #
    
    # List of Benchmarks to run:
    
    # PingPong
    
    #---------------------------------------------------
    # Benchmarking PingPong
    # #processes = 2
    #---------------------------------------------------
           #bytes #repetitions      t[usec]   Mbytes/sec
                0         1000         0.24         0.00
                1         1000         0.25         3.89
                2         1000         0.23         8.17
                4         1000         0.23        16.25
                8         1000         0.23        32.48
               16         1000         0.23        65.98
               32         1000         0.26       115.35
               64         1000         0.26       232.92
              128         1000         0.38       320.59
              256         1000         0.44       554.35
              512         1000         0.54       902.98
             1024         1000         0.64      1537.63
             2048         1000         0.85      2298.79
             4096         1000         1.28      3057.93
             8192         1000         2.28      3426.14
            16384         1000         1.41     11052.14
            32768         1000         2.05     15218.39
            65536          640         3.31     18882.34
           131072          320         6.57     19036.27
           262144          160        15.12     16535.96
           524288           80        32.90     15195.74
          1048576           40        64.62     15476.02
          2097152           20       122.83     16282.06
          4194304           10       242.95     16463.95
    
    # All processes entering MPI_Finalize
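
    To quickly pick the peak-bandwidth result out of this output, the benchmark can be wrapped in a short script. A minimal sketch, assuming the OpenMPI 3.0.0rc6 path used above; it prints only the 4194304-byte row, whose last column is the large-message bandwidth in Mbytes/sec:

    #!/bin/bash
    # Minimal sketch: run the PingPong benchmark and print only the largest-message result line.
    MPI_HOME=/usr/mpi/gcc/openmpi-3.0.0rc6

    "${MPI_HOME}/bin/mpirun" --allow-run-as-root -np 2 \
        "${MPI_HOME}/tests/imb/IMB-MPI1" PingPong | awk '$1 == "4194304"'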